00:00:00.000 Started by upstream project "autotest-spdk-master-vs-dpdk-v22.11" build number 1998
00:00:00.000 originally caused by:
00:00:00.001 Started by upstream project "nightly-trigger" build number 3264
00:00:00.001 originally caused by:
00:00:00.001 Started by timer
00:00:00.001 Started by timer
00:00:00.084 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.085 The recommended git tool is: git
00:00:00.085 using credential 00000000-0000-0000-0000-000000000002
00:00:00.086 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.115 Fetching changes from the remote Git repository
00:00:00.117 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.147 Using shallow fetch with depth 1
00:00:00.147 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.147 > git --version # timeout=10
00:00:00.171 > git --version # 'git version 2.39.2'
00:00:00.171 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.190 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.190 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:07.157 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:07.170 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:07.183 Checking out Revision 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d (FETCH_HEAD)
00:00:07.183 > git config core.sparsecheckout # timeout=10
00:00:07.194 > git read-tree -mu HEAD # timeout=10
00:00:07.213 > git checkout -f 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=5
00:00:07.233 Commit message: "inventory: add WCP3 to free inventory"
00:00:07.233 > git rev-list --no-walk 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=10
00:00:07.342 [Pipeline] Start of Pipeline
00:00:07.355 [Pipeline] library
00:00:07.356 Loading library shm_lib@master
00:00:07.356 Library shm_lib@master is cached. Copying from home.
00:00:07.372 [Pipeline] node
00:00:07.379 Running on GP11 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:07.381 [Pipeline] {
00:00:07.390 [Pipeline] catchError
00:00:07.391 [Pipeline] {
00:00:07.402 [Pipeline] wrap
00:00:07.411 [Pipeline] {
00:00:07.417 [Pipeline] stage
00:00:07.418 [Pipeline] { (Prologue)
00:00:07.651 [Pipeline] sh
00:00:07.933 + logger -p user.info -t JENKINS-CI
00:00:07.951 [Pipeline] echo
00:00:07.952 Node: GP11
00:00:07.958 [Pipeline] sh
00:00:08.249 [Pipeline] setCustomBuildProperty
00:00:08.261 [Pipeline] echo
00:00:08.262 Cleanup processes
00:00:08.267 [Pipeline] sh
00:00:08.538 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:08.538 1716942 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:08.548 [Pipeline] sh
00:00:08.822 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:08.822 ++ grep -v 'sudo pgrep'
00:00:08.822 ++ awk '{print $1}'
00:00:08.822 + sudo kill -9
00:00:08.822 + true
00:00:08.834 [Pipeline] cleanWs
00:00:08.842 [WS-CLEANUP] Deleting project workspace...
00:00:08.842 [WS-CLEANUP] Deferred wipeout is used...
00:00:08.848 [WS-CLEANUP] done
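
The "Cleanup processes" step above kills anything left over from a previous run before the workspace is wiped: pgrep lists matching processes, grep drops the pgrep invocation itself, awk keeps the PID column, and kill -9 removes the rest. A minimal standalone sketch of the same pattern (the script name and hard-coded workspace path are illustrative, not part of the SPDK scripts):

    #!/usr/bin/env bash
    # kill_stale.sh (hypothetical): SIGKILL leftover processes whose command
    # line mentions the workspace, tolerating the case where none exist.
    ws=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # pgrep -af prints "PID full-command-line" for every match.
    pids=$(sudo pgrep -af "$ws" | grep -v 'sudo pgrep' | awk '{print $1}')
    # With no stale PIDs, kill gets no arguments and fails; "|| true" keeps
    # the step green, mirroring the "+ true" in the log above.
    sudo kill -9 $pids || true
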
00:00:08.851 [Pipeline] setCustomBuildProperty
00:00:08.864 [Pipeline] sh
00:00:09.138 + sudo git config --global --replace-all safe.directory '*'
00:00:09.219 [Pipeline] httpRequest
00:00:09.259 [Pipeline] echo
00:00:09.261 Sorcerer 10.211.164.101 is alive
00:00:09.270 [Pipeline] httpRequest
00:00:09.274 HttpMethod: GET
00:00:09.275 URL: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz
00:00:09.276 Sending request to url: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz
00:00:09.289 Response Code: HTTP/1.1 200 OK
00:00:09.289 Success: Status code 200 is in the accepted range: 200,404
00:00:09.290 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz
00:00:16.388 [Pipeline] sh
00:00:16.674 + tar --no-same-owner -xf jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz
00:00:16.690 [Pipeline] httpRequest
00:00:16.725 [Pipeline] echo
00:00:16.726 Sorcerer 10.211.164.101 is alive
00:00:16.735 [Pipeline] httpRequest
00:00:16.740 HttpMethod: GET
00:00:16.740 URL: http://10.211.164.101/packages/spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz
00:00:16.741 Sending request to url: http://10.211.164.101/packages/spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz
00:00:16.760 Response Code: HTTP/1.1 200 OK
00:00:16.760 Success: Status code 200 is in the accepted range: 200,404
00:00:16.761 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz
00:00:52.459 [Pipeline] sh
00:00:52.734 + tar --no-same-owner -xf spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz
00:00:56.081 [Pipeline] sh
00:00:56.363 + git -C spdk log --oneline -n5
00:00:56.363 719d03c6a sock/uring: only register net impl if supported
00:00:56.363 e64f085ad vbdev_lvol_ut: unify usage of dummy base bdev
00:00:56.363 9937c0160 lib/rdma: bind TRACE_BDEV_IO_START/DONE to OBJECT_NVMF_RDMA_IO
00:00:56.363 6c7c1f57e accel: add sequence outstanding stat
00:00:56.363 3bc8e6a26 accel: add utility to put task
00:00:56.381 [Pipeline] withCredentials
00:00:56.392 > git --version # timeout=10
00:00:56.404 > git --version # 'git version 2.39.2'
00:00:56.418 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS
00:00:56.420 [Pipeline] {
00:00:56.429 [Pipeline] retry
00:00:56.430 [Pipeline] {
00:00:56.446 [Pipeline] sh
00:00:56.720 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4
00:00:56.991 [Pipeline] }
00:00:57.014 [Pipeline] // retry
00:00:57.019 [Pipeline] }
00:00:57.041 [Pipeline] // withCredentials
00:00:57.051 [Pipeline] httpRequest
00:00:57.070 [Pipeline] echo
00:00:57.072 Sorcerer 10.211.164.101 is alive
00:00:57.083 [Pipeline] httpRequest
00:00:57.087 HttpMethod: GET
00:00:57.088 URL: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:00:57.089 Sending request to url: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:00:57.090 Response Code: HTTP/1.1 200 OK
00:00:57.091 Success: Status code 200 is in the accepted range: 200,404
00:00:57.092 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:01:03.330 [Pipeline] sh
00:01:03.616 + tar --no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:01:05.533 [Pipeline] sh
00:01:05.813 + git -C dpdk log --oneline -n5
00:01:05.814 caf0f5d395 version: 22.11.4
00:01:05.814 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt"
00:01:05.814 dc9c799c7d vhost: fix missing spinlock unlock
00:01:05.814 4307659a90 net/mlx5: fix LACP redirection in Rx domain
00:01:05.814 6ef77f2a5e net/gve: fix RX buffer size alignment
00:01:05.825 [Pipeline] }
00:01:05.843 [Pipeline] // stage
00:01:05.853 [Pipeline] stage
00:01:05.856 [Pipeline] { (Prepare)
00:01:05.880 [Pipeline] writeFile
00:01:05.898 [Pipeline] sh
00:01:06.178 + logger -p user.info -t JENKINS-CI
00:01:06.190 [Pipeline] sh
00:01:06.510 + logger -p user.info -t JENKINS-CI
00:01:06.520 [Pipeline] sh
00:01:06.796 + cat autorun-spdk.conf
00:01:06.796 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:06.796 SPDK_TEST_NVMF=1
00:01:06.796 SPDK_TEST_NVME_CLI=1
00:01:06.796 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:06.796 SPDK_TEST_NVMF_NICS=e810
00:01:06.796 SPDK_TEST_VFIOUSER=1
00:01:06.796 SPDK_RUN_UBSAN=1
00:01:06.796 NET_TYPE=phy
00:01:06.796 SPDK_TEST_NATIVE_DPDK=v22.11.4
00:01:06.796 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:01:06.803 RUN_NIGHTLY=1
00:01:06.808 [Pipeline] readFile
00:01:06.835 [Pipeline] withEnv
00:01:06.837 [Pipeline] {
00:01:06.850 [Pipeline] sh
00:01:07.128 + set -ex
00:01:07.128 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:01:07.128 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:07.128 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:07.128 ++ SPDK_TEST_NVMF=1
00:01:07.128 ++ SPDK_TEST_NVME_CLI=1
00:01:07.128 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:07.128 ++ SPDK_TEST_NVMF_NICS=e810
00:01:07.128 ++ SPDK_TEST_VFIOUSER=1
00:01:07.128 ++ SPDK_RUN_UBSAN=1
00:01:07.128 ++ NET_TYPE=phy
00:01:07.128 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4
00:01:07.128 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:01:07.128 ++ RUN_NIGHTLY=1
00:01:07.128 + case $SPDK_TEST_NVMF_NICS in
00:01:07.128 + DRIVERS=ice
00:01:07.128 + [[ tcp == \r\d\m\a ]]
00:01:07.128 + [[ -n ice ]]
00:01:07.128 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:01:07.128 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:01:07.128 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:01:07.128 rmmod: ERROR: Module irdma is not currently loaded
00:01:07.128 rmmod: ERROR: Module i40iw is not currently loaded
00:01:07.128 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:01:07.128 + true
00:01:07.128 + for D in $DRIVERS
00:01:07.128 + sudo modprobe ice
00:01:07.128 + exit 0
00:01:07.137 [Pipeline] }
00:01:07.154 [Pipeline] // withEnv
00:01:07.158 [Pipeline] }
00:01:07.178 [Pipeline] // stage
00:01:07.186 [Pipeline] catchError
00:01:07.187 [Pipeline] {
00:01:07.196 [Pipeline] timeout
00:01:07.197 Timeout set to expire in 50 min
00:01:07.198 [Pipeline] {
00:01:07.208 [Pipeline] stage
00:01:07.209 [Pipeline] { (Tests)
00:01:07.219 [Pipeline] sh
00:01:07.495 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:07.495 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:07.495 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:07.495 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:01:07.495 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:07.495 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:07.495 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:01:07.495 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:07.495 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:07.495 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:07.495 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:01:07.495 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:07.495 + source /etc/os-release
00:01:07.495 ++ NAME='Fedora Linux'
00:01:07.495 ++ VERSION='38 (Cloud Edition)'
00:01:07.495 ++ ID=fedora
00:01:07.495 ++ VERSION_ID=38
00:01:07.495 ++ VERSION_CODENAME=
00:01:07.495 ++ PLATFORM_ID=platform:f38
00:01:07.495 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:01:07.495 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:07.495 ++ LOGO=fedora-logo-icon
00:01:07.495 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:01:07.495 ++ HOME_URL=https://fedoraproject.org/
00:01:07.495 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:01:07.495 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:07.495 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:07.495 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:07.495 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:01:07.495 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:07.495 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:01:07.495 ++ SUPPORT_END=2024-05-14
00:01:07.495 ++ VARIANT='Cloud Edition'
00:01:07.495 ++ VARIANT_ID=cloud
00:01:07.495 + uname -a
00:01:07.495 Linux spdk-gp-11 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:01:07.495 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:01:08.427 Hugepages
00:01:08.427 node hugesize free / total
00:01:08.427 node0 1048576kB 0 / 0
00:01:08.427 node0 2048kB 0 / 0
00:01:08.427 node1 1048576kB 0 / 0
00:01:08.427 node1 2048kB 0 / 0
00:01:08.427
00:01:08.427 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:08.427 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - -
00:01:08.427 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - -
00:01:08.427 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - -
00:01:08.427 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - -
00:01:08.428 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - -
00:01:08.428 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - -
00:01:08.428 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - -
00:01:08.428 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - -
00:01:08.428 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - -
00:01:08.428 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - -
00:01:08.428 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - -
00:01:08.428 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - -
00:01:08.428 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - -
00:01:08.428 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - -
00:01:08.428 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - -
00:01:08.428 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - -
00:01:08.428 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1
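
The setup.sh status output above reports free/total hugepages per NUMA node before any devices are bound. Those numbers come straight from standard Linux sysfs counters, so a sketch like the following (shown only to illustrate where the table's node/hugesize/free/total columns originate, not how setup.sh itself is written) reproduces them:

    #!/usr/bin/env bash
    # Print "node size free / total" for every hugepage pool, as in
    # "setup.sh status" above.
    for node in /sys/devices/system/node/node*; do
        for pool in "$node"/hugepages/hugepages-*; do
            size=${pool##*hugepages-}            # e.g. 2048kB or 1048576kB
            free=$(cat "$pool/free_hugepages")
            total=$(cat "$pool/nr_hugepages")
            echo "$(basename "$node") $size $free / $total"
        done
    done
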
00:01:08.428 + rm -f /tmp/spdk-ld-path
00:01:08.428 + source autorun-spdk.conf
00:01:08.428 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:08.428 ++ SPDK_TEST_NVMF=1
00:01:08.428 ++ SPDK_TEST_NVME_CLI=1
00:01:08.428 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:08.428 ++ SPDK_TEST_NVMF_NICS=e810
00:01:08.428 ++ SPDK_TEST_VFIOUSER=1
00:01:08.428 ++ SPDK_RUN_UBSAN=1
00:01:08.428 ++ NET_TYPE=phy
00:01:08.428 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4
00:01:08.428 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:01:08.428 ++ RUN_NIGHTLY=1
00:01:08.428 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:08.428 + [[ -n '' ]]
00:01:08.428 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:08.428 + for M in /var/spdk/build-*-manifest.txt
00:01:08.428 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:08.428 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:08.428 + for M in /var/spdk/build-*-manifest.txt
00:01:08.428 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:08.428 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:08.428 ++ uname
00:01:08.428 + [[ Linux == \L\i\n\u\x ]]
00:01:08.428 + sudo dmesg -T
00:01:08.684 + sudo dmesg --clear
00:01:08.684 + dmesg_pid=1717680
00:01:08.684 + [[ Fedora Linux == FreeBSD ]]
00:01:08.684 + sudo dmesg -Tw
00:01:08.684 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:08.684 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:08.685 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:08.685 + [[ -x /usr/src/fio-static/fio ]]
00:01:08.685 + export FIO_BIN=/usr/src/fio-static/fio
00:01:08.685 + FIO_BIN=/usr/src/fio-static/fio
00:01:08.685 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:08.685 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:08.685 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:08.685 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:08.685 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:08.685 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:08.685 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:08.685 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:08.685 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:08.685 Test configuration:
00:01:08.685 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:08.685 SPDK_TEST_NVMF=1
00:01:08.685 SPDK_TEST_NVME_CLI=1
00:01:08.685 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:08.685 SPDK_TEST_NVMF_NICS=e810
00:01:08.685 SPDK_TEST_VFIOUSER=1
00:01:08.685 SPDK_RUN_UBSAN=1
00:01:08.685 NET_TYPE=phy
00:01:08.685 SPDK_TEST_NATIVE_DPDK=v22.11.4
00:01:08.685 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:01:08.685 RUN_NIGHTLY=1
07:48:00 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
07:48:00 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
07:48:00 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
07:48:00 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
07:48:00 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
07:48:00 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
07:48:00 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
07:48:00 -- paths/export.sh@5 -- $ export PATH
07:48:00 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
07:48:00 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
07:48:00 -- common/autobuild_common.sh@444 -- $ date +%s
07:48:00 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1720849680.XXXXXX
07:48:00 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1720849680.d5Gm2G
07:48:00 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]]
07:48:00 -- common/autobuild_common.sh@450 -- $ '[' -n v22.11.4 ']'
07:48:00 -- common/autobuild_common.sh@451 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
07:48:00 -- common/autobuild_common.sh@451 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk'
07:48:00 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
07:48:00 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
07:48:00 -- common/autobuild_common.sh@460 -- $ get_config_params
07:48:00 -- common/autotest_common.sh@396 -- $ xtrace_disable
07:48:00 -- common/autotest_common.sh@10 -- $ set +x
07:48:00 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build'
07:48:00 -- common/autobuild_common.sh@462 -- $ start_monitor_resources
07:48:00 -- pm/common@17 -- $ local monitor
07:48:00 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
07:48:00 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
07:48:00 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
07:48:00 -- pm/common@21 -- $ date +%s
07:48:00 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
07:48:00 -- pm/common@21 -- $ date +%s
07:48:00 -- pm/common@25 -- $ sleep 1
07:48:00 -- pm/common@21 -- $ date +%s
07:48:00 -- pm/common@21 -- $ date +%s
07:48:00 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720849680
07:48:00 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720849680
07:48:00 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720849680
07:48:00 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720849680
00:01:08.685 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720849680_collect-vmstat.pm.log
00:01:08.685 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720849680_collect-cpu-load.pm.log
00:01:08.685 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720849680_collect-cpu-temp.pm.log
00:01:08.685 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720849680_collect-bmc-pm.bmc.pm.log
00:01:09.618 07:48:01 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT
00:01:09.618 07:48:01 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:09.618 07:48:01 -- spdk/autobuild.sh@12 -- $ umask 022
00:01:09.618 07:48:01 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:09.618 07:48:01 -- spdk/autobuild.sh@16 -- $ date -u
00:01:09.618 Sat Jul 13 05:48:01 AM UTC 2024
00:01:09.618 07:48:01 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:09.618 v24.09-pre-202-g719d03c6a
00:01:09.618 07:48:01 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:01:09.618 07:48:01 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:09.618 07:48:01 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:09.618 07:48:01 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']'
00:01:09.618 07:48:01 -- common/autotest_common.sh@1105 -- $ xtrace_disable
00:01:09.618 07:48:01 -- common/autotest_common.sh@10 -- $ set +x
00:01:09.618 ************************************
00:01:09.618 START TEST ubsan
00:01:09.618 ************************************
00:01:09.618 07:48:01 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan'
00:01:09.618 using ubsan
00:01:09.618
00:01:09.618 real 0m0.000s
00:01:09.618 user 0m0.000s
00:01:09.618 sys 0m0.000s
00:01:09.618 07:48:01 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable
00:01:09.618 07:48:01 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:09.618 ************************************
00:01:09.618 END TEST ubsan
00:01:09.618 ************************************
00:01:09.618 07:48:01 -- common/autotest_common.sh@1142 -- $ return 0
00:01:09.618 07:48:01 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']'
00:01:09.618 07:48:01 -- spdk/autobuild.sh@28 -- $ build_native_dpdk
00:01:09.618 07:48:01 -- common/autobuild_common.sh@436 -- $ run_test build_native_dpdk _build_native_dpdk
00:01:09.618 07:48:01 -- common/autotest_common.sh@1099 -- $ '[' 2 -le 1 ']'
00:01:09.618 07:48:01 -- common/autotest_common.sh@1105 -- $ xtrace_disable
00:01:09.618 07:48:01 -- common/autotest_common.sh@10 -- $ set +x
00:01:09.875 ************************************
00:01:09.875 START TEST build_native_dpdk
00:01:09.875 ************************************
00:01:09.875 07:48:01 build_native_dpdk -- common/autotest_common.sh@1123 -- $ _build_native_dpdk
07:48:01 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir
07:48:01 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir
07:48:01 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version
07:48:01 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler
07:48:01 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods
07:48:01 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk
07:48:01 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc
07:48:01 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc
07:48:01 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc
07:48:01 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]]
07:48:01 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]]
07:48:01 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion
07:48:01 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13
07:48:01 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13
07:48:01 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
07:48:01 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
07:48:01 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk
07:48:01 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk ]]
07:48:01 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
07:48:01 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk log --oneline -n 5
00:01:09.876 caf0f5d395 version: 22.11.4
00:01:09.876 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt"
00:01:09.876 dc9c799c7d vhost: fix missing spinlock unlock
00:01:09.876 4307659a90 net/mlx5: fix LACP redirection in Rx domain
00:01:09.876 6ef77f2a5e net/gve: fix RX buffer size alignment
07:48:01 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon'
07:48:01 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags=
07:48:01 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4
07:48:01 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]]
07:48:01 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]]
07:48:01 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror'
07:48:01 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]]
07:48:01 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]]
07:48:01 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow'
07:48:01 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base")
07:48:01 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n
07:48:01 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]]
07:48:01 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]]
07:48:01 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]]
07:48:01 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk
07:48:01 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s
07:48:01 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']'
07:48:01 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 22.11.4 21.11.0
07:48:01 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 22.11.4 '<' 21.11.0
07:48:01 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l
07:48:01 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l
07:48:01 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-:
07:48:01 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1
07:48:01 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-:
07:48:01 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2
07:48:01 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<'
07:48:01 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=3
07:48:01 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3
07:48:01 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v
07:48:01 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in
00:01:09.876 07:48:01 build_native_dpdk -- scripts/common.sh@342 -- $ : 1
07:48:01 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 ))
07:48:01 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
07:48:01 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 22
07:48:01 build_native_dpdk -- scripts/common.sh@350 -- $ local d=22
07:48:01 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 22 =~ ^[0-9]+$ ]]
07:48:01 build_native_dpdk -- scripts/common.sh@352 -- $ echo 22
07:48:01 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=22
07:48:01 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 21
07:48:01 build_native_dpdk -- scripts/common.sh@350 -- $ local d=21
07:48:01 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 21 =~ ^[0-9]+$ ]]
07:48:01 build_native_dpdk -- scripts/common.sh@352 -- $ echo 21
07:48:01 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=21
07:48:01 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] ))
07:48:01 build_native_dpdk -- scripts/common.sh@364 -- $ return 1
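
The xtrace above walks scripts/common.sh comparing DPDK 22.11.4 against 21.11.0: both strings are split on ".", "-" and ":" into arrays, then compared component by component until one side wins (here 22 > 21 in the first component, so the "lt" test returns 1, i.e. not older). A condensed sketch of that comparison logic, under the assumption that purely numeric components are enough for the versions involved (this is a simplified stand-in, not the full cmp_versions from scripts/common.sh):

    # ver_lt A B: succeed iff version A sorts strictly before version B.
    ver_lt() {
        local -a v1 v2
        IFS='.-:' read -ra v1 <<< "$1"
        IFS='.-:' read -ra v2 <<< "$2"
        for (( i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++ )); do
            local a=${v1[i]:-0} b=${v2[i]:-0}   # missing components count as 0
            (( a < b )) && return 0
            (( a > b )) && return 1
        done
        return 1   # equal versions are not "less than"
    }
    ver_lt 22.11.4 21.11.0 || echo "22.11.4 is not older than 21.11.0"
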
07:48:01 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1
00:01:09.876 patching file config/rte_config.h
00:01:09.876 Hunk #1 succeeded at 60 (offset 1 line).
07:48:01 build_native_dpdk -- common/autobuild_common.sh@177 -- $ dpdk_kmods=false
07:48:01 build_native_dpdk -- common/autobuild_common.sh@178 -- $ uname -s
07:48:01 build_native_dpdk -- common/autobuild_common.sh@178 -- $ '[' Linux = FreeBSD ']'
07:48:01 build_native_dpdk -- common/autobuild_common.sh@182 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base
07:48:01 build_native_dpdk -- common/autobuild_common.sh@182 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,
00:01:14.058 The Meson build system
00:01:14.058 Version: 1.3.1
00:01:14.058 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk
00:01:14.058 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp
00:01:14.058 Build type: native build
00:01:14.058 Program cat found: YES (/usr/bin/cat)
00:01:14.058 Project name: DPDK
00:01:14.058 Project version: 22.11.4
00:01:14.058 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:01:14.058 C linker for the host machine: gcc ld.bfd 2.39-16
00:01:14.058 Host machine cpu family: x86_64
00:01:14.058 Host machine cpu: x86_64
00:01:14.058 Message: ## Building in Developer Mode ##
00:01:14.059 Program pkg-config found: YES (/usr/bin/pkg-config)
00:01:14.059 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/check-symbols.sh)
00:01:14.059 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh)
00:01:14.059 Program objdump found: YES (/usr/bin/objdump)
00:01:14.059 Program python3 found: YES (/usr/bin/python3)
00:01:14.059 Program cat found: YES (/usr/bin/cat)
00:01:14.059 config/meson.build:83: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead.
00:01:14.059 Checking for size of "void *" : 8
00:01:14.059 Checking for size of "void *" : 8 (cached)
00:01:14.059 Library m found: YES
00:01:14.059 Library numa found: YES
00:01:14.059 Has header "numaif.h" : YES
00:01:14.059 Library fdt found: NO
00:01:14.059 Library execinfo found: NO
00:01:14.059 Has header "execinfo.h" : YES
00:01:14.059 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:01:14.059 Run-time dependency libarchive found: NO (tried pkgconfig)
00:01:14.059 Run-time dependency libbsd found: NO (tried pkgconfig)
00:01:14.059 Run-time dependency jansson found: NO (tried pkgconfig)
00:01:14.059 Run-time dependency openssl found: YES 3.0.9
00:01:14.059 Run-time dependency libpcap found: YES 1.10.4
00:01:14.059 Has header "pcap.h" with dependency libpcap: YES
00:01:14.059 Compiler for C supports arguments -Wcast-qual: YES
00:01:14.059 Compiler for C supports arguments -Wdeprecated: YES
00:01:14.059 Compiler for C supports arguments -Wformat: YES
00:01:14.059 Compiler for C supports arguments -Wformat-nonliteral: NO
00:01:14.059 Compiler for C supports arguments -Wformat-security: NO
00:01:14.059 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:14.059 Compiler for C supports arguments -Wmissing-prototypes: YES
00:01:14.059 Compiler for C supports arguments -Wnested-externs: YES
00:01:14.059 Compiler for C supports arguments -Wold-style-definition: YES
00:01:14.059 Compiler for C supports arguments -Wpointer-arith: YES
00:01:14.059 Compiler for C supports arguments -Wsign-compare: YES
00:01:14.059 Compiler for C supports arguments -Wstrict-prototypes: YES
00:01:14.059 Compiler for C supports arguments -Wundef: YES
00:01:14.059 Compiler for C supports arguments -Wwrite-strings: YES
00:01:14.059 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:01:14.059 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:01:14.059 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:14.059 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:01:14.059 Compiler for C supports arguments -mavx512f: YES
00:01:14.059 Checking if "AVX512 checking" compiles: YES
00:01:14.059 Fetching value of define "__SSE4_2__" : 1
00:01:14.059 Fetching value of define "__AES__" : 1
00:01:14.059 Fetching value of define "__AVX__" : 1
00:01:14.059 Fetching value of define "__AVX2__" : (undefined)
00:01:14.059 Fetching value of define "__AVX512BW__" : (undefined)
00:01:14.059 Fetching value of define "__AVX512CD__" : (undefined)
00:01:14.059 Fetching value of define "__AVX512DQ__" : (undefined)
00:01:14.059 Fetching value of define "__AVX512F__" : (undefined)
00:01:14.059 Fetching value of define "__AVX512VL__" : (undefined)
00:01:14.059 Fetching value of define "__PCLMUL__" : 1
00:01:14.059 Fetching value of define "__RDRND__" : 1
00:01:14.059 Fetching value of define "__RDSEED__" : (undefined)
00:01:14.059 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:01:14.059 Compiler for C supports arguments -Wno-format-truncation: YES
00:01:14.059 Message: lib/kvargs: Defining dependency "kvargs"
00:01:14.059 Message: lib/telemetry: Defining dependency "telemetry"
00:01:14.059 Checking for function "getentropy" : YES
00:01:14.059 Message: lib/eal: Defining dependency "eal"
00:01:14.059 Message: lib/ring: Defining dependency "ring"
00:01:14.059 Message: lib/rcu: Defining dependency "rcu"
00:01:14.059 Message: lib/mempool: Defining dependency "mempool"
00:01:14.059 Message: lib/mbuf: Defining dependency "mbuf"
00:01:14.059 Fetching value of define "__PCLMUL__" : 1 (cached)
00:01:14.059 Fetching value of define "__AVX512F__" : (undefined) (cached)
00:01:14.059 Compiler for C supports arguments -mpclmul: YES
00:01:14.059 Compiler for C supports arguments -maes: YES
00:01:14.059 Compiler for C supports arguments -mavx512f: YES (cached)
00:01:14.059 Compiler for C supports arguments -mavx512bw: YES
00:01:14.059 Compiler for C supports arguments -mavx512dq: YES
00:01:14.059 Compiler for C supports arguments -mavx512vl: YES
00:01:14.059 Compiler for C supports arguments -mvpclmulqdq: YES
00:01:14.059 Compiler for C supports arguments -mavx2: YES
00:01:14.059 Compiler for C supports arguments -mavx: YES
00:01:14.059 Message: lib/net: Defining dependency "net"
00:01:14.059 Message: lib/meter: Defining dependency "meter"
00:01:14.059 Message: lib/ethdev: Defining dependency "ethdev"
00:01:14.059 Message: lib/pci: Defining dependency "pci"
00:01:14.059 Message: lib/cmdline: Defining dependency "cmdline"
00:01:14.059 Message: lib/metrics: Defining dependency "metrics"
00:01:14.059 Message: lib/hash: Defining dependency "hash"
00:01:14.059 Message: lib/timer: Defining dependency "timer"
00:01:14.059 Fetching value of define "__AVX2__" : (undefined) (cached)
00:01:14.059 Compiler for C supports arguments -mavx2: YES (cached)
00:01:14.059 Fetching value of define "__AVX512F__" : (undefined) (cached)
00:01:14.059 Fetching value of define "__AVX512VL__" : (undefined) (cached)
00:01:14.059 Fetching value of define "__AVX512CD__" : (undefined) (cached)
00:01:14.059 Fetching value of define "__AVX512BW__" : (undefined) (cached)
00:01:14.059 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES
00:01:14.059 Message: lib/acl: Defining dependency "acl"
00:01:14.059 Message: lib/bbdev: Defining dependency "bbdev"
00:01:14.059 Message: lib/bitratestats: Defining dependency "bitratestats"
00:01:14.059 Run-time dependency libelf found: YES 0.190
00:01:14.059 Message: lib/bpf: Defining dependency "bpf"
00:01:14.059 Message: lib/cfgfile: Defining dependency "cfgfile"
00:01:14.059 Message: lib/compressdev: Defining dependency "compressdev"
00:01:14.059 Message: lib/cryptodev: Defining dependency "cryptodev"
00:01:14.059 Message: lib/distributor: Defining dependency "distributor"
00:01:14.059 Message: lib/efd: Defining dependency "efd"
00:01:14.059 Message: lib/eventdev: Defining dependency "eventdev"
00:01:14.059 Message: lib/gpudev: Defining dependency "gpudev"
00:01:14.059 Message: lib/gro: Defining dependency "gro"
00:01:14.059 Message: lib/gso: Defining dependency "gso"
00:01:14.059 Message: lib/ip_frag: Defining dependency "ip_frag"
00:01:14.059 Message: lib/jobstats: Defining dependency "jobstats"
00:01:14.059 Message: lib/latencystats: Defining dependency "latencystats"
00:01:14.059 Message: lib/lpm: Defining dependency "lpm"
00:01:14.059 Fetching value of define "__AVX512F__" : (undefined) (cached)
00:01:14.059 Fetching value of define "__AVX512DQ__" : (undefined) (cached)
00:01:14.059 Fetching value of define "__AVX512IFMA__" : (undefined)
00:01:14.059 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES
00:01:14.059 Message: lib/member: Defining dependency "member"
00:01:14.059 Message: lib/pcapng: Defining dependency "pcapng"
00:01:14.059 Compiler for C supports arguments -Wno-cast-qual: YES
00:01:14.059 Message: lib/power: Defining dependency "power"
00:01:14.059 Message: lib/rawdev: Defining dependency "rawdev"
00:01:14.059 Message: lib/regexdev: Defining dependency "regexdev"
00:01:14.059 Message: lib/dmadev: Defining dependency "dmadev"
00:01:14.059 Message: lib/rib: Defining dependency "rib"
00:01:14.059 Message: lib/reorder: Defining dependency "reorder"
00:01:14.059 Message: lib/sched: Defining dependency "sched"
00:01:14.059 Message: lib/security: Defining dependency "security"
00:01:14.059 Message: lib/stack: Defining dependency "stack"
00:01:14.059 Has header "linux/userfaultfd.h" : YES
00:01:14.059 Message: lib/vhost: Defining dependency "vhost"
00:01:14.059 Message: lib/ipsec: Defining dependency "ipsec"
00:01:14.059 Fetching value of define "__AVX512F__" : (undefined) (cached)
00:01:14.059 Fetching value of define "__AVX512DQ__" : (undefined) (cached)
00:01:14.059 Compiler for C supports arguments -mavx512f -mavx512dq: YES
00:01:14.059 Compiler for C supports arguments -mavx512bw: YES (cached)
00:01:14.059 Message: lib/fib: Defining dependency "fib"
00:01:14.059 Message: lib/port: Defining dependency "port"
00:01:14.059 Message: lib/pdump: Defining dependency "pdump"
00:01:14.059 Message: lib/table: Defining dependency "table"
00:01:14.059 Message: lib/pipeline: Defining dependency "pipeline"
00:01:14.059 Message: lib/graph: Defining dependency "graph"
00:01:14.059 Message: lib/node: Defining dependency "node"
00:01:14.059 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:01:14.059 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:01:14.059 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:01:14.059 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:01:14.059 Compiler for C supports arguments -Wno-sign-compare: YES
00:01:14.995 Compiler for C supports arguments -Wno-unused-value: YES
00:01:14.995 Compiler for C supports arguments -Wno-format: YES
00:01:14.995 Compiler for C supports arguments -Wno-format-security: YES
00:01:14.995 Compiler for C supports arguments -Wno-format-nonliteral: YES
00:01:14.995 Compiler for C supports arguments -Wno-strict-aliasing: YES
00:01:14.995 Compiler for C supports arguments -Wno-unused-but-set-variable: YES
00:01:14.995 Compiler for C supports arguments -Wno-unused-parameter: YES
00:01:14.995 Fetching value of define "__AVX2__" : (undefined) (cached)
00:01:14.995 Compiler for C supports arguments -mavx2: YES (cached)
00:01:14.995 Fetching value of define "__AVX512F__" : (undefined) (cached)
00:01:14.995 Compiler for C supports arguments -mavx512f: YES (cached)
00:01:14.995 Compiler for C supports arguments -mavx512bw: YES (cached)
00:01:14.995 Compiler for C supports arguments -march=skylake-avx512: YES
00:01:14.995 Message: drivers/net/i40e: Defining dependency "net_i40e"
00:01:14.995 Program doxygen found: YES (/usr/bin/doxygen)
00:01:14.995 Configuring doxy-api.conf using configuration
00:01:14.995 Program sphinx-build found: NO
00:01:14.995 Configuring rte_build_config.h using configuration
00:01:14.995 Message:
00:01:14.995 =================
00:01:14.995 Applications Enabled
00:01:14.995 =================
00:01:14.995
00:01:14.995 apps:
00:01:14.995 dumpcap, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf,
00:01:14.995 test-eventdev, test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad,
00:01:14.995 test-security-perf,
00:01:14.995
00:01:14.995 Message:
00:01:14.995 =================
00:01:14.995 Libraries Enabled
00:01:14.995 =================
00:01:14.995
00:01:14.995 libs:
00:01:14.995 kvargs, telemetry, eal, ring, rcu, mempool, mbuf, net,
00:01:14.995 meter, ethdev, pci, cmdline, metrics, hash, timer, acl,
00:01:14.995 bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, efd,
00:01:14.995 eventdev, gpudev, gro, gso, ip_frag, jobstats, latencystats, lpm,
00:01:14.995 member, pcapng, power, rawdev, regexdev, dmadev, rib, reorder,
00:01:14.995 sched, security, stack, vhost, ipsec, fib, port, pdump,
00:01:14.995 table, pipeline, graph, node,
00:01:14.995
00:01:14.995 Message:
00:01:14.995 ===============
00:01:14.995 Drivers Enabled
00:01:14.995 ===============
00:01:14.995
00:01:14.995 common:
00:01:14.995
00:01:14.995 bus:
00:01:14.995 pci, vdev,
00:01:14.995 mempool:
00:01:14.995 ring,
00:01:14.995 dma:
00:01:14.995
00:01:14.995 net:
00:01:14.995 i40e,
00:01:14.995 raw:
00:01:14.995
00:01:14.995 crypto:
00:01:14.995
00:01:14.995 compress:
00:01:14.995
00:01:14.995 regex:
00:01:14.995
00:01:14.995 vdpa:
00:01:14.995
00:01:14.995 event:
00:01:14.995
00:01:14.995 baseband:
00:01:14.995
00:01:14.995 gpu:
00:01:14.995
00:01:14.995
00:01:14.995 Message:
00:01:14.995 =================
00:01:14.995 Content Skipped
00:01:14.995 =================
00:01:14.995
00:01:14.995 apps:
00:01:14.995
00:01:14.995 libs:
00:01:14.995 kni: explicitly disabled via build config (deprecated lib)
00:01:14.995 flow_classify: explicitly disabled via build config (deprecated lib)
00:01:14.995
00:01:14.995 drivers:
00:01:14.995 common/cpt: not in enabled drivers build config
00:01:14.995 common/dpaax: not in enabled drivers build config
00:01:14.995 common/iavf: not in enabled drivers build config
00:01:14.995 common/idpf: not in enabled drivers build config
00:01:14.995 common/mvep: not in enabled drivers build config
00:01:14.995 common/octeontx: not in enabled drivers build config
00:01:14.995 bus/auxiliary: not in enabled drivers build config
00:01:14.995 bus/dpaa: not in enabled drivers build config
00:01:14.995 bus/fslmc: not in enabled drivers build config
00:01:14.995 bus/ifpga: not in enabled drivers build config
00:01:14.995 bus/vmbus: not in enabled drivers build config
00:01:14.995 common/cnxk: not in enabled drivers build config
00:01:14.995 common/mlx5: not in enabled drivers build config
00:01:14.995 common/qat: not in enabled drivers build config
00:01:14.995 common/sfc_efx: not in enabled drivers build config
00:01:14.995 mempool/bucket: not in enabled drivers build config
00:01:14.995 mempool/cnxk: not in enabled drivers build config
00:01:14.995 mempool/dpaa: not in enabled drivers build config
00:01:14.995 mempool/dpaa2: not in enabled drivers build config
00:01:14.995 mempool/octeontx: not in enabled drivers build config
00:01:14.995 mempool/stack: not in enabled drivers build config
00:01:14.995 dma/cnxk: not in enabled drivers build config
00:01:14.995 dma/dpaa: not in enabled drivers build config
00:01:14.995 dma/dpaa2: not in enabled drivers build config
00:01:14.995 dma/hisilicon: not in enabled drivers build config
00:01:14.995 dma/idxd: not in enabled drivers build config
00:01:14.995 dma/ioat: not in enabled drivers build config
00:01:14.995 dma/skeleton: not in enabled drivers build config
00:01:14.996 net/af_packet: not in enabled drivers build config
00:01:14.996 net/af_xdp: not in enabled drivers build config
00:01:14.996 net/ark: not in enabled drivers build config
00:01:14.996 net/atlantic: not in enabled drivers build config
00:01:14.996 net/avp: not in enabled drivers build config
00:01:14.996 net/axgbe: not in enabled drivers build config
00:01:14.996 net/bnx2x: not in enabled drivers build config
00:01:14.996 net/bnxt: not in enabled drivers build config
00:01:14.996 net/bonding: not in enabled drivers build config
00:01:14.996 net/cnxk: not in enabled drivers build config
00:01:14.996 net/cxgbe: not in enabled drivers build config
00:01:14.996 net/dpaa: not in enabled drivers build config
00:01:14.996 net/dpaa2: not in enabled drivers build config
00:01:14.996 net/e1000: not in enabled drivers build config
00:01:14.996 net/ena: not in enabled drivers build config
00:01:14.996 net/enetc: not in enabled drivers build config
00:01:14.996 net/enetfec: not in enabled drivers build config
00:01:14.996 net/enic: not in enabled drivers build config
00:01:14.996 net/failsafe: not in enabled drivers build config
00:01:14.996 net/fm10k: not in enabled drivers build config
00:01:14.996 net/gve: not in enabled drivers build config
00:01:14.996 net/hinic: not in enabled drivers build config
00:01:14.996 net/hns3: not in enabled drivers build config
00:01:14.996 net/iavf: not in enabled drivers build config
00:01:14.996 net/ice: not in enabled drivers build config
00:01:14.996 net/idpf: not in enabled drivers build config
00:01:14.996 net/igc: not in enabled drivers build config
00:01:14.996 net/ionic: not in enabled drivers build config
00:01:14.996 net/ipn3ke: not in enabled drivers build config
00:01:14.996 net/ixgbe: not in enabled drivers build config
00:01:14.996 net/kni: not in enabled drivers build config
00:01:14.996 net/liquidio: not in enabled drivers build config
00:01:14.996 net/mana: not in enabled drivers build config
00:01:14.996 net/memif: not in enabled drivers build config
00:01:14.996 net/mlx4: not in enabled drivers build config
00:01:14.996 net/mlx5: not in enabled drivers build config
00:01:14.996 net/mvneta: not in enabled drivers build config
00:01:14.996 net/mvpp2: not in enabled drivers build config
00:01:14.996 net/netvsc: not in enabled drivers build config
00:01:14.996 net/nfb: not in enabled drivers build config
00:01:14.996 net/nfp: not in enabled drivers build config
00:01:14.996 net/ngbe: not in enabled drivers build config
00:01:14.996 net/null: not in enabled drivers build config
00:01:14.996 net/octeontx: not in enabled drivers build config
00:01:14.996 net/octeon_ep: not in enabled drivers build config
00:01:14.996 net/pcap: not in enabled drivers build config
00:01:14.996 net/pfe: not in enabled drivers build config
00:01:14.996 net/qede: not in enabled drivers build config
00:01:14.996 net/ring: not in enabled drivers build config
00:01:14.996 net/sfc: not in enabled drivers build config
00:01:14.996 net/softnic: not in enabled drivers build config
00:01:14.996 net/tap: not in enabled drivers build config
00:01:14.996 net/thunderx: not in enabled drivers build config
00:01:14.996 net/txgbe: not in enabled drivers build config
00:01:14.996 net/vdev_netvsc: not in enabled drivers build config
00:01:14.996 net/vhost: not in enabled drivers build config
00:01:14.996 net/virtio: not in enabled drivers build config
00:01:14.996 net/vmxnet3: not in enabled drivers build config
00:01:14.996 raw/cnxk_bphy: not in enabled drivers build config
00:01:14.996 raw/cnxk_gpio: not in enabled drivers build config
00:01:14.996 raw/dpaa2_cmdif: not in enabled drivers build config
00:01:14.996 raw/ifpga: not in enabled drivers build config
00:01:14.996 raw/ntb: not in enabled drivers build config
00:01:14.996 raw/skeleton: not in enabled drivers build config
00:01:14.996 crypto/armv8: not in enabled drivers build config
00:01:14.996 crypto/bcmfs: not in enabled drivers build config
00:01:14.996 crypto/caam_jr: not in enabled drivers build config
00:01:14.996 crypto/ccp: not in enabled drivers build config
00:01:14.996 crypto/cnxk: not in enabled drivers build config
00:01:14.996 crypto/dpaa_sec: not in enabled drivers build config
00:01:14.996 crypto/dpaa2_sec: not in enabled drivers build config
00:01:14.996 crypto/ipsec_mb: not in enabled drivers build config
00:01:14.996 crypto/mlx5: not in enabled drivers build config
00:01:14.996 crypto/mvsam: not in enabled drivers build config
00:01:14.996 crypto/nitrox: not in enabled drivers build config
00:01:14.996 crypto/null: not in enabled drivers build config
00:01:14.996 crypto/octeontx: not in enabled drivers build config
00:01:14.996 crypto/openssl: not in enabled drivers build config
00:01:14.996 crypto/scheduler: not in enabled drivers build config
00:01:14.996 crypto/uadk: not in enabled drivers build config
00:01:14.996 crypto/virtio: not in enabled drivers build config
00:01:14.996 compress/isal: not in enabled drivers build config
00:01:14.996 compress/mlx5: not in enabled drivers build config
00:01:14.996 compress/octeontx: not in enabled drivers build config
00:01:14.996 compress/zlib: not in enabled drivers build config
00:01:14.996 regex/mlx5: not in enabled drivers build config
00:01:14.996 regex/cn9k: not in enabled drivers build config
00:01:14.996 vdpa/ifc: not in enabled drivers build config
00:01:14.996 vdpa/mlx5: not in enabled drivers build config
00:01:14.996 vdpa/sfc: not in enabled drivers build config
00:01:14.996 event/cnxk: not in enabled drivers build config
00:01:14.996 event/dlb2: not in enabled drivers build config
00:01:14.996 event/dpaa: not in enabled drivers build config
00:01:14.996 event/dpaa2: not in enabled drivers build config
00:01:14.996 event/dsw: not in enabled drivers build config
00:01:14.996 event/opdl: not in enabled drivers build config
00:01:14.996 event/skeleton: not in enabled drivers build config
00:01:14.996 event/sw: not in enabled drivers build config
00:01:14.996 event/octeontx: not in enabled drivers build config
00:01:14.996 baseband/acc: not in enabled drivers build config
00:01:14.996 baseband/fpga_5gnr_fec: not in enabled drivers build config
00:01:14.996 baseband/fpga_lte_fec: not in enabled drivers build config
00:01:14.996 baseband/la12xx: not in enabled drivers build config
00:01:14.996 baseband/null: not in enabled drivers build config
00:01:14.996 baseband/turbo_sw: not in enabled drivers build config
00:01:14.996 gpu/cuda: not in enabled drivers build config
00:01:14.996
00:01:14.996
00:01:14.996 Build targets in project: 316
00:01:14.996
00:01:14.996 DPDK 22.11.4
00:01:14.996
00:01:14.996 User defined options
00:01:14.996 libdir : lib
00:01:14.996 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:01:14.996 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow
00:01:14.996 c_link_args :
00:01:14.996 enable_docs : false
00:01:14.996 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,
00:01:14.996 enable_kmods : false
00:01:14.996 machine : native
00:01:14.996 tests : false
00:01:14.996
00:01:14.996 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:14.996 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated.
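
Meson's closing warning is about the invocation form itself: bare `meson [options]` is a deprecated way to configure a build directory. A sketch of the equivalent explicit-subcommand form of the same configure step (options abbreviated from the log; swapping `-Dmachine=native` for `-Dcpu_instruction_set=native` follows the earlier "machine is deprecated" warning in this same output):

    meson setup build-tmp \
        --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --libdir lib \
        -Denable_docs=false -Denable_kmods=false -Dtests=false \
        '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
        -Dcpu_instruction_set=native \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base
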
00:01:14.996 07:48:06 build_native_dpdk -- common/autobuild_common.sh@186 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48
00:01:15.257 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp'
00:01:15.257 [1/745] Generating lib/rte_telemetry_mingw with a custom command
00:01:15.257 [2/745] Generating lib/rte_kvargs_def with a custom command
00:01:15.257 [3/745] Generating lib/rte_kvargs_mingw with a custom command
00:01:15.257 [4/745] Generating lib/rte_telemetry_def with a custom command
00:01:15.257 [5/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:01:15.257 [6/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:01:15.257 [7/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:01:15.257 [8/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:01:15.257 [9/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:01:15.257 [10/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:01:15.257 [11/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:01:15.257 [12/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:01:15.257 [13/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:01:15.257 [14/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:01:15.257 [15/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:01:15.257 [16/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:01:15.257 [17/745] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:01:15.257 [18/745] Linking static target lib/librte_kvargs.a
00:01:15.522 [19/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:01:15.522 [20/745] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:01:15.522 [21/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:01:15.522 [22/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:01:15.522 [23/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:01:15.522 [24/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:01:15.522 [25/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:01:15.522 [26/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:01:15.522 [27/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:01:15.522 [28/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:01:15.522 [29/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:01:15.522 [30/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:01:15.522 [31/745] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:01:15.522 [32/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:01:15.522 [33/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:01:15.522 [34/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o
00:01:15.522 [35/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:01:15.522 [36/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:01:15.522 [37/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:01:15.522 [38/745] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:01:15.522 [39/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:01:15.522 [40/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:01:15.522 [41/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:01:15.522 [42/745] Generating lib/rte_eal_def with a custom command
00:01:15.522 [43/745] Generating lib/rte_eal_mingw with a custom command
00:01:15.522 [44/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:01:15.522 [45/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:01:15.522 [46/745] Generating lib/rte_ring_mingw with a custom command
00:01:15.522 [47/745] Generating lib/rte_ring_def with a custom command
00:01:15.522 [48/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:01:15.522 [49/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:01:15.522 [50/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:01:15.522 [51/745] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:01:15.522 [52/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:01:15.522 [53/745] Generating lib/rte_rcu_mingw with a custom command
00:01:15.522 [54/745] Generating lib/rte_rcu_def with a custom command
00:01:15.522 [55/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:01:15.522 [56/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:01:15.522 [57/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:01:15.522 [58/745] Generating lib/rte_mempool_def with a custom command
00:01:15.522 [59/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:01:15.522 [60/745] Generating lib/rte_mempool_mingw with a custom command
00:01:15.522 [61/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:01:15.522 [62/745] Generating lib/rte_mbuf_def with a custom command
00:01:15.522 [63/745] Generating lib/rte_mbuf_mingw with a custom command
00:01:15.522 [64/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o
00:01:15.522 [65/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:01:15.522 [66/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:01:15.522 [67/745] Generating lib/rte_net_def with a custom command
00:01:15.784 [68/745] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:01:15.784 [69/745] Generating lib/rte_meter_mingw with a custom command
00:01:15.784 [70/745] Generating lib/rte_meter_def with a custom command
00:01:15.784 [71/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:01:15.784 [72/745] Generating lib/rte_net_mingw with a custom command
00:01:15.784 [73/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:01:15.784 [74/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:01:15.784 [75/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:01:15.784 [76/745] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:01:15.784 [77/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:01:15.784 [78/745] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:01:15.784 [79/745] Generating lib/rte_ethdev_def with a custom command
00:01:15.784 [80/745] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:01:15.784 [81/745] Linking static target lib/librte_ring.a
00:01:15.784 [82/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:01:15.784 [83/745] Linking target lib/librte_kvargs.so.23.0
00:01:15.784 [84/745] Generating lib/rte_ethdev_mingw with a custom command
00:01:15.784 [85/745] Generating lib/rte_pci_def with a custom command
00:01:15.784 [86/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:01:16.050 [87/745] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:01:16.050 [88/745] Linking static target lib/librte_meter.a
00:01:16.050 [89/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:01:16.050 [90/745] Generating lib/rte_pci_mingw with a custom command
00:01:16.050 [91/745] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:01:16.050 [92/745] Linking static target lib/librte_pci.a
00:01:16.050 [93/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:01:16.050 [94/745] Generating symbol file lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols
00:01:16.050 [95/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:01:16.050 [96/745] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:01:16.050 [97/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:01:16.050 [98/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:01:16.314 [99/745] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:01:16.314 [100/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:01:16.314 [101/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:01:16.314 [102/745] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:01:16.314 [103/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:01:16.314 [104/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:01:16.314 [105/745] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:01:16.314 [106/745] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:01:16.314 [107/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:01:16.314 [108/745] Generating lib/rte_cmdline_def with a custom command
00:01:16.314 [109/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:01:16.314 [110/745] Linking static target lib/librte_telemetry.a
00:01:16.314 [111/745] Generating lib/rte_cmdline_mingw with a custom command
00:01:16.314 [112/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:01:16.314 [113/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:01:16.314 [114/745] Generating lib/rte_metrics_def with a custom command
00:01:16.314 [115/745] Generating lib/rte_metrics_mingw with a custom command
00:01:16.314 [116/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:01:16.314 [117/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:01:16.314 [118/745] Generating lib/rte_hash_def with a custom command
00:01:16.314 [119/745] Generating lib/rte_hash_mingw with a custom command
00:01:16.314 [120/745] Generating lib/rte_timer_def with a custom command
00:01:16.579 [121/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:01:16.579 [122/745] Generating 
lib/rte_timer_mingw with a custom command 00:01:16.579 [123/745] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:01:16.579 [124/745] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:16.579 [125/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:16.840 [126/745] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:16.840 [127/745] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:16.840 [128/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:16.840 [129/745] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:16.840 [130/745] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:16.840 [131/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:16.840 [132/745] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:16.840 [133/745] Generating lib/rte_acl_def with a custom command 00:01:16.840 [134/745] Generating lib/rte_acl_mingw with a custom command 00:01:16.840 [135/745] Generating lib/rte_bbdev_def with a custom command 00:01:16.840 [136/745] Generating lib/rte_bbdev_mingw with a custom command 00:01:16.840 [137/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:16.840 [138/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:16.840 [139/745] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:16.840 [140/745] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:16.840 [141/745] Generating lib/rte_bitratestats_def with a custom command 00:01:16.840 [142/745] Generating lib/rte_bitratestats_mingw with a custom command 00:01:16.840 [143/745] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:16.840 [144/745] Linking target lib/librte_telemetry.so.23.0 00:01:16.840 [145/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:17.102 [146/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:17.102 [147/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:17.102 [148/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:17.102 [149/745] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:17.102 [150/745] Generating lib/rte_bpf_mingw with a custom command 00:01:17.102 [151/745] Generating lib/rte_bpf_def with a custom command 00:01:17.102 [152/745] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:17.102 [153/745] Generating lib/rte_cfgfile_mingw with a custom command 00:01:17.102 [154/745] Generating lib/rte_cfgfile_def with a custom command 00:01:17.102 [155/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:17.102 [156/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:17.102 [157/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:17.102 [158/745] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols 00:01:17.102 [159/745] Generating lib/rte_compressdev_def with a custom command 00:01:17.102 [160/745] Generating lib/rte_compressdev_mingw with a custom command 00:01:17.102 [161/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:17.361 [162/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:17.361 [163/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 
00:01:17.361 [164/745] Generating lib/rte_cryptodev_mingw with a custom command 00:01:17.361 [165/745] Generating lib/rte_cryptodev_def with a custom command 00:01:17.361 [166/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:17.361 [167/745] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:17.361 [168/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:17.361 [169/745] Linking static target lib/librte_timer.a 00:01:17.361 [170/745] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:17.361 [171/745] Linking static target lib/librte_cmdline.a 00:01:17.361 [172/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:17.361 [173/745] Linking static target lib/librte_rcu.a 00:01:17.361 [174/745] Generating lib/rte_distributor_def with a custom command 00:01:17.361 [175/745] Generating lib/rte_distributor_mingw with a custom command 00:01:17.361 [176/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:17.361 [177/745] Generating lib/rte_efd_def with a custom command 00:01:17.361 [178/745] Generating lib/rte_efd_mingw with a custom command 00:01:17.361 [179/745] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:17.361 [180/745] Linking static target lib/librte_net.a 00:01:17.624 [181/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:17.624 [182/745] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:01:17.624 [183/745] Linking static target lib/librte_metrics.a 00:01:17.624 [184/745] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:01:17.624 [185/745] Linking static target lib/librte_cfgfile.a 00:01:17.624 [186/745] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:17.624 [187/745] Linking static target lib/librte_mempool.a 00:01:17.624 [188/745] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:01:17.911 [189/745] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:17.911 [190/745] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:17.911 [191/745] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:17.911 [192/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:17.911 [193/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:17.911 [194/745] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:01:17.911 [195/745] Linking static target lib/librte_eal.a 00:01:17.911 [196/745] Generating lib/rte_eventdev_def with a custom command 00:01:17.911 [197/745] Generating lib/rte_eventdev_mingw with a custom command 00:01:18.175 [198/745] Generating lib/rte_gpudev_def with a custom command 00:01:18.175 [199/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:01:18.175 [200/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:01:18.175 [201/745] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:01:18.175 [202/745] Generating lib/rte_gpudev_mingw with a custom command 00:01:18.175 [203/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:01:18.175 [204/745] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:01:18.175 [205/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:01:18.175 [206/745] Linking static target lib/librte_bitratestats.a 00:01:18.175 [207/745] Compiling C object 
lib/librte_acl.a.p/acl_acl_gen.c.o 00:01:18.175 [208/745] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:01:18.175 [209/745] Generating lib/rte_gro_def with a custom command 00:01:18.175 [210/745] Generating lib/rte_gro_mingw with a custom command 00:01:18.175 [211/745] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:01:18.437 [212/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:01:18.437 [213/745] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:18.437 [214/745] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:18.437 [215/745] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:01:18.437 [216/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:01:18.437 [217/745] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:18.437 [218/745] Generating lib/rte_gso_def with a custom command 00:01:18.699 [219/745] Generating lib/rte_gso_mingw with a custom command 00:01:18.699 [220/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:01:18.699 [221/745] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:01:18.699 [222/745] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:18.699 [223/745] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:01:18.699 [224/745] Linking static target lib/librte_bbdev.a 00:01:18.699 [225/745] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:18.699 [226/745] Generating lib/rte_ip_frag_def with a custom command 00:01:18.699 [227/745] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:01:18.699 [228/745] Generating lib/rte_ip_frag_mingw with a custom command 00:01:18.699 [229/745] Generating lib/rte_jobstats_def with a custom command 00:01:18.699 [230/745] Generating lib/rte_jobstats_mingw with a custom command 00:01:18.699 [231/745] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:18.961 [232/745] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:01:18.961 [233/745] Generating lib/rte_latencystats_def with a custom command 00:01:18.961 [234/745] Generating lib/rte_latencystats_mingw with a custom command 00:01:18.961 [235/745] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:18.961 [236/745] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:18.961 [237/745] Linking static target lib/librte_compressdev.a 00:01:18.961 [238/745] Generating lib/rte_lpm_def with a custom command 00:01:18.961 [239/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:01:18.961 [240/745] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:01:18.961 [241/745] Generating lib/rte_lpm_mingw with a custom command 00:01:18.961 [242/745] Linking static target lib/librte_jobstats.a 00:01:19.227 [243/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:01:19.227 [244/745] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:19.227 [245/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:01:19.227 [246/745] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:01:19.227 [247/745] Linking static target lib/librte_distributor.a 00:01:19.227 [248/745] 
Generating lib/rte_member_def with a custom command 00:01:19.487 [249/745] Generating lib/rte_member_mingw with a custom command 00:01:19.487 [250/745] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:01:19.487 [251/745] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:01:19.487 [252/745] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:19.487 [253/745] Generating lib/rte_pcapng_def with a custom command 00:01:19.487 [254/745] Generating lib/rte_pcapng_mingw with a custom command 00:01:19.487 [255/745] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:19.487 [256/745] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:01:19.487 [257/745] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:01:19.751 [258/745] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:01:19.751 [259/745] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:01:19.751 [260/745] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:19.751 [261/745] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:19.751 [262/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:01:19.751 [263/745] Linking static target lib/librte_bpf.a 00:01:19.751 [264/745] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:01:19.751 [265/745] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:01:19.751 [266/745] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:01:19.751 [267/745] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:19.751 [268/745] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:01:19.751 [269/745] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:01:19.751 [270/745] Linking static target lib/librte_gpudev.a 00:01:19.751 [271/745] Generating lib/rte_power_def with a custom command 00:01:19.751 [272/745] Generating lib/rte_power_mingw with a custom command 00:01:19.751 [273/745] Generating lib/rte_rawdev_def with a custom command 00:01:19.751 [274/745] Generating lib/rte_rawdev_mingw with a custom command 00:01:19.751 [275/745] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:01:19.751 [276/745] Generating lib/rte_regexdev_def with a custom command 00:01:19.751 [277/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:01:19.751 [278/745] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:19.751 [279/745] Linking static target lib/librte_gro.a 00:01:20.010 [280/745] Generating lib/rte_regexdev_mingw with a custom command 00:01:20.010 [281/745] Generating lib/rte_dmadev_mingw with a custom command 00:01:20.010 [282/745] Generating lib/rte_dmadev_def with a custom command 00:01:20.010 [283/745] Generating lib/rte_rib_def with a custom command 00:01:20.010 [284/745] Generating lib/rte_rib_mingw with a custom command 00:01:20.010 [285/745] Generating lib/rte_reorder_def with a custom command 00:01:20.010 [286/745] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o 00:01:20.010 [287/745] Generating lib/rte_reorder_mingw with a custom command 00:01:20.010 [288/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:01:20.273 [289/745] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:01:20.273 [290/745] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:20.273 
[291/745] Generating lib/rte_sched_def with a custom command 00:01:20.273 [292/745] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:01:20.273 [293/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:01:20.273 [294/745] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:01:20.273 [295/745] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:01:20.273 [296/745] Generating lib/rte_sched_mingw with a custom command 00:01:20.273 [297/745] Linking static target lib/librte_latencystats.a 00:01:20.273 [298/745] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:01:20.273 [299/745] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:01:20.273 [300/745] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:20.273 [301/745] Generating lib/rte_security_def with a custom command 00:01:20.273 [302/745] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:01:20.273 [303/745] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:01:20.273 [304/745] Generating lib/rte_security_mingw with a custom command 00:01:20.273 [305/745] Linking static target lib/member/libsketch_avx512_tmp.a 00:01:20.273 [306/745] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:01:20.273 [307/745] Generating lib/rte_stack_def with a custom command 00:01:20.536 [308/745] Generating lib/rte_stack_mingw with a custom command 00:01:20.536 [309/745] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:01:20.536 [310/745] Linking static target lib/librte_rawdev.a 00:01:20.536 [311/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:01:20.536 [312/745] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:01:20.536 [313/745] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:01:20.536 [314/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:01:20.536 [315/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:01:20.536 [316/745] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:01:20.536 [317/745] Linking static target lib/librte_stack.a 00:01:20.536 [318/745] Generating lib/rte_vhost_def with a custom command 00:01:20.536 [319/745] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:01:20.536 [320/745] Generating lib/rte_vhost_mingw with a custom command 00:01:20.536 [321/745] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:20.536 [322/745] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:20.536 [323/745] Linking static target lib/librte_dmadev.a 00:01:20.536 [324/745] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:20.795 [325/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:01:20.795 [326/745] Linking static target lib/librte_ip_frag.a 00:01:20.795 [327/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:01:20.795 [328/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:01:20.795 [329/745] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:01:20.795 [330/745] Generating lib/rte_ipsec_def with a custom command 00:01:20.795 [331/745] Generating lib/rte_ipsec_mingw with a custom command 00:01:20.795 [332/745] Generating lib/stack.sym_chk with a custom command (wrapped by meson to 
capture output) 00:01:21.058 [333/745] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o 00:01:21.058 [334/745] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:21.058 [335/745] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:01:21.316 [336/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:21.316 [337/745] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:21.316 [338/745] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:01:21.316 [339/745] Generating lib/rte_fib_def with a custom command 00:01:21.316 [340/745] Linking static target lib/librte_gso.a 00:01:21.316 [341/745] Generating lib/rte_fib_mingw with a custom command 00:01:21.316 [342/745] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:01:21.316 [343/745] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:21.316 [344/745] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:01:21.316 [345/745] Linking static target lib/librte_regexdev.a 00:01:21.578 [346/745] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:21.578 [347/745] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:01:21.578 [348/745] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:01:21.578 [349/745] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:01:21.578 [350/745] Linking static target lib/librte_efd.a 00:01:21.578 [351/745] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:01:21.578 [352/745] Linking static target lib/librte_pcapng.a 00:01:21.835 [353/745] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:01:21.835 [354/745] Linking static target lib/librte_lpm.a 00:01:21.835 [355/745] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:01:21.835 [356/745] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:21.835 [357/745] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:01:21.835 [358/745] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:21.835 [359/745] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:21.835 [360/745] Linking static target lib/librte_reorder.a 00:01:22.099 [361/745] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:22.099 [362/745] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:01:22.099 [363/745] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:01:22.099 [364/745] Generating lib/rte_port_def with a custom command 00:01:22.099 [365/745] Generating lib/rte_port_mingw with a custom command 00:01:22.099 [366/745] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:22.099 [367/745] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:01:22.099 [368/745] Generating lib/rte_pdump_def with a custom command 00:01:22.099 [369/745] Linking static target lib/acl/libavx2_tmp.a 00:01:22.099 [370/745] Generating lib/rte_pdump_mingw with a custom command 00:01:22.099 [371/745] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:01:22.099 [372/745] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:01:22.099 [373/745] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:01:22.099 [374/745] Linking static target lib/fib/libtrie_avx512_tmp.a 00:01:22.099 
[375/745] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:01:22.099 [376/745] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:01:22.099 [377/745] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:22.360 [378/745] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:01:22.361 [379/745] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:22.361 [380/745] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:22.361 [381/745] Linking static target lib/librte_security.a 00:01:22.361 [382/745] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:01:22.361 [383/745] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:22.361 [384/745] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:01:22.361 [385/745] Linking static target lib/librte_power.a 00:01:22.361 [386/745] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:22.361 [387/745] Linking static target lib/librte_hash.a 00:01:22.361 [388/745] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:01:22.624 [389/745] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:01:22.624 [390/745] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:22.624 [391/745] Linking static target lib/librte_rib.a 00:01:22.624 [392/745] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:01:22.624 [393/745] Linking static target lib/acl/libavx512_tmp.a 00:01:22.624 [394/745] Linking static target lib/librte_acl.a 00:01:22.624 [395/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:01:22.890 [396/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:01:22.890 [397/745] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:01:22.890 [398/745] Generating lib/rte_table_def with a custom command 00:01:22.890 [399/745] Generating lib/rte_table_mingw with a custom command 00:01:23.154 [400/745] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:23.154 [401/745] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:01:23.154 [402/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:01:23.420 [403/745] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:23.420 [404/745] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:01:23.420 [405/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:23.420 [406/745] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:01:23.420 [407/745] Linking static target lib/librte_ethdev.a 00:01:23.420 [408/745] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:01:23.420 [409/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:23.420 [410/745] Generating lib/rte_pipeline_def with a custom command 00:01:23.420 [411/745] Linking static target lib/librte_mbuf.a 00:01:23.420 [412/745] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:01:23.420 [413/745] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:23.420 [414/745] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:01:23.420 [415/745] Generating lib/rte_pipeline_mingw with a custom command 00:01:23.679 [416/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 
00:01:23.679 [417/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:01:23.679 [418/745] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:01:23.679 [419/745] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:01:23.679 [420/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:01:23.679 [421/745] Linking static target lib/librte_fib.a 00:01:23.679 [422/745] Generating lib/rte_graph_def with a custom command 00:01:23.679 [423/745] Generating lib/rte_graph_mingw with a custom command 00:01:23.679 [424/745] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:23.943 [425/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:01:23.943 [426/745] Linking static target lib/librte_eventdev.a 00:01:23.943 [427/745] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:01:23.943 [428/745] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:01:23.943 [429/745] Linking static target lib/librte_member.a 00:01:23.943 [430/745] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:01:23.943 [431/745] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:01:23.943 [432/745] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:01:23.943 [433/745] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:01:23.943 [434/745] Compiling C object lib/librte_node.a.p/node_null.c.o 00:01:23.943 [435/745] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:01:23.943 [436/745] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:01:23.943 [437/745] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:24.205 [438/745] Generating lib/rte_node_def with a custom command 00:01:24.205 [439/745] Generating lib/rte_node_mingw with a custom command 00:01:24.205 [440/745] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:24.205 [441/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:24.205 [442/745] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:01:24.470 [443/745] Generating drivers/rte_bus_pci_def with a custom command 00:01:24.470 [444/745] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:24.470 [445/745] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:01:24.470 [446/745] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:24.470 [447/745] Linking static target lib/librte_sched.a 00:01:24.470 [448/745] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:01:24.470 [449/745] Generating drivers/rte_bus_pci_mingw with a custom command 00:01:24.470 [450/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:24.470 [451/745] Generating drivers/rte_bus_vdev_def with a custom command 00:01:24.470 [452/745] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:24.470 [453/745] Generating drivers/rte_bus_vdev_mingw with a custom command 00:01:24.470 [454/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:01:24.470 [455/745] Generating drivers/rte_mempool_ring_def with a custom command 00:01:24.470 [456/745] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:24.470 [457/745] Generating 
drivers/rte_mempool_ring_mingw with a custom command 00:01:24.470 [458/745] Linking static target lib/librte_cryptodev.a 00:01:24.470 [459/745] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:01:24.470 [460/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:01:24.732 [461/745] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:01:24.733 [462/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:24.733 [463/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:24.733 [464/745] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:01:24.733 [465/745] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:01:24.733 [466/745] Linking static target lib/librte_pdump.a 00:01:24.733 [467/745] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:01:24.733 [468/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:01:24.733 [469/745] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:01:24.994 [470/745] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:24.994 [471/745] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:24.994 [472/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:01:24.994 [473/745] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:01:24.994 [474/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:24.994 [475/745] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:01:24.994 [476/745] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:01:24.994 [477/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:01:24.994 [478/745] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.258 [479/745] Generating drivers/rte_net_i40e_def with a custom command 00:01:25.258 [480/745] Compiling C object lib/librte_node.a.p/node_log.c.o 00:01:25.258 [481/745] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:01:25.258 [482/745] Generating drivers/rte_net_i40e_mingw with a custom command 00:01:25.258 [483/745] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:25.258 [484/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:01:25.258 [485/745] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.258 [486/745] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:25.258 [487/745] Linking static target drivers/librte_bus_vdev.a 00:01:25.258 [488/745] Linking static target lib/librte_table.a 00:01:25.258 [489/745] Compiling C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:25.516 [490/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:01:25.516 [491/745] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:01:25.516 [492/745] Linking static target lib/librte_ipsec.a 00:01:25.516 [493/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:25.517 [494/745] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:25.777 [495/745] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.777 [496/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:01:25.777 [497/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 
00:01:25.777 [498/745] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:01:25.777 [499/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:01:25.777 [500/745] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:01:26.040 [501/745] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:01:26.040 [502/745] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:26.040 [503/745] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:01:26.040 [504/745] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:26.040 [505/745] Linking static target drivers/librte_bus_pci.a 00:01:26.040 [506/745] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:01:26.040 [507/745] Compiling C object drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:26.040 [508/745] Linking static target lib/librte_graph.a 00:01:26.040 [509/745] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:01:26.040 [510/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:01:26.040 [511/745] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.305 [512/745] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:01:26.305 [513/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:01:26.569 [514/745] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.569 [515/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:01:26.569 [516/745] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.831 [517/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:01:26.831 [518/745] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:01:26.832 [519/745] Linking static target lib/librte_port.a 00:01:26.832 [520/745] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.832 [521/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:01:26.832 [522/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:01:27.092 [523/745] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:01:27.092 [524/745] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:27.092 [525/745] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:27.092 [526/745] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:01:27.355 [527/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:01:27.355 [528/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:01:27.355 [529/745] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:01:27.621 [530/745] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:27.621 [531/745] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:27.621 [532/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:01:27.621 [533/745] Linking static target drivers/librte_mempool_ring.a 00:01:27.621 [534/745] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:27.621 [535/745] Compiling C 
object app/dpdk-proc-info.p/proc-info_main.c.o 00:01:27.621 [536/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:01:27.621 [537/745] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:01:27.881 [538/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:01:27.881 [539/745] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:01:27.881 [540/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:01:28.144 [541/745] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:28.144 [542/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:01:28.144 [543/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:01:28.406 [544/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:01:28.406 [545/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:01:28.406 [546/745] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:01:28.406 [547/745] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:01:28.406 [548/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:01:28.665 [549/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:01:28.665 [550/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:01:28.665 [551/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:01:28.930 [552/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:01:28.930 [553/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:01:29.193 [554/745] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:01:29.193 [555/745] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:01:29.193 [556/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:01:29.193 [557/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:01:29.454 [558/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:01:29.454 [559/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:01:29.733 [560/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:01:29.733 [561/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:01:29.733 [562/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:01:29.733 [563/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:01:29.733 [564/745] Linking static target drivers/net/i40e/base/libi40e_base.a 00:01:29.733 [565/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:01:30.007 [566/745] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:01:30.007 [567/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:01:30.007 [568/745] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:01:30.007 [569/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:01:30.007 [570/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:01:30.007 [571/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 
00:01:30.268 [572/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:01:30.269 [573/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:01:30.269 [574/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:01:30.532 [575/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:01:30.532 [576/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:01:30.532 [577/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:01:30.532 [578/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:01:30.532 [579/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:01:30.532 [580/745] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:01:30.532 [581/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:01:30.793 [582/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:01:30.793 [583/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:01:30.793 [584/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:01:31.054 [585/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:01:31.054 [586/745] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.316 [587/745] Linking target lib/librte_eal.so.23.0 00:01:31.316 [588/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:01:31.316 [589/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:01:31.316 [590/745] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols 00:01:31.581 [591/745] Linking target lib/librte_ring.so.23.0 00:01:31.581 [592/745] Linking target lib/librte_meter.so.23.0 00:01:31.581 [593/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:01:31.581 [594/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:01:31.581 [595/745] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:01:31.581 [596/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:01:31.581 [597/745] Linking target lib/librte_timer.so.23.0 00:01:31.581 [598/745] Linking target lib/librte_pci.so.23.0 00:01:31.581 [599/745] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.581 [600/745] Linking target lib/librte_acl.so.23.0 00:01:31.581 [601/745] Linking target lib/librte_cfgfile.so.23.0 00:01:31.581 [602/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:01:31.839 [603/745] Linking target lib/librte_jobstats.so.23.0 00:01:31.839 [604/745] Linking target lib/librte_rawdev.so.23.0 00:01:31.839 [605/745] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols 00:01:31.839 [606/745] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols 00:01:31.839 [607/745] Linking target lib/librte_dmadev.so.23.0 00:01:31.839 [608/745] Linking target lib/librte_stack.so.23.0 00:01:31.839 [609/745] Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols 00:01:31.839 [610/745] Linking target lib/librte_rcu.so.23.0 00:01:31.839 [611/745] Linking target lib/librte_mempool.so.23.0 00:01:31.839 [612/745] Linking target 
lib/librte_graph.so.23.0 00:01:31.839 [613/745] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols 00:01:31.839 [614/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:01:31.839 [615/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:01:31.839 [616/745] Linking target drivers/librte_bus_vdev.so.23.0 00:01:31.839 [617/745] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols 00:01:31.839 [618/745] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:01:31.839 [619/745] Linking target drivers/librte_bus_pci.so.23.0 00:01:31.839 [620/745] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:01:31.839 [621/745] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:01:31.839 [622/745] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols 00:01:32.103 [623/745] Generating symbol file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols 00:01:32.103 [624/745] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:01:32.103 [625/745] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols 00:01:32.103 [626/745] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols 00:01:32.103 [627/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:01:32.103 [628/745] Linking target drivers/librte_mempool_ring.so.23.0 00:01:32.103 [629/745] Linking target lib/librte_rib.so.23.0 00:01:32.103 [630/745] Linking target lib/librte_mbuf.so.23.0 00:01:32.103 [631/745] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:01:32.103 [632/745] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols 00:01:32.103 [633/745] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols 00:01:32.103 [634/745] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:01:32.103 [635/745] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols 00:01:32.362 [636/745] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols 00:01:32.362 [637/745] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:01:32.362 [638/745] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:01:32.362 [639/745] Linking target lib/librte_distributor.so.23.0 00:01:32.362 [640/745] Linking target lib/librte_compressdev.so.23.0 00:01:32.362 [641/745] Linking target lib/librte_bbdev.so.23.0 00:01:32.362 [642/745] Linking target lib/librte_gpudev.so.23.0 00:01:32.362 [643/745] Linking target lib/librte_net.so.23.0 00:01:32.362 [644/745] Linking target lib/librte_reorder.so.23.0 00:01:32.362 [645/745] Linking target lib/librte_sched.so.23.0 00:01:32.362 [646/745] Linking target lib/librte_regexdev.so.23.0 00:01:32.362 [647/745] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:01:32.362 [648/745] Linking target lib/librte_cryptodev.so.23.0 00:01:32.362 [649/745] Linking target lib/librte_fib.so.23.0 00:01:32.362 [650/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:01:32.362 [651/745] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:01:32.362 [652/745] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols 00:01:32.362 [653/745] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols 00:01:32.362 [654/745] Compiling C object 
app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:01:32.362 [655/745] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:01:32.362 [656/745] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols 00:01:32.621 [657/745] Linking target lib/librte_cmdline.so.23.0 00:01:32.621 [658/745] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:01:32.621 [659/745] Linking target lib/librte_ethdev.so.23.0 00:01:32.621 [660/745] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:01:32.621 [661/745] Linking target lib/librte_security.so.23.0 00:01:32.621 [662/745] Linking target lib/librte_hash.so.23.0 00:01:32.621 [663/745] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:01:32.621 [664/745] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols 00:01:32.621 [665/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:01:32.621 [666/745] Generating symbol file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols 00:01:32.621 [667/745] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols 00:01:32.621 [668/745] Linking target lib/librte_gro.so.23.0 00:01:32.621 [669/745] Linking target lib/librte_bpf.so.23.0 00:01:32.621 [670/745] Linking target lib/librte_metrics.so.23.0 00:01:32.621 [671/745] Linking target lib/librte_lpm.so.23.0 00:01:32.621 [672/745] Linking target lib/librte_pcapng.so.23.0 00:01:32.621 [673/745] Linking target lib/librte_efd.so.23.0 00:01:32.621 [674/745] Linking target lib/librte_gso.so.23.0 00:01:32.621 [675/745] Linking target lib/librte_power.so.23.0 00:01:32.622 [676/745] Linking target lib/librte_ipsec.so.23.0 00:01:32.880 [677/745] Linking target lib/librte_member.so.23.0 00:01:32.880 [678/745] Linking target lib/librte_ip_frag.so.23.0 00:01:32.880 [679/745] Linking target lib/librte_eventdev.so.23.0 00:01:32.880 [680/745] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols 00:01:32.880 [681/745] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols 00:01:32.880 [682/745] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols 00:01:32.880 [683/745] Linking target lib/librte_pdump.so.23.0 00:01:32.880 [684/745] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols 00:01:32.880 [685/745] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols 00:01:32.880 [686/745] Linking target lib/librte_bitratestats.so.23.0 00:01:32.880 [687/745] Linking target lib/librte_latencystats.so.23.0 00:01:32.880 [688/745] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols 00:01:32.880 [689/745] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:01:33.138 [690/745] Linking target lib/librte_port.so.23.0 00:01:33.138 [691/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:01:33.138 [692/745] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols 00:01:33.138 [693/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:01:33.138 [694/745] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:01:33.138 [695/745] Linking target lib/librte_table.so.23.0 00:01:33.138 [696/745] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:01:33.396 [697/745] Generating symbol file 
lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols 00:01:33.396 [698/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:01:33.963 [699/745] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:01:33.963 [700/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:01:33.963 [701/745] Linking static target drivers/libtmp_rte_net_i40e.a 00:01:33.963 [702/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:01:33.963 [703/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:01:34.221 [704/745] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:01:34.221 [705/745] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:01:34.221 [706/745] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:01:34.221 [707/745] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:01:34.221 [708/745] Linking static target drivers/librte_net_i40e.a 00:01:34.788 [709/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:01:34.788 [710/745] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:01:34.788 [711/745] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.788 [712/745] Linking target drivers/librte_net_i40e.so.23.0 00:01:36.161 [713/745] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:01:36.161 [714/745] Linking static target lib/librte_node.a 00:01:36.419 [715/745] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.419 [716/745] Linking target lib/librte_node.so.23.0 00:01:36.419 [717/745] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:01:36.985 [718/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:01:37.243 [719/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:01:45.350 [720/745] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:17.430 [721/745] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:17.430 [722/745] Linking static target lib/librte_vhost.a 00:02:17.430 [723/745] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.430 [724/745] Linking target lib/librte_vhost.so.23.0 00:02:29.624 [725/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:02:29.624 [726/745] Linking static target lib/librte_pipeline.a 00:02:29.624 [727/745] Linking target app/dpdk-dumpcap 00:02:29.624 [728/745] Linking target app/dpdk-test-flow-perf 00:02:29.624 [729/745] Linking target app/dpdk-test-cmdline 00:02:29.624 [730/745] Linking target app/dpdk-test-pipeline 00:02:29.624 [731/745] Linking target app/dpdk-test-security-perf 00:02:29.624 [732/745] Linking target app/dpdk-pdump 00:02:29.624 [733/745] Linking target app/dpdk-proc-info 00:02:29.624 [734/745] Linking target app/dpdk-test-sad 00:02:29.624 [735/745] Linking target app/dpdk-test-acl 00:02:29.624 [736/745] Linking target app/dpdk-test-fib 00:02:29.624 [737/745] Linking target app/dpdk-test-gpudev 00:02:29.624 [738/745] Linking target app/dpdk-test-bbdev 00:02:29.624 [739/745] Linking target app/dpdk-test-eventdev 00:02:29.624 [740/745] Linking target app/dpdk-test-compress-perf 00:02:29.624 [741/745] Linking target app/dpdk-test-crypto-perf 00:02:29.624 [742/745] Linking target 
app/dpdk-test-regex 00:02:29.624 [743/745] Linking target app/dpdk-testpmd 00:02:31.018 [744/745] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.275 [745/745] Linking target lib/librte_pipeline.so.23.0 00:02:31.275 07:49:22 build_native_dpdk -- common/autobuild_common.sh@188 -- $ uname -s 00:02:31.275 07:49:22 build_native_dpdk -- common/autobuild_common.sh@188 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:02:31.275 07:49:22 build_native_dpdk -- common/autobuild_common.sh@201 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 install 00:02:31.275 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:02:31.275 [0/1] Installing files. 00:02:31.536 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples 00:02:31.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:31.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:31.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:31.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:31.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:31.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:31.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:31.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:31.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:31.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:31.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:31.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:31.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:31.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 
00:02:31.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:31.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:31.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:31.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:31.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:31.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:31.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:31.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:31.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:31.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:31.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:31.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:31.536 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:31.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:31.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:31.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:31.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:31.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:31.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:31.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:31.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:31.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:31.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:31.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:31.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:02:31.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/flow_classify.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:02:31.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/ipv4_rules_file.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:02:31.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:31.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:31.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:31.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:31.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:31.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:31.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:31.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:31.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:31.537 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:31.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:31.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:31.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:31.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:31.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:31.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:31.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/pkt_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common 00:02:31.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/neon 00:02:31.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec 00:02:31.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/sse 00:02:31.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:31.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:31.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:31.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:31.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:31.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:31.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:31.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:31.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:31.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:31.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:31.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:31.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:31.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:31.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:31.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:31.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:31.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:31.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:31.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:31.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:31.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:31.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:31.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:31.537 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:31.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:31.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:31.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:31.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:31.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:31.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:31.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:31.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:31.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:31.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:31.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:31.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:31.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:31.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:31.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:31.537 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:31.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:31.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:31.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:31.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:31.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:31.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:31.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:31.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:31.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:31.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:31.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:31.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:31.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:31.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:31.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:31.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:31.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:31.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:31.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:31.538 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:31.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:31.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:31.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:31.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:31.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:31.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:31.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:31.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:31.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:31.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:31.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:31.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:31.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:31.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:31.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:31.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:31.538 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:31.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:31.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:31.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:31.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:31.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:31.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:31.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:31.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:31.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process 00:02:31.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:31.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:31.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:31.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:31.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:31.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:31.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:31.538 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:31.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:31.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:31.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:02:31.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:31.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:31.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:31.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:31.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:31.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:31.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:31.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:31.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:02:31.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:31.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:31.538 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:31.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:31.538 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:31.539 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:31.539 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:31.539 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:31.539 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:31.539 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:31.539 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:31.539 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:31.539 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:31.539 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:31.539 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:31.539 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:31.539 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:31.539 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:31.539 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:31.539 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:31.539 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:31.539 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:31.539 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:31.539 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:31.539 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:31.539 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:31.539 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:31.539 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:31.539 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:31.539 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:31.539 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:31.539 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:31.539 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:31.539 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:31.539 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:31.539 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:31.539 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:31.539 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:31.539 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:31.539 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:31.539 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:31.539 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:31.539 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:31.539 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:31.539 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:31.539 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:31.539 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:31.539 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:31.539 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:31.539 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:31.539 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:31.539 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:31.539 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:31.539 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:31.539 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:31.539 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:31.539 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:31.539 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:31.539 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:31.539 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:31.539 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:31.539 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:31.539 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:31.539 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:31.539 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:31.539 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:31.539 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:31.539 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:31.539 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:31.539 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:31.539 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:31.539 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:31.539 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:31.539 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:31.539 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:31.539 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:31.539 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:31.539 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:31.539 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:31.540 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:31.540 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:31.540 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:31.540 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:31.540 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd 00:02:31.540 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:31.540 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:31.540 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:31.540 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:31.540 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:31.540 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:31.540 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/node/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/node 00:02:31.540 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/node/node.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/node 00:02:31.540 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:02:31.540 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:31.540 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:31.540 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:31.540 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:31.540 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:31.540 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:31.540 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:31.540 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:31.540 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/kni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:31.540 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:31.540 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:31.540 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:31.540 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:31.540 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:31.540 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:31.540 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:31.540 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:31.540 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:31.540 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:31.540 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:31.540 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:31.540 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:31.540 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:31.540 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:31.540 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:31.540 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:31.540 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:31.540 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:31.540 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/kni.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:31.540 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:31.540 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:31.540 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:31.540 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/kni.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:31.540 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:31.540 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:31.540 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:31.540 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:31.540 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:31.540 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:31.540 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:31.540 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:31.540 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:31.540 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:31.540 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:31.540 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:31.540 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:31.540 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:31.541 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 
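Note: the example sources being installed under build/share/dpdk/examples above are plain Makefile projects; as a minimal sketch (not part of this log's output), one of them could typically be rebuilt against this freshly installed tree, assuming the generated pkg-config files landed in build/lib/pkgconfig (that path is not shown in this log):
  # point pkg-config at the just-installed DPDK prefix (assumed location)
  export PKG_CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig
  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly
  make   # the example Makefiles resolve libdpdk cflags/libs via pkg-config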
00:02:31.541 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:31.541 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:31.541 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:31.541 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:31.541 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:31.541 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:31.541 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:31.541 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:31.541 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:31.541 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:31.541 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:31.541 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:31.541 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:31.541 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:31.541 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:31.541 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:31.541 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:31.541 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:31.541 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:31.541 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:31.541 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:31.541 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:31.541 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:31.541 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:31.541 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:31.541 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:31.541 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:31.541 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:31.541 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:31.541 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:31.541 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:31.541 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:31.541 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:31.541 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:31.541 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:31.541 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:31.541 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:31.541 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:31.541 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:31.541 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:31.541 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:31.541 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:31.541 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:31.541 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:31.541 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:31.541 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:31.541 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:31.541 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:31.541 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:31.541 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:31.541 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:31.541 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:31.541 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:31.541 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:31.541 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool 00:02:31.541 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:31.541 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:31.541 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:31.541 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:31.541 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:31.541 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:31.541 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:31.541 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:31.541 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:31.541 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:31.541 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:31.541 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:02:31.541 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:02:31.541 Installing lib/librte_kvargs.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:31.541 Installing lib/librte_kvargs.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:31.541 Installing lib/librte_telemetry.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:31.541 
Installing lib/librte_telemetry.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:31.541 Installing lib/librte_eal.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:31.541 Installing lib/librte_eal.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:31.541 Installing lib/librte_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:31.541 Installing lib/librte_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:31.541 Installing lib/librte_rcu.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:31.541 Installing lib/librte_rcu.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:31.542 Installing lib/librte_mempool.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:31.800 Installing lib/librte_mempool.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:31.800 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:31.800 Installing lib/librte_mbuf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:31.800 Installing lib/librte_net.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:31.800 Installing lib/librte_net.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:31.800 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:31.800 Installing lib/librte_meter.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:31.800 Installing lib/librte_ethdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:31.800 Installing lib/librte_ethdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:31.801 Installing lib/librte_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:31.801 Installing lib/librte_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:31.801 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:31.801 Installing lib/librte_cmdline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:31.801 Installing lib/librte_metrics.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:31.801 Installing lib/librte_metrics.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:31.801 Installing lib/librte_hash.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:31.801 Installing lib/librte_hash.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:31.801 Installing lib/librte_timer.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:31.801 Installing lib/librte_timer.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:31.801 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:31.801 Installing lib/librte_acl.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:31.801 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:31.801 Installing lib/librte_bbdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:31.801 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:31.801 Installing lib/librte_bitratestats.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:31.801 Installing lib/librte_bpf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:31.801 Installing lib/librte_bpf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:31.801 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:31.801 Installing lib/librte_cfgfile.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:31.801 Installing lib/librte_compressdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:31.801 Installing lib/librte_compressdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:31.801 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:31.801 Installing lib/librte_cryptodev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:31.801 Installing lib/librte_distributor.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:31.801 Installing lib/librte_distributor.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:31.801 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:31.801 Installing lib/librte_efd.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:31.801 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:31.801 Installing lib/librte_eventdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:31.801 Installing lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:31.801 Installing lib/librte_gpudev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:31.801 Installing lib/librte_gro.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:31.801 Installing lib/librte_gro.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:31.801 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:31.801 Installing lib/librte_gso.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:31.801 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:31.801 Installing lib/librte_ip_frag.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:31.801 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:31.801 Installing lib/librte_jobstats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:31.801 Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:31.801 Installing lib/librte_latencystats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:31.801 Installing lib/librte_lpm.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:31.801 Installing lib/librte_lpm.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:31.801 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:31.801 Installing lib/librte_member.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:31.801 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:31.801 Installing lib/librte_pcapng.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:31.801 Installing lib/librte_power.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:31.801 Installing lib/librte_power.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:31.801 Installing lib/librte_rawdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:31.801 Installing lib/librte_rawdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:31.801 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:31.801 Installing lib/librte_regexdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:31.801 Installing lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:31.801 Installing lib/librte_dmadev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:31.801 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:31.801 Installing lib/librte_rib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:31.801 Installing lib/librte_reorder.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:31.801 Installing lib/librte_reorder.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:31.801 Installing lib/librte_sched.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:31.801 Installing lib/librte_sched.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:31.801 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:31.801 Installing lib/librte_security.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:31.801 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:31.801 Installing lib/librte_stack.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:31.801 Installing lib/librte_vhost.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:31.801 Installing lib/librte_vhost.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:31.801 Installing lib/librte_ipsec.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:31.801 Installing lib/librte_ipsec.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:31.801 Installing lib/librte_fib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:31.801 Installing lib/librte_fib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:31.801 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:31.801 Installing lib/librte_port.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:31.801 Installing lib/librte_pdump.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:31.801 Installing lib/librte_pdump.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:32.063 Installing lib/librte_table.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:32.063 Installing lib/librte_table.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:32.063 Installing lib/librte_pipeline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:32.063 Installing lib/librte_pipeline.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:32.063 Installing lib/librte_graph.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:32.063 Installing lib/librte_graph.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:32.063 Installing lib/librte_node.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:32.063 Installing lib/librte_node.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:32.063 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:32.063 Installing drivers/librte_bus_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:02:32.063 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:32.063 Installing drivers/librte_bus_vdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:02:32.063 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:32.063 Installing drivers/librte_mempool_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:02:32.063 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:32.063 Installing drivers/librte_net_i40e.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:02:32.063 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:32.063 Installing app/dpdk-pdump to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:32.063 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:32.063 Installing app/dpdk-test-acl to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:32.063 Installing app/dpdk-test-bbdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:32.063 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:32.063 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:32.063 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:32.063 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:32.063 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:32.063 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:32.063 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:32.063 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:32.063 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:32.063 Installing app/dpdk-test-regex to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:32.063 Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:32.063 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:32.063 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.063 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.063 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.063 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:32.063 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:32.063 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:32.063 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:32.063 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:32.063 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:32.063 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:32.063 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:32.063 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:32.063 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:32.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:32.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:32.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.064 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_log.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.064 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.064 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.065 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.065 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.065 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.065 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.065 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gso/rte_gso.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_empty_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_intel_uncore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security_driver.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:02:32.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:02:32.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.066 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:32.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:32.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:32.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:32.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:02:32.067 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:02:32.067 Installing symlink pointing to librte_kvargs.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so.23 00:02:32.067 Installing symlink pointing to librte_kvargs.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so 00:02:32.067 Installing symlink pointing to librte_telemetry.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so.23 00:02:32.067 Installing symlink pointing to librte_telemetry.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so 00:02:32.067 Installing symlink pointing to librte_eal.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so.23 00:02:32.067 Installing symlink pointing to librte_eal.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so 00:02:32.067 Installing symlink pointing to librte_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so.23 00:02:32.067 Installing symlink pointing to librte_ring.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so 00:02:32.067 Installing symlink pointing to librte_rcu.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so.23 00:02:32.067 Installing symlink pointing to librte_rcu.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so 00:02:32.067 Installing symlink pointing to librte_mempool.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so.23 00:02:32.067 Installing symlink pointing to librte_mempool.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so 00:02:32.067 Installing symlink pointing to librte_mbuf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so.23 00:02:32.067 Installing symlink pointing to librte_mbuf.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:02:32.067 Installing symlink pointing to librte_net.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so.23 00:02:32.067 Installing symlink pointing to librte_net.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so 00:02:32.067 Installing symlink pointing to librte_meter.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so.23 00:02:32.067 Installing symlink pointing to librte_meter.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so 00:02:32.067 Installing symlink pointing to librte_ethdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so.23 00:02:32.067 Installing symlink pointing to librte_ethdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:02:32.067 Installing symlink pointing to librte_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so.23 00:02:32.067 Installing symlink pointing to librte_pci.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so 00:02:32.067 Installing symlink pointing to librte_cmdline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so.23 00:02:32.067 Installing symlink pointing to librte_cmdline.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:02:32.067 Installing symlink pointing to librte_metrics.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so.23 00:02:32.067 Installing symlink pointing to librte_metrics.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so 00:02:32.067 Installing symlink pointing to librte_hash.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so.23 00:02:32.067 Installing symlink pointing to librte_hash.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so 00:02:32.067 Installing symlink pointing to librte_timer.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so.23 00:02:32.067 Installing symlink pointing to librte_timer.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so 00:02:32.067 Installing symlink pointing to librte_acl.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so.23 00:02:32.067 Installing symlink pointing to librte_acl.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so 00:02:32.067 Installing symlink pointing to librte_bbdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so.23 00:02:32.067 Installing symlink pointing to librte_bbdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:02:32.067 Installing symlink pointing to librte_bitratestats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so.23 00:02:32.067 Installing symlink pointing to librte_bitratestats.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:02:32.067 Installing symlink pointing to librte_bpf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so.23 00:02:32.067 Installing symlink pointing to librte_bpf.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so 00:02:32.067 Installing symlink pointing to librte_cfgfile.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so.23 00:02:32.067 Installing symlink pointing to librte_cfgfile.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so 00:02:32.067 Installing symlink pointing to librte_compressdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so.23 00:02:32.067 Installing symlink pointing to librte_compressdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:02:32.067 Installing symlink pointing to librte_cryptodev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so.23 00:02:32.067 Installing symlink pointing to librte_cryptodev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:02:32.067 Installing symlink pointing to librte_distributor.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so.23 00:02:32.067 Installing symlink pointing to librte_distributor.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so 00:02:32.067 Installing symlink pointing to librte_efd.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so.23 00:02:32.067 Installing symlink pointing to librte_efd.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so 00:02:32.067 Installing symlink pointing to librte_eventdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so.23 00:02:32.068 Installing symlink pointing to librte_eventdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:02:32.068 Installing symlink pointing to librte_gpudev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so.23 00:02:32.068 Installing symlink pointing to librte_gpudev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:02:32.068 Installing symlink pointing to librte_gro.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so.23 00:02:32.068 Installing symlink pointing to librte_gro.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so 00:02:32.068 Installing symlink pointing to librte_gso.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so.23 00:02:32.068 Installing symlink pointing to librte_gso.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so 00:02:32.068 Installing symlink pointing to librte_ip_frag.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so.23 00:02:32.068 Installing symlink pointing to librte_ip_frag.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:02:32.068 Installing symlink pointing to librte_jobstats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so.23 00:02:32.068 Installing symlink pointing to librte_jobstats.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:02:32.068 Installing symlink pointing to librte_latencystats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so.23 00:02:32.068 Installing symlink pointing to librte_latencystats.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:02:32.068 Installing symlink pointing to 
librte_lpm.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so.23 00:02:32.068 Installing symlink pointing to librte_lpm.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so 00:02:32.068 Installing symlink pointing to librte_member.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so.23 00:02:32.068 Installing symlink pointing to librte_member.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so 00:02:32.068 Installing symlink pointing to librte_pcapng.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so.23 00:02:32.068 Installing symlink pointing to librte_pcapng.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:02:32.068 Installing symlink pointing to librte_power.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so.23 00:02:32.068 Installing symlink pointing to librte_power.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so 00:02:32.068 Installing symlink pointing to librte_rawdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so.23 00:02:32.068 Installing symlink pointing to librte_rawdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:02:32.068 Installing symlink pointing to librte_regexdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so.23 00:02:32.068 Installing symlink pointing to librte_regexdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:02:32.068 Installing symlink pointing to librte_dmadev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so.23 00:02:32.068 Installing symlink pointing to librte_dmadev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:02:32.068 Installing symlink pointing to librte_rib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so.23 00:02:32.068 Installing symlink pointing to librte_rib.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so 00:02:32.068 Installing symlink pointing to librte_reorder.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so.23 00:02:32.068 Installing symlink pointing to librte_reorder.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so 00:02:32.068 Installing symlink pointing to librte_sched.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so.23 00:02:32.068 Installing symlink pointing to librte_sched.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so 00:02:32.068 Installing symlink pointing to librte_security.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so.23 00:02:32.068 Installing symlink pointing to librte_security.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so 00:02:32.068 Installing symlink pointing to librte_stack.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so.23 00:02:32.068 Installing symlink pointing to librte_stack.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so 00:02:32.068 Installing symlink pointing to librte_vhost.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so.23 00:02:32.068 Installing symlink pointing to librte_vhost.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so 00:02:32.068 Installing symlink pointing to librte_ipsec.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so.23 00:02:32.068 Installing symlink pointing to librte_ipsec.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:02:32.068 Installing symlink pointing to librte_fib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so.23 00:02:32.068 Installing symlink pointing to librte_fib.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so 00:02:32.068 Installing symlink pointing to librte_port.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so.23 00:02:32.068 Installing symlink pointing to librte_port.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so 00:02:32.068 Installing symlink pointing to librte_pdump.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so.23 00:02:32.068 Installing symlink pointing to librte_pdump.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so 00:02:32.068 Installing symlink pointing to librte_table.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so.23 00:02:32.068 Installing symlink pointing to librte_table.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so 00:02:32.068 Installing symlink pointing to librte_pipeline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so.23 00:02:32.068 Installing symlink pointing to librte_pipeline.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:02:32.068 Installing symlink pointing to librte_graph.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so.23 00:02:32.068 Installing symlink pointing to librte_graph.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so 00:02:32.068 Installing symlink pointing to librte_node.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so.23 00:02:32.068 Installing symlink pointing to librte_node.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so 00:02:32.068 Installing symlink pointing to librte_bus_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23 00:02:32.068 Installing symlink pointing to librte_bus_pci.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:02:32.068 Installing symlink pointing to librte_bus_vdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23 00:02:32.068 Installing symlink pointing to librte_bus_vdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:02:32.327 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so' 00:02:32.327 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23' 00:02:32.327 './librte_bus_pci.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23.0' 00:02:32.327 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so' 00:02:32.327 './librte_bus_vdev.so.23' -> 
'dpdk/pmds-23.0/librte_bus_vdev.so.23' 00:02:32.327 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0' 00:02:32.327 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so' 00:02:32.327 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23' 00:02:32.327 './librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0' 00:02:32.327 './librte_net_i40e.so' -> 'dpdk/pmds-23.0/librte_net_i40e.so' 00:02:32.327 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23' 00:02:32.327 './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0' 00:02:32.327 Installing symlink pointing to librte_mempool_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23 00:02:32.327 Installing symlink pointing to librte_mempool_ring.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:02:32.327 Installing symlink pointing to librte_net_i40e.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23 00:02:32.327 Installing symlink pointing to librte_net_i40e.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:02:32.327 Running custom install script '/bin/sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0' 00:02:32.327 07:49:23 build_native_dpdk -- common/autobuild_common.sh@207 -- $ cat 00:02:32.327 07:49:23 build_native_dpdk -- common/autobuild_common.sh@212 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:32.327 00:02:32.327 real 1m22.472s 00:02:32.327 user 14m22.720s 00:02:32.327 sys 1m48.785s 00:02:32.327 07:49:23 build_native_dpdk -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:02:32.327 07:49:23 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:02:32.327 ************************************ 00:02:32.327 END TEST build_native_dpdk 00:02:32.327 ************************************ 00:02:32.327 07:49:23 -- common/autotest_common.sh@1142 -- $ return 0 00:02:32.327 07:49:23 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:32.327 07:49:23 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:32.327 07:49:23 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:32.327 07:49:23 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:32.327 07:49:23 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:32.327 07:49:23 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:32.327 07:49:23 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:32.327 07:49:23 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --with-shared 00:02:32.327 Using /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig for additional libs... 
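The "Installing symlink pointing to ..." entries above follow the standard ELF shared-library layout: the real file carries the full version (librte_kvargs.so.23.0), the soname link (librte_kvargs.so.23) serves the runtime loader, and the unversioned link (librte_kvargs.so) serves the compile-time linker. A minimal sketch of the same chain by hand, plus pkg-config discovery of the staged install through the libdpdk.pc placed under dpdk/build/lib/pkgconfig above -- illustrative only, with librte_kvargs standing in for any of the libraries listed:
  # Recreate one three-level symlink chain, as the meson install above does.
  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
  ln -sf librte_kvargs.so.23.0 librte_kvargs.so.23   # runtime (soname) link
  ln -sf librte_kvargs.so.23 librte_kvargs.so        # development (link-time) link
  # Downstream builds, like the SPDK configure step here, resolve this staged
  # DPDK through its pkg-config metadata rather than system paths:
  export PKG_CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig
  pkg-config --modversion libdpdk        # version of the staged DPDK build
  pkg-config --cflags --libs libdpdk     # flags a consumer would compile/link with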
00:02:32.327 DPDK libraries: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:32.327 DPDK includes: //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:32.585 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:02:32.843 Using 'verbs' RDMA provider 00:02:43.416 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:02:51.546 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:02:51.546 Creating mk/config.mk...done. 00:02:51.546 Creating mk/cc.flags.mk...done. 00:02:51.546 Type 'make' to build. 00:02:51.546 07:49:43 -- spdk/autobuild.sh@69 -- $ run_test make make -j48 00:02:51.546 07:49:43 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:02:51.546 07:49:43 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:51.546 07:49:43 -- common/autotest_common.sh@10 -- $ set +x 00:02:51.546 ************************************ 00:02:51.546 START TEST make 00:02:51.546 ************************************ 00:02:51.546 07:49:43 make -- common/autotest_common.sh@1123 -- $ make -j48 00:02:51.809 make[1]: Nothing to be done for 'all'. 00:02:53.209 The Meson build system 00:02:53.209 Version: 1.3.1 00:02:53.209 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:02:53.209 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:53.209 Build type: native build 00:02:53.209 Project name: libvfio-user 00:02:53.209 Project version: 0.0.1 00:02:53.210 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:53.210 C linker for the host machine: gcc ld.bfd 2.39-16 00:02:53.210 Host machine cpu family: x86_64 00:02:53.210 Host machine cpu: x86_64 00:02:53.210 Run-time dependency threads found: YES 00:02:53.210 Library dl found: YES 00:02:53.210 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:53.210 Run-time dependency json-c found: YES 0.17 00:02:53.210 Run-time dependency cmocka found: YES 1.1.7 00:02:53.210 Program pytest-3 found: NO 00:02:53.210 Program flake8 found: NO 00:02:53.210 Program misspell-fixer found: NO 00:02:53.210 Program restructuredtext-lint found: NO 00:02:53.210 Program valgrind found: YES (/usr/bin/valgrind) 00:02:53.210 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:53.210 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:53.210 Compiler for C supports arguments -Wwrite-strings: YES 00:02:53.210 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:02:53.210 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:02:53.210 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:02:53.210 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:02:53.210 Build targets in project: 8 00:02:53.210 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:02:53.210 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:02:53.210 00:02:53.210 libvfio-user 0.0.1 00:02:53.210 00:02:53.210 User defined options 00:02:53.210 buildtype : debug 00:02:53.210 default_library: shared 00:02:53.210 libdir : /usr/local/lib 00:02:53.210 00:02:53.210 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:54.158 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:54.158 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:02:54.419 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:02:54.419 [3/37] Compiling C object samples/lspci.p/lspci.c.o 00:02:54.419 [4/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:02:54.419 [5/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:02:54.419 [6/37] Compiling C object test/unit_tests.p/mocks.c.o 00:02:54.419 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:02:54.419 [8/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:02:54.419 [9/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:02:54.419 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:02:54.419 [11/37] Compiling C object samples/null.p/null.c.o 00:02:54.419 [12/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:02:54.419 [13/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:02:54.419 [14/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:02:54.419 [15/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:02:54.419 [16/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:02:54.419 [17/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:02:54.419 [18/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:02:54.419 [19/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:02:54.419 [20/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:02:54.419 [21/37] Compiling C object samples/server.p/server.c.o 00:02:54.419 [22/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:02:54.419 [23/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:02:54.419 [24/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:02:54.419 [25/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:02:54.419 [26/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:02:54.419 [27/37] Compiling C object samples/client.p/client.c.o 00:02:54.682 [28/37] Linking target lib/libvfio-user.so.0.0.1 00:02:54.682 [29/37] Linking target samples/client 00:02:54.682 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:02:54.682 [31/37] Linking target test/unit_tests 00:02:54.682 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:02:54.943 [33/37] Linking target samples/server 00:02:54.943 [34/37] Linking target samples/gpio-pci-idio-16 00:02:54.943 [35/37] Linking target samples/null 00:02:54.943 [36/37] Linking target samples/lspci 00:02:54.943 [37/37] Linking target samples/shadow_ioeventfd_server 00:02:54.943 INFO: autodetecting backend as ninja 00:02:54.943 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
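The libvfio-user configure-and-build sequence above is an ordinary meson + ninja flow; for reproduction outside the CI wrapper, a minimal sketch using the source/build directories and user options printed in the summary (buildtype debug, default_library shared, libdir /usr/local/lib) -- a hedged equivalent, not the harness's exact invocation:
  # Configure (dirs match the 'Source dir' / 'Build dir' lines above).
  meson setup /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user \
      --buildtype debug --default-library shared --libdir /usr/local/lib
  # Compile: drives the [1/37]..[37/37] compile and link steps listed above.
  ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
  # Staged install, matching the DESTDIR entry that follows.
  DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user \
      meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug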
00:02:54.943 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:55.521 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:55.521 ninja: no work to do. 00:03:07.725 CC lib/ut_mock/mock.o 00:03:07.725 CC lib/ut/ut.o 00:03:07.725 CC lib/log/log.o 00:03:07.725 CC lib/log/log_flags.o 00:03:07.725 CC lib/log/log_deprecated.o 00:03:07.725 LIB libspdk_ut_mock.a 00:03:07.725 LIB libspdk_ut.a 00:03:07.725 LIB libspdk_log.a 00:03:07.725 SO libspdk_ut_mock.so.6.0 00:03:07.725 SO libspdk_ut.so.2.0 00:03:07.725 SO libspdk_log.so.7.0 00:03:07.725 SYMLINK libspdk_ut_mock.so 00:03:07.725 SYMLINK libspdk_ut.so 00:03:07.725 SYMLINK libspdk_log.so 00:03:07.725 CC lib/dma/dma.o 00:03:07.725 CXX lib/trace_parser/trace.o 00:03:07.725 CC lib/ioat/ioat.o 00:03:07.725 CC lib/util/base64.o 00:03:07.725 CC lib/util/bit_array.o 00:03:07.725 CC lib/util/cpuset.o 00:03:07.725 CC lib/util/crc16.o 00:03:07.725 CC lib/util/crc32.o 00:03:07.725 CC lib/util/crc32c.o 00:03:07.725 CC lib/util/crc32_ieee.o 00:03:07.725 CC lib/util/crc64.o 00:03:07.725 CC lib/util/dif.o 00:03:07.725 CC lib/util/fd.o 00:03:07.725 CC lib/util/file.o 00:03:07.725 CC lib/util/hexlify.o 00:03:07.725 CC lib/util/iov.o 00:03:07.725 CC lib/util/math.o 00:03:07.725 CC lib/util/pipe.o 00:03:07.725 CC lib/util/strerror_tls.o 00:03:07.725 CC lib/util/string.o 00:03:07.725 CC lib/util/uuid.o 00:03:07.725 CC lib/util/fd_group.o 00:03:07.725 CC lib/util/xor.o 00:03:07.725 CC lib/util/zipf.o 00:03:07.725 CC lib/vfio_user/host/vfio_user_pci.o 00:03:07.725 CC lib/vfio_user/host/vfio_user.o 00:03:07.725 LIB libspdk_dma.a 00:03:07.725 SO libspdk_dma.so.4.0 00:03:07.725 SYMLINK libspdk_dma.so 00:03:07.725 LIB libspdk_ioat.a 00:03:07.725 SO libspdk_ioat.so.7.0 00:03:07.725 SYMLINK libspdk_ioat.so 00:03:07.725 LIB libspdk_vfio_user.a 00:03:07.725 SO libspdk_vfio_user.so.5.0 00:03:07.984 SYMLINK libspdk_vfio_user.so 00:03:07.984 LIB libspdk_util.a 00:03:07.984 SO libspdk_util.so.9.1 00:03:08.242 SYMLINK libspdk_util.so 00:03:08.242 CC lib/vmd/vmd.o 00:03:08.242 CC lib/idxd/idxd.o 00:03:08.242 CC lib/rdma_provider/common.o 00:03:08.242 CC lib/conf/conf.o 00:03:08.242 CC lib/rdma_utils/rdma_utils.o 00:03:08.242 CC lib/env_dpdk/env.o 00:03:08.242 CC lib/idxd/idxd_user.o 00:03:08.242 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:08.242 CC lib/vmd/led.o 00:03:08.242 CC lib/json/json_parse.o 00:03:08.242 CC lib/env_dpdk/memory.o 00:03:08.242 CC lib/idxd/idxd_kernel.o 00:03:08.242 CC lib/json/json_util.o 00:03:08.242 CC lib/env_dpdk/pci.o 00:03:08.242 CC lib/json/json_write.o 00:03:08.242 CC lib/env_dpdk/init.o 00:03:08.242 CC lib/env_dpdk/threads.o 00:03:08.242 CC lib/env_dpdk/pci_ioat.o 00:03:08.242 CC lib/env_dpdk/pci_virtio.o 00:03:08.242 CC lib/env_dpdk/pci_vmd.o 00:03:08.242 CC lib/env_dpdk/pci_idxd.o 00:03:08.242 CC lib/env_dpdk/pci_event.o 00:03:08.242 CC lib/env_dpdk/sigbus_handler.o 00:03:08.242 CC lib/env_dpdk/pci_dpdk.o 00:03:08.242 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:08.242 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:08.242 LIB libspdk_trace_parser.a 00:03:08.535 SO libspdk_trace_parser.so.5.0 00:03:08.535 SYMLINK libspdk_trace_parser.so 00:03:08.535 LIB libspdk_rdma_provider.a 00:03:08.535 SO libspdk_rdma_provider.so.6.0 00:03:08.535 LIB libspdk_conf.a 00:03:08.535 SO libspdk_conf.so.6.0 00:03:08.793 SYMLINK libspdk_rdma_provider.so 00:03:08.793 LIB 
libspdk_rdma_utils.a 00:03:08.793 SYMLINK libspdk_conf.so 00:03:08.793 SO libspdk_rdma_utils.so.1.0 00:03:08.793 SYMLINK libspdk_rdma_utils.so 00:03:08.793 LIB libspdk_json.a 00:03:08.793 SO libspdk_json.so.6.0 00:03:08.793 SYMLINK libspdk_json.so 00:03:08.793 LIB libspdk_idxd.a 00:03:09.051 SO libspdk_idxd.so.12.0 00:03:09.051 CC lib/jsonrpc/jsonrpc_server.o 00:03:09.051 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:09.051 CC lib/jsonrpc/jsonrpc_client.o 00:03:09.051 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:09.051 SYMLINK libspdk_idxd.so 00:03:09.051 LIB libspdk_vmd.a 00:03:09.051 SO libspdk_vmd.so.6.0 00:03:09.051 SYMLINK libspdk_vmd.so 00:03:09.310 LIB libspdk_jsonrpc.a 00:03:09.310 SO libspdk_jsonrpc.so.6.0 00:03:09.310 SYMLINK libspdk_jsonrpc.so 00:03:09.567 CC lib/rpc/rpc.o 00:03:09.825 LIB libspdk_rpc.a 00:03:09.825 SO libspdk_rpc.so.6.0 00:03:09.825 SYMLINK libspdk_rpc.so 00:03:10.082 CC lib/notify/notify.o 00:03:10.082 CC lib/notify/notify_rpc.o 00:03:10.082 CC lib/trace/trace.o 00:03:10.082 CC lib/trace/trace_flags.o 00:03:10.082 CC lib/trace/trace_rpc.o 00:03:10.082 CC lib/keyring/keyring.o 00:03:10.082 CC lib/keyring/keyring_rpc.o 00:03:10.082 LIB libspdk_notify.a 00:03:10.082 SO libspdk_notify.so.6.0 00:03:10.340 SYMLINK libspdk_notify.so 00:03:10.340 LIB libspdk_keyring.a 00:03:10.340 LIB libspdk_trace.a 00:03:10.340 SO libspdk_keyring.so.1.0 00:03:10.340 SO libspdk_trace.so.10.0 00:03:10.340 SYMLINK libspdk_keyring.so 00:03:10.340 SYMLINK libspdk_trace.so 00:03:10.598 CC lib/sock/sock.o 00:03:10.598 CC lib/sock/sock_rpc.o 00:03:10.598 CC lib/thread/thread.o 00:03:10.598 CC lib/thread/iobuf.o 00:03:10.598 LIB libspdk_env_dpdk.a 00:03:10.598 SO libspdk_env_dpdk.so.14.1 00:03:10.598 SYMLINK libspdk_env_dpdk.so 00:03:10.856 LIB libspdk_sock.a 00:03:10.856 SO libspdk_sock.so.10.0 00:03:10.856 SYMLINK libspdk_sock.so 00:03:11.114 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:11.114 CC lib/nvme/nvme_ctrlr.o 00:03:11.114 CC lib/nvme/nvme_fabric.o 00:03:11.114 CC lib/nvme/nvme_ns_cmd.o 00:03:11.114 CC lib/nvme/nvme_ns.o 00:03:11.114 CC lib/nvme/nvme_pcie_common.o 00:03:11.114 CC lib/nvme/nvme_pcie.o 00:03:11.114 CC lib/nvme/nvme_qpair.o 00:03:11.114 CC lib/nvme/nvme.o 00:03:11.114 CC lib/nvme/nvme_quirks.o 00:03:11.114 CC lib/nvme/nvme_transport.o 00:03:11.114 CC lib/nvme/nvme_discovery.o 00:03:11.114 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:11.114 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:11.114 CC lib/nvme/nvme_tcp.o 00:03:11.114 CC lib/nvme/nvme_opal.o 00:03:11.114 CC lib/nvme/nvme_io_msg.o 00:03:11.114 CC lib/nvme/nvme_poll_group.o 00:03:11.114 CC lib/nvme/nvme_zns.o 00:03:11.114 CC lib/nvme/nvme_stubs.o 00:03:11.114 CC lib/nvme/nvme_auth.o 00:03:11.114 CC lib/nvme/nvme_cuse.o 00:03:11.114 CC lib/nvme/nvme_vfio_user.o 00:03:11.114 CC lib/nvme/nvme_rdma.o 00:03:12.045 LIB libspdk_thread.a 00:03:12.045 SO libspdk_thread.so.10.1 00:03:12.045 SYMLINK libspdk_thread.so 00:03:12.302 CC lib/init/json_config.o 00:03:12.302 CC lib/virtio/virtio.o 00:03:12.302 CC lib/blob/blobstore.o 00:03:12.302 CC lib/blob/request.o 00:03:12.302 CC lib/init/subsystem.o 00:03:12.302 CC lib/virtio/virtio_vhost_user.o 00:03:12.302 CC lib/blob/zeroes.o 00:03:12.302 CC lib/vfu_tgt/tgt_endpoint.o 00:03:12.302 CC lib/virtio/virtio_vfio_user.o 00:03:12.302 CC lib/init/subsystem_rpc.o 00:03:12.302 CC lib/accel/accel.o 00:03:12.302 CC lib/vfu_tgt/tgt_rpc.o 00:03:12.302 CC lib/init/rpc.o 00:03:12.302 CC lib/blob/blob_bs_dev.o 00:03:12.302 CC lib/accel/accel_rpc.o 00:03:12.302 CC lib/virtio/virtio_pci.o 00:03:12.302 CC 
lib/accel/accel_sw.o 00:03:12.559 LIB libspdk_init.a 00:03:12.559 SO libspdk_init.so.5.0 00:03:12.559 LIB libspdk_virtio.a 00:03:12.559 LIB libspdk_vfu_tgt.a 00:03:12.559 SYMLINK libspdk_init.so 00:03:12.817 SO libspdk_vfu_tgt.so.3.0 00:03:12.817 SO libspdk_virtio.so.7.0 00:03:12.817 SYMLINK libspdk_vfu_tgt.so 00:03:12.817 SYMLINK libspdk_virtio.so 00:03:12.817 CC lib/event/app.o 00:03:12.817 CC lib/event/reactor.o 00:03:12.817 CC lib/event/log_rpc.o 00:03:12.817 CC lib/event/app_rpc.o 00:03:12.817 CC lib/event/scheduler_static.o 00:03:13.383 LIB libspdk_event.a 00:03:13.383 SO libspdk_event.so.14.0 00:03:13.383 LIB libspdk_accel.a 00:03:13.383 SYMLINK libspdk_event.so 00:03:13.383 SO libspdk_accel.so.15.1 00:03:13.383 SYMLINK libspdk_accel.so 00:03:13.641 LIB libspdk_nvme.a 00:03:13.641 CC lib/bdev/bdev.o 00:03:13.641 CC lib/bdev/bdev_rpc.o 00:03:13.641 CC lib/bdev/bdev_zone.o 00:03:13.641 CC lib/bdev/part.o 00:03:13.641 CC lib/bdev/scsi_nvme.o 00:03:13.641 SO libspdk_nvme.so.13.1 00:03:13.898 SYMLINK libspdk_nvme.so 00:03:15.272 LIB libspdk_blob.a 00:03:15.531 SO libspdk_blob.so.11.0 00:03:15.531 SYMLINK libspdk_blob.so 00:03:15.789 CC lib/lvol/lvol.o 00:03:15.789 CC lib/blobfs/blobfs.o 00:03:15.789 CC lib/blobfs/tree.o 00:03:16.048 LIB libspdk_bdev.a 00:03:16.306 SO libspdk_bdev.so.15.1 00:03:16.306 SYMLINK libspdk_bdev.so 00:03:16.567 CC lib/scsi/dev.o 00:03:16.567 CC lib/scsi/lun.o 00:03:16.567 CC lib/nbd/nbd.o 00:03:16.567 CC lib/scsi/port.o 00:03:16.567 CC lib/ftl/ftl_core.o 00:03:16.567 CC lib/nbd/nbd_rpc.o 00:03:16.567 CC lib/scsi/scsi.o 00:03:16.567 CC lib/ftl/ftl_init.o 00:03:16.568 CC lib/nvmf/ctrlr.o 00:03:16.568 CC lib/scsi/scsi_bdev.o 00:03:16.568 CC lib/ftl/ftl_layout.o 00:03:16.568 CC lib/ublk/ublk.o 00:03:16.568 CC lib/nvmf/ctrlr_discovery.o 00:03:16.568 CC lib/ftl/ftl_debug.o 00:03:16.568 CC lib/nvmf/ctrlr_bdev.o 00:03:16.568 CC lib/scsi/scsi_pr.o 00:03:16.568 CC lib/ublk/ublk_rpc.o 00:03:16.568 CC lib/nvmf/subsystem.o 00:03:16.568 CC lib/ftl/ftl_io.o 00:03:16.568 CC lib/scsi/scsi_rpc.o 00:03:16.568 CC lib/ftl/ftl_sb.o 00:03:16.568 CC lib/scsi/task.o 00:03:16.568 CC lib/nvmf/nvmf.o 00:03:16.568 CC lib/ftl/ftl_l2p.o 00:03:16.568 CC lib/nvmf/nvmf_rpc.o 00:03:16.568 CC lib/ftl/ftl_l2p_flat.o 00:03:16.568 CC lib/nvmf/tcp.o 00:03:16.568 CC lib/nvmf/transport.o 00:03:16.568 CC lib/ftl/ftl_nv_cache.o 00:03:16.568 CC lib/nvmf/stubs.o 00:03:16.568 CC lib/ftl/ftl_band.o 00:03:16.568 CC lib/ftl/ftl_writer.o 00:03:16.568 CC lib/ftl/ftl_band_ops.o 00:03:16.568 CC lib/nvmf/mdns_server.o 00:03:16.568 CC lib/nvmf/vfio_user.o 00:03:16.568 CC lib/ftl/ftl_rq.o 00:03:16.568 CC lib/nvmf/rdma.o 00:03:16.568 CC lib/ftl/ftl_reloc.o 00:03:16.568 CC lib/ftl/ftl_l2p_cache.o 00:03:16.568 CC lib/nvmf/auth.o 00:03:16.568 CC lib/ftl/ftl_p2l.o 00:03:16.568 CC lib/ftl/mngt/ftl_mngt.o 00:03:16.568 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:16.568 LIB libspdk_blobfs.a 00:03:16.568 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:16.568 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:16.568 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:16.568 SO libspdk_blobfs.so.10.0 00:03:16.568 SYMLINK libspdk_blobfs.so 00:03:16.568 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:16.568 LIB libspdk_lvol.a 00:03:16.830 SO libspdk_lvol.so.10.0 00:03:16.830 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:16.830 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:16.830 SYMLINK libspdk_lvol.so 00:03:16.830 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:16.830 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:16.830 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:16.830 CC lib/ftl/mngt/ftl_mngt_recovery.o 
00:03:16.830 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:16.830 CC lib/ftl/utils/ftl_conf.o 00:03:16.830 CC lib/ftl/utils/ftl_md.o 00:03:16.830 CC lib/ftl/utils/ftl_mempool.o 00:03:16.830 CC lib/ftl/utils/ftl_bitmap.o 00:03:16.830 CC lib/ftl/utils/ftl_property.o 00:03:16.830 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:16.830 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:16.830 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:17.092 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:17.092 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:17.092 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:17.092 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:17.092 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:17.092 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:17.092 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:17.092 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:17.092 CC lib/ftl/base/ftl_base_dev.o 00:03:17.092 CC lib/ftl/base/ftl_base_bdev.o 00:03:17.351 CC lib/ftl/ftl_trace.o 00:03:17.351 LIB libspdk_nbd.a 00:03:17.351 SO libspdk_nbd.so.7.0 00:03:17.351 SYMLINK libspdk_nbd.so 00:03:17.351 LIB libspdk_scsi.a 00:03:17.351 SO libspdk_scsi.so.9.0 00:03:17.609 LIB libspdk_ublk.a 00:03:17.609 SYMLINK libspdk_scsi.so 00:03:17.609 SO libspdk_ublk.so.3.0 00:03:17.609 SYMLINK libspdk_ublk.so 00:03:17.610 CC lib/iscsi/conn.o 00:03:17.610 CC lib/vhost/vhost.o 00:03:17.610 CC lib/iscsi/init_grp.o 00:03:17.610 CC lib/vhost/vhost_rpc.o 00:03:17.610 CC lib/vhost/vhost_scsi.o 00:03:17.610 CC lib/iscsi/iscsi.o 00:03:17.610 CC lib/iscsi/md5.o 00:03:17.610 CC lib/vhost/vhost_blk.o 00:03:17.610 CC lib/vhost/rte_vhost_user.o 00:03:17.610 CC lib/iscsi/param.o 00:03:17.610 CC lib/iscsi/portal_grp.o 00:03:17.610 CC lib/iscsi/tgt_node.o 00:03:17.610 CC lib/iscsi/iscsi_subsystem.o 00:03:17.610 CC lib/iscsi/iscsi_rpc.o 00:03:17.610 CC lib/iscsi/task.o 00:03:17.868 LIB libspdk_ftl.a 00:03:18.126 SO libspdk_ftl.so.9.0 00:03:18.384 SYMLINK libspdk_ftl.so 00:03:18.950 LIB libspdk_vhost.a 00:03:18.950 SO libspdk_vhost.so.8.0 00:03:18.950 LIB libspdk_nvmf.a 00:03:19.208 SYMLINK libspdk_vhost.so 00:03:19.208 SO libspdk_nvmf.so.18.1 00:03:19.208 LIB libspdk_iscsi.a 00:03:19.208 SO libspdk_iscsi.so.8.0 00:03:19.466 SYMLINK libspdk_nvmf.so 00:03:19.466 SYMLINK libspdk_iscsi.so 00:03:19.725 CC module/env_dpdk/env_dpdk_rpc.o 00:03:19.725 CC module/vfu_device/vfu_virtio.o 00:03:19.725 CC module/vfu_device/vfu_virtio_blk.o 00:03:19.725 CC module/vfu_device/vfu_virtio_scsi.o 00:03:19.725 CC module/vfu_device/vfu_virtio_rpc.o 00:03:19.725 CC module/accel/ioat/accel_ioat.o 00:03:19.725 CC module/accel/ioat/accel_ioat_rpc.o 00:03:19.725 CC module/blob/bdev/blob_bdev.o 00:03:19.725 CC module/scheduler/gscheduler/gscheduler.o 00:03:19.725 CC module/accel/iaa/accel_iaa.o 00:03:19.725 CC module/accel/dsa/accel_dsa.o 00:03:19.725 CC module/accel/iaa/accel_iaa_rpc.o 00:03:19.725 CC module/sock/posix/posix.o 00:03:19.725 CC module/accel/error/accel_error.o 00:03:19.725 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:19.726 CC module/accel/dsa/accel_dsa_rpc.o 00:03:19.726 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:19.726 CC module/accel/error/accel_error_rpc.o 00:03:19.726 CC module/keyring/file/keyring.o 00:03:19.726 CC module/keyring/linux/keyring.o 00:03:19.726 CC module/keyring/file/keyring_rpc.o 00:03:19.726 CC module/keyring/linux/keyring_rpc.o 00:03:19.726 LIB libspdk_env_dpdk_rpc.a 00:03:19.726 SO libspdk_env_dpdk_rpc.so.6.0 00:03:19.984 SYMLINK libspdk_env_dpdk_rpc.so 00:03:19.984 LIB libspdk_keyring_linux.a 00:03:19.984 LIB libspdk_scheduler_gscheduler.a 00:03:19.984 LIB 
libspdk_keyring_file.a 00:03:19.984 LIB libspdk_scheduler_dpdk_governor.a 00:03:19.984 SO libspdk_scheduler_gscheduler.so.4.0 00:03:19.984 SO libspdk_keyring_linux.so.1.0 00:03:19.984 SO libspdk_keyring_file.so.1.0 00:03:19.984 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:19.984 LIB libspdk_accel_error.a 00:03:19.984 LIB libspdk_accel_ioat.a 00:03:19.984 LIB libspdk_scheduler_dynamic.a 00:03:19.984 LIB libspdk_accel_iaa.a 00:03:19.984 SO libspdk_accel_ioat.so.6.0 00:03:19.984 SO libspdk_accel_error.so.2.0 00:03:19.984 SYMLINK libspdk_scheduler_gscheduler.so 00:03:19.984 SO libspdk_scheduler_dynamic.so.4.0 00:03:19.984 SYMLINK libspdk_keyring_linux.so 00:03:19.984 SYMLINK libspdk_keyring_file.so 00:03:19.984 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:19.984 SO libspdk_accel_iaa.so.3.0 00:03:19.984 LIB libspdk_accel_dsa.a 00:03:19.984 SYMLINK libspdk_accel_ioat.so 00:03:19.984 SYMLINK libspdk_accel_error.so 00:03:19.984 SYMLINK libspdk_scheduler_dynamic.so 00:03:19.984 LIB libspdk_blob_bdev.a 00:03:19.984 SYMLINK libspdk_accel_iaa.so 00:03:19.984 SO libspdk_accel_dsa.so.5.0 00:03:19.984 SO libspdk_blob_bdev.so.11.0 00:03:20.242 SYMLINK libspdk_blob_bdev.so 00:03:20.242 SYMLINK libspdk_accel_dsa.so 00:03:20.242 LIB libspdk_vfu_device.a 00:03:20.242 SO libspdk_vfu_device.so.3.0 00:03:20.511 CC module/bdev/gpt/gpt.o 00:03:20.511 CC module/bdev/gpt/vbdev_gpt.o 00:03:20.511 CC module/bdev/malloc/bdev_malloc.o 00:03:20.511 CC module/bdev/delay/vbdev_delay.o 00:03:20.511 CC module/bdev/lvol/vbdev_lvol.o 00:03:20.511 CC module/blobfs/bdev/blobfs_bdev.o 00:03:20.511 CC module/bdev/error/vbdev_error.o 00:03:20.511 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:20.511 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:20.511 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:20.511 CC module/bdev/error/vbdev_error_rpc.o 00:03:20.511 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:20.511 CC module/bdev/null/bdev_null.o 00:03:20.511 CC module/bdev/null/bdev_null_rpc.o 00:03:20.511 CC module/bdev/ftl/bdev_ftl.o 00:03:20.511 CC module/bdev/raid/bdev_raid.o 00:03:20.511 CC module/bdev/raid/bdev_raid_rpc.o 00:03:20.511 CC module/bdev/raid/bdev_raid_sb.o 00:03:20.511 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:20.511 CC module/bdev/iscsi/bdev_iscsi.o 00:03:20.511 CC module/bdev/passthru/vbdev_passthru.o 00:03:20.511 CC module/bdev/raid/raid0.o 00:03:20.511 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:20.511 CC module/bdev/aio/bdev_aio.o 00:03:20.511 CC module/bdev/aio/bdev_aio_rpc.o 00:03:20.511 CC module/bdev/raid/raid1.o 00:03:20.511 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:20.512 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:20.512 CC module/bdev/split/vbdev_split.o 00:03:20.512 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:20.512 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:20.512 CC module/bdev/raid/concat.o 00:03:20.512 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:20.512 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:20.512 CC module/bdev/split/vbdev_split_rpc.o 00:03:20.512 CC module/bdev/nvme/bdev_nvme.o 00:03:20.512 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:20.512 CC module/bdev/nvme/nvme_rpc.o 00:03:20.512 CC module/bdev/nvme/bdev_mdns_client.o 00:03:20.512 CC module/bdev/nvme/vbdev_opal.o 00:03:20.512 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:20.512 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:20.512 SYMLINK libspdk_vfu_device.so 00:03:20.783 LIB libspdk_sock_posix.a 00:03:20.783 SO libspdk_sock_posix.so.6.0 00:03:20.783 LIB libspdk_blobfs_bdev.a 00:03:20.783 LIB libspdk_bdev_null.a 
00:03:20.783 SO libspdk_blobfs_bdev.so.6.0 00:03:20.783 SO libspdk_bdev_null.so.6.0 00:03:20.783 SYMLINK libspdk_sock_posix.so 00:03:20.783 LIB libspdk_bdev_ftl.a 00:03:20.783 SYMLINK libspdk_blobfs_bdev.so 00:03:20.783 SYMLINK libspdk_bdev_null.so 00:03:20.783 LIB libspdk_bdev_split.a 00:03:20.783 SO libspdk_bdev_ftl.so.6.0 00:03:20.783 SO libspdk_bdev_split.so.6.0 00:03:20.783 LIB libspdk_bdev_error.a 00:03:20.783 LIB libspdk_bdev_gpt.a 00:03:21.041 SYMLINK libspdk_bdev_ftl.so 00:03:21.041 SO libspdk_bdev_error.so.6.0 00:03:21.041 LIB libspdk_bdev_zone_block.a 00:03:21.041 SYMLINK libspdk_bdev_split.so 00:03:21.041 SO libspdk_bdev_gpt.so.6.0 00:03:21.042 SO libspdk_bdev_zone_block.so.6.0 00:03:21.042 LIB libspdk_bdev_passthru.a 00:03:21.042 LIB libspdk_bdev_aio.a 00:03:21.042 SYMLINK libspdk_bdev_error.so 00:03:21.042 LIB libspdk_bdev_malloc.a 00:03:21.042 SO libspdk_bdev_passthru.so.6.0 00:03:21.042 LIB libspdk_bdev_delay.a 00:03:21.042 SO libspdk_bdev_aio.so.6.0 00:03:21.042 SYMLINK libspdk_bdev_gpt.so 00:03:21.042 LIB libspdk_bdev_iscsi.a 00:03:21.042 SO libspdk_bdev_malloc.so.6.0 00:03:21.042 SYMLINK libspdk_bdev_zone_block.so 00:03:21.042 SO libspdk_bdev_delay.so.6.0 00:03:21.042 SO libspdk_bdev_iscsi.so.6.0 00:03:21.042 SYMLINK libspdk_bdev_passthru.so 00:03:21.042 SYMLINK libspdk_bdev_aio.so 00:03:21.042 SYMLINK libspdk_bdev_malloc.so 00:03:21.042 SYMLINK libspdk_bdev_delay.so 00:03:21.042 SYMLINK libspdk_bdev_iscsi.so 00:03:21.042 LIB libspdk_bdev_lvol.a 00:03:21.042 LIB libspdk_bdev_virtio.a 00:03:21.042 SO libspdk_bdev_lvol.so.6.0 00:03:21.300 SO libspdk_bdev_virtio.so.6.0 00:03:21.300 SYMLINK libspdk_bdev_lvol.so 00:03:21.300 SYMLINK libspdk_bdev_virtio.so 00:03:21.557 LIB libspdk_bdev_raid.a 00:03:21.557 SO libspdk_bdev_raid.so.6.0 00:03:21.557 SYMLINK libspdk_bdev_raid.so 00:03:22.932 LIB libspdk_bdev_nvme.a 00:03:22.932 SO libspdk_bdev_nvme.so.7.0 00:03:22.932 SYMLINK libspdk_bdev_nvme.so 00:03:23.190 CC module/event/subsystems/sock/sock.o 00:03:23.190 CC module/event/subsystems/iobuf/iobuf.o 00:03:23.190 CC module/event/subsystems/vmd/vmd.o 00:03:23.190 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:23.190 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:23.190 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:23.190 CC module/event/subsystems/scheduler/scheduler.o 00:03:23.190 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:23.190 CC module/event/subsystems/keyring/keyring.o 00:03:23.448 LIB libspdk_event_keyring.a 00:03:23.448 LIB libspdk_event_vhost_blk.a 00:03:23.448 LIB libspdk_event_vfu_tgt.a 00:03:23.448 LIB libspdk_event_scheduler.a 00:03:23.448 LIB libspdk_event_vmd.a 00:03:23.448 LIB libspdk_event_sock.a 00:03:23.448 SO libspdk_event_keyring.so.1.0 00:03:23.448 LIB libspdk_event_iobuf.a 00:03:23.448 SO libspdk_event_vhost_blk.so.3.0 00:03:23.448 SO libspdk_event_scheduler.so.4.0 00:03:23.448 SO libspdk_event_vfu_tgt.so.3.0 00:03:23.448 SO libspdk_event_vmd.so.6.0 00:03:23.448 SO libspdk_event_sock.so.5.0 00:03:23.448 SO libspdk_event_iobuf.so.3.0 00:03:23.448 SYMLINK libspdk_event_keyring.so 00:03:23.448 SYMLINK libspdk_event_vhost_blk.so 00:03:23.448 SYMLINK libspdk_event_scheduler.so 00:03:23.448 SYMLINK libspdk_event_vfu_tgt.so 00:03:23.448 SYMLINK libspdk_event_sock.so 00:03:23.448 SYMLINK libspdk_event_vmd.so 00:03:23.448 SYMLINK libspdk_event_iobuf.so 00:03:23.706 CC module/event/subsystems/accel/accel.o 00:03:23.964 LIB libspdk_event_accel.a 00:03:23.964 SO libspdk_event_accel.so.6.0 00:03:23.964 SYMLINK libspdk_event_accel.so 00:03:24.222 CC 
module/event/subsystems/bdev/bdev.o 00:03:24.222 LIB libspdk_event_bdev.a 00:03:24.222 SO libspdk_event_bdev.so.6.0 00:03:24.222 SYMLINK libspdk_event_bdev.so 00:03:24.479 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:24.479 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:24.479 CC module/event/subsystems/scsi/scsi.o 00:03:24.479 CC module/event/subsystems/ublk/ublk.o 00:03:24.479 CC module/event/subsystems/nbd/nbd.o 00:03:24.737 LIB libspdk_event_ublk.a 00:03:24.737 LIB libspdk_event_nbd.a 00:03:24.737 LIB libspdk_event_scsi.a 00:03:24.737 SO libspdk_event_nbd.so.6.0 00:03:24.737 SO libspdk_event_ublk.so.3.0 00:03:24.737 SO libspdk_event_scsi.so.6.0 00:03:24.737 SYMLINK libspdk_event_nbd.so 00:03:24.737 SYMLINK libspdk_event_ublk.so 00:03:24.737 SYMLINK libspdk_event_scsi.so 00:03:24.737 LIB libspdk_event_nvmf.a 00:03:24.737 SO libspdk_event_nvmf.so.6.0 00:03:24.737 SYMLINK libspdk_event_nvmf.so 00:03:24.994 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:24.994 CC module/event/subsystems/iscsi/iscsi.o 00:03:24.994 LIB libspdk_event_vhost_scsi.a 00:03:24.994 LIB libspdk_event_iscsi.a 00:03:24.994 SO libspdk_event_vhost_scsi.so.3.0 00:03:24.994 SO libspdk_event_iscsi.so.6.0 00:03:25.252 SYMLINK libspdk_event_vhost_scsi.so 00:03:25.252 SYMLINK libspdk_event_iscsi.so 00:03:25.252 SO libspdk.so.6.0 00:03:25.252 SYMLINK libspdk.so 00:03:25.517 TEST_HEADER include/spdk/accel.h 00:03:25.517 TEST_HEADER include/spdk/assert.h 00:03:25.517 TEST_HEADER include/spdk/accel_module.h 00:03:25.517 TEST_HEADER include/spdk/barrier.h 00:03:25.517 TEST_HEADER include/spdk/base64.h 00:03:25.517 TEST_HEADER include/spdk/bdev.h 00:03:25.517 TEST_HEADER include/spdk/bdev_module.h 00:03:25.517 TEST_HEADER include/spdk/bdev_zone.h 00:03:25.517 CC app/trace_record/trace_record.o 00:03:25.517 TEST_HEADER include/spdk/bit_array.h 00:03:25.517 CXX app/trace/trace.o 00:03:25.517 TEST_HEADER include/spdk/bit_pool.h 00:03:25.517 TEST_HEADER include/spdk/blob_bdev.h 00:03:25.517 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:25.517 CC test/rpc_client/rpc_client_test.o 00:03:25.517 TEST_HEADER include/spdk/blobfs.h 00:03:25.517 TEST_HEADER include/spdk/blob.h 00:03:25.517 CC app/spdk_nvme_perf/perf.o 00:03:25.517 TEST_HEADER include/spdk/conf.h 00:03:25.517 TEST_HEADER include/spdk/config.h 00:03:25.517 TEST_HEADER include/spdk/cpuset.h 00:03:25.517 CC app/spdk_lspci/spdk_lspci.o 00:03:25.517 CC app/spdk_nvme_identify/identify.o 00:03:25.517 TEST_HEADER include/spdk/crc16.h 00:03:25.517 TEST_HEADER include/spdk/crc64.h 00:03:25.517 TEST_HEADER include/spdk/crc32.h 00:03:25.517 CC app/spdk_nvme_discover/discovery_aer.o 00:03:25.517 TEST_HEADER include/spdk/dif.h 00:03:25.517 TEST_HEADER include/spdk/dma.h 00:03:25.517 CC app/spdk_top/spdk_top.o 00:03:25.517 TEST_HEADER include/spdk/endian.h 00:03:25.517 TEST_HEADER include/spdk/env_dpdk.h 00:03:25.517 TEST_HEADER include/spdk/env.h 00:03:25.517 TEST_HEADER include/spdk/event.h 00:03:25.517 TEST_HEADER include/spdk/fd_group.h 00:03:25.517 TEST_HEADER include/spdk/fd.h 00:03:25.517 TEST_HEADER include/spdk/file.h 00:03:25.518 TEST_HEADER include/spdk/ftl.h 00:03:25.518 TEST_HEADER include/spdk/hexlify.h 00:03:25.518 TEST_HEADER include/spdk/gpt_spec.h 00:03:25.518 TEST_HEADER include/spdk/histogram_data.h 00:03:25.518 TEST_HEADER include/spdk/idxd.h 00:03:25.518 TEST_HEADER include/spdk/idxd_spec.h 00:03:25.518 TEST_HEADER include/spdk/init.h 00:03:25.518 TEST_HEADER include/spdk/ioat.h 00:03:25.518 TEST_HEADER include/spdk/ioat_spec.h 00:03:25.518 TEST_HEADER 
include/spdk/iscsi_spec.h 00:03:25.518 TEST_HEADER include/spdk/json.h 00:03:25.518 TEST_HEADER include/spdk/keyring.h 00:03:25.518 TEST_HEADER include/spdk/jsonrpc.h 00:03:25.518 TEST_HEADER include/spdk/keyring_module.h 00:03:25.518 TEST_HEADER include/spdk/log.h 00:03:25.518 TEST_HEADER include/spdk/likely.h 00:03:25.518 TEST_HEADER include/spdk/lvol.h 00:03:25.518 TEST_HEADER include/spdk/memory.h 00:03:25.518 TEST_HEADER include/spdk/mmio.h 00:03:25.518 TEST_HEADER include/spdk/nbd.h 00:03:25.518 TEST_HEADER include/spdk/notify.h 00:03:25.518 TEST_HEADER include/spdk/nvme.h 00:03:25.518 TEST_HEADER include/spdk/nvme_intel.h 00:03:25.518 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:25.518 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:25.518 TEST_HEADER include/spdk/nvme_zns.h 00:03:25.518 TEST_HEADER include/spdk/nvme_spec.h 00:03:25.518 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:25.518 TEST_HEADER include/spdk/nvmf.h 00:03:25.518 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:25.518 TEST_HEADER include/spdk/nvmf_spec.h 00:03:25.518 TEST_HEADER include/spdk/opal.h 00:03:25.518 TEST_HEADER include/spdk/nvmf_transport.h 00:03:25.518 TEST_HEADER include/spdk/opal_spec.h 00:03:25.518 TEST_HEADER include/spdk/pci_ids.h 00:03:25.518 TEST_HEADER include/spdk/pipe.h 00:03:25.518 TEST_HEADER include/spdk/queue.h 00:03:25.518 TEST_HEADER include/spdk/reduce.h 00:03:25.518 TEST_HEADER include/spdk/rpc.h 00:03:25.518 TEST_HEADER include/spdk/scheduler.h 00:03:25.518 TEST_HEADER include/spdk/scsi.h 00:03:25.518 TEST_HEADER include/spdk/scsi_spec.h 00:03:25.518 TEST_HEADER include/spdk/sock.h 00:03:25.518 TEST_HEADER include/spdk/stdinc.h 00:03:25.518 TEST_HEADER include/spdk/string.h 00:03:25.518 TEST_HEADER include/spdk/thread.h 00:03:25.518 TEST_HEADER include/spdk/trace.h 00:03:25.518 TEST_HEADER include/spdk/trace_parser.h 00:03:25.518 TEST_HEADER include/spdk/ublk.h 00:03:25.518 TEST_HEADER include/spdk/tree.h 00:03:25.518 TEST_HEADER include/spdk/uuid.h 00:03:25.518 TEST_HEADER include/spdk/util.h 00:03:25.518 TEST_HEADER include/spdk/version.h 00:03:25.518 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:25.518 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:25.518 TEST_HEADER include/spdk/vhost.h 00:03:25.518 TEST_HEADER include/spdk/xor.h 00:03:25.518 TEST_HEADER include/spdk/vmd.h 00:03:25.518 TEST_HEADER include/spdk/zipf.h 00:03:25.518 CXX test/cpp_headers/accel.o 00:03:25.518 CXX test/cpp_headers/accel_module.o 00:03:25.518 CXX test/cpp_headers/assert.o 00:03:25.518 CXX test/cpp_headers/barrier.o 00:03:25.518 CXX test/cpp_headers/base64.o 00:03:25.518 CXX test/cpp_headers/bdev.o 00:03:25.518 CXX test/cpp_headers/bdev_module.o 00:03:25.518 CXX test/cpp_headers/bdev_zone.o 00:03:25.518 CXX test/cpp_headers/bit_array.o 00:03:25.518 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:25.518 CXX test/cpp_headers/bit_pool.o 00:03:25.518 CXX test/cpp_headers/blob_bdev.o 00:03:25.518 CXX test/cpp_headers/blobfs_bdev.o 00:03:25.518 CXX test/cpp_headers/blobfs.o 00:03:25.518 CXX test/cpp_headers/blob.o 00:03:25.518 CXX test/cpp_headers/conf.o 00:03:25.518 CXX test/cpp_headers/config.o 00:03:25.518 CXX test/cpp_headers/cpuset.o 00:03:25.518 CXX test/cpp_headers/crc16.o 00:03:25.518 CC app/spdk_dd/spdk_dd.o 00:03:25.518 CC app/nvmf_tgt/nvmf_main.o 00:03:25.518 CC app/iscsi_tgt/iscsi_tgt.o 00:03:25.518 CXX test/cpp_headers/crc32.o 00:03:25.518 CC examples/ioat/perf/perf.o 00:03:25.518 CC examples/ioat/verify/verify.o 00:03:25.518 CC test/env/vtophys/vtophys.o 00:03:25.518 CC 
test/app/histogram_perf/histogram_perf.o 00:03:25.518 CC test/app/jsoncat/jsoncat.o 00:03:25.518 CC test/env/pci/pci_ut.o 00:03:25.518 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:25.518 CC app/spdk_tgt/spdk_tgt.o 00:03:25.518 CC test/env/memory/memory_ut.o 00:03:25.518 CC test/thread/poller_perf/poller_perf.o 00:03:25.518 CC examples/util/zipf/zipf.o 00:03:25.518 CC app/fio/nvme/fio_plugin.o 00:03:25.518 CC test/app/stub/stub.o 00:03:25.777 CC test/dma/test_dma/test_dma.o 00:03:25.777 CC test/app/bdev_svc/bdev_svc.o 00:03:25.777 CC app/fio/bdev/fio_plugin.o 00:03:25.777 LINK spdk_lspci 00:03:25.777 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:25.777 CC test/env/mem_callbacks/mem_callbacks.o 00:03:25.777 LINK rpc_client_test 00:03:26.042 LINK jsoncat 00:03:26.042 LINK vtophys 00:03:26.042 CXX test/cpp_headers/crc64.o 00:03:26.042 LINK histogram_perf 00:03:26.042 LINK spdk_nvme_discover 00:03:26.042 LINK poller_perf 00:03:26.042 CXX test/cpp_headers/dif.o 00:03:26.042 LINK zipf 00:03:26.042 LINK env_dpdk_post_init 00:03:26.042 LINK interrupt_tgt 00:03:26.042 LINK spdk_trace_record 00:03:26.042 CXX test/cpp_headers/dma.o 00:03:26.042 CXX test/cpp_headers/endian.o 00:03:26.042 CXX test/cpp_headers/env_dpdk.o 00:03:26.042 CXX test/cpp_headers/env.o 00:03:26.042 CXX test/cpp_headers/event.o 00:03:26.042 CXX test/cpp_headers/fd_group.o 00:03:26.042 CXX test/cpp_headers/fd.o 00:03:26.042 CXX test/cpp_headers/file.o 00:03:26.042 CXX test/cpp_headers/ftl.o 00:03:26.042 CXX test/cpp_headers/gpt_spec.o 00:03:26.042 LINK nvmf_tgt 00:03:26.042 LINK ioat_perf 00:03:26.042 CXX test/cpp_headers/hexlify.o 00:03:26.042 CXX test/cpp_headers/histogram_data.o 00:03:26.042 LINK stub 00:03:26.042 LINK verify 00:03:26.042 CXX test/cpp_headers/idxd.o 00:03:26.042 LINK iscsi_tgt 00:03:26.042 LINK spdk_tgt 00:03:26.042 LINK bdev_svc 00:03:26.042 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:26.042 CXX test/cpp_headers/idxd_spec.o 00:03:26.042 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:26.042 CXX test/cpp_headers/init.o 00:03:26.303 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:26.303 CXX test/cpp_headers/ioat.o 00:03:26.303 LINK mem_callbacks 00:03:26.303 CXX test/cpp_headers/ioat_spec.o 00:03:26.303 CXX test/cpp_headers/iscsi_spec.o 00:03:26.303 CXX test/cpp_headers/json.o 00:03:26.303 CXX test/cpp_headers/jsonrpc.o 00:03:26.303 LINK spdk_dd 00:03:26.303 CXX test/cpp_headers/keyring.o 00:03:26.303 CXX test/cpp_headers/keyring_module.o 00:03:26.303 LINK spdk_trace 00:03:26.303 CXX test/cpp_headers/likely.o 00:03:26.303 LINK pci_ut 00:03:26.303 CXX test/cpp_headers/log.o 00:03:26.303 CXX test/cpp_headers/lvol.o 00:03:26.303 CXX test/cpp_headers/memory.o 00:03:26.303 CXX test/cpp_headers/mmio.o 00:03:26.303 CXX test/cpp_headers/nbd.o 00:03:26.303 CXX test/cpp_headers/notify.o 00:03:26.303 CXX test/cpp_headers/nvme.o 00:03:26.303 CXX test/cpp_headers/nvme_intel.o 00:03:26.303 CXX test/cpp_headers/nvme_ocssd.o 00:03:26.303 LINK test_dma 00:03:26.571 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:26.571 CXX test/cpp_headers/nvme_spec.o 00:03:26.571 CXX test/cpp_headers/nvme_zns.o 00:03:26.571 CXX test/cpp_headers/nvmf_cmd.o 00:03:26.571 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:26.571 CXX test/cpp_headers/nvmf.o 00:03:26.571 CXX test/cpp_headers/nvmf_spec.o 00:03:26.571 CXX test/cpp_headers/nvmf_transport.o 00:03:26.571 CXX test/cpp_headers/opal.o 00:03:26.571 CXX test/cpp_headers/opal_spec.o 00:03:26.571 CC test/event/event_perf/event_perf.o 00:03:26.571 CC test/event/reactor/reactor.o 00:03:26.571 
CXX test/cpp_headers/pci_ids.o 00:03:26.571 CC test/event/reactor_perf/reactor_perf.o 00:03:26.571 CXX test/cpp_headers/pipe.o 00:03:26.571 LINK nvme_fuzz 00:03:26.571 CXX test/cpp_headers/queue.o 00:03:26.571 CXX test/cpp_headers/reduce.o 00:03:26.571 CC test/event/app_repeat/app_repeat.o 00:03:26.571 CC examples/sock/hello_world/hello_sock.o 00:03:26.571 CXX test/cpp_headers/rpc.o 00:03:26.571 CXX test/cpp_headers/scheduler.o 00:03:26.571 CXX test/cpp_headers/scsi.o 00:03:26.571 CC test/event/scheduler/scheduler.o 00:03:26.571 CC examples/vmd/lsvmd/lsvmd.o 00:03:26.571 LINK spdk_bdev 00:03:26.571 CC examples/vmd/led/led.o 00:03:26.571 CXX test/cpp_headers/scsi_spec.o 00:03:26.833 CC examples/idxd/perf/perf.o 00:03:26.833 LINK spdk_nvme 00:03:26.833 CC examples/thread/thread/thread_ex.o 00:03:26.833 CXX test/cpp_headers/sock.o 00:03:26.833 CXX test/cpp_headers/stdinc.o 00:03:26.833 CXX test/cpp_headers/string.o 00:03:26.833 CXX test/cpp_headers/thread.o 00:03:26.833 CXX test/cpp_headers/trace.o 00:03:26.833 CXX test/cpp_headers/trace_parser.o 00:03:26.833 CXX test/cpp_headers/tree.o 00:03:26.833 CXX test/cpp_headers/ublk.o 00:03:26.833 CXX test/cpp_headers/util.o 00:03:26.833 CXX test/cpp_headers/uuid.o 00:03:26.833 CXX test/cpp_headers/version.o 00:03:26.833 CXX test/cpp_headers/vfio_user_pci.o 00:03:26.833 LINK reactor 00:03:26.833 LINK event_perf 00:03:26.833 LINK reactor_perf 00:03:26.833 CXX test/cpp_headers/vfio_user_spec.o 00:03:26.833 CXX test/cpp_headers/vhost.o 00:03:26.833 CXX test/cpp_headers/vmd.o 00:03:26.833 CXX test/cpp_headers/xor.o 00:03:26.833 CXX test/cpp_headers/zipf.o 00:03:26.833 CC app/vhost/vhost.o 00:03:27.095 LINK app_repeat 00:03:27.095 LINK lsvmd 00:03:27.095 LINK vhost_fuzz 00:03:27.095 LINK memory_ut 00:03:27.095 LINK spdk_nvme_perf 00:03:27.095 LINK led 00:03:27.095 LINK spdk_nvme_identify 00:03:27.095 LINK scheduler 00:03:27.095 LINK hello_sock 00:03:27.095 LINK spdk_top 00:03:27.095 LINK thread 00:03:27.095 CC test/nvme/aer/aer.o 00:03:27.095 CC test/nvme/err_injection/err_injection.o 00:03:27.095 CC test/nvme/e2edp/nvme_dp.o 00:03:27.095 CC test/nvme/reset/reset.o 00:03:27.095 CC test/nvme/compliance/nvme_compliance.o 00:03:27.095 CC test/nvme/simple_copy/simple_copy.o 00:03:27.095 CC test/nvme/sgl/sgl.o 00:03:27.095 CC test/nvme/boot_partition/boot_partition.o 00:03:27.095 CC test/nvme/startup/startup.o 00:03:27.095 CC test/nvme/connect_stress/connect_stress.o 00:03:27.095 CC test/nvme/reserve/reserve.o 00:03:27.095 CC test/nvme/overhead/overhead.o 00:03:27.354 CC test/nvme/fused_ordering/fused_ordering.o 00:03:27.354 CC test/accel/dif/dif.o 00:03:27.354 CC test/blobfs/mkfs/mkfs.o 00:03:27.354 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:27.354 CC test/nvme/fdp/fdp.o 00:03:27.354 CC test/nvme/cuse/cuse.o 00:03:27.354 CC test/lvol/esnap/esnap.o 00:03:27.354 LINK idxd_perf 00:03:27.354 LINK vhost 00:03:27.612 LINK err_injection 00:03:27.612 LINK simple_copy 00:03:27.612 LINK reserve 00:03:27.612 LINK connect_stress 00:03:27.612 LINK startup 00:03:27.612 LINK doorbell_aers 00:03:27.612 CC examples/nvme/hotplug/hotplug.o 00:03:27.612 CC examples/nvme/reconnect/reconnect.o 00:03:27.612 LINK boot_partition 00:03:27.612 LINK sgl 00:03:27.612 LINK mkfs 00:03:27.612 LINK fused_ordering 00:03:27.612 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:27.612 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:27.612 CC examples/nvme/arbitration/arbitration.o 00:03:27.612 CC examples/nvme/hello_world/hello_world.o 00:03:27.612 CC 
examples/nvme/cmb_copy/cmb_copy.o 00:03:27.612 CC examples/nvme/abort/abort.o 00:03:27.612 LINK nvme_compliance 00:03:27.612 LINK aer 00:03:27.612 LINK reset 00:03:27.612 LINK nvme_dp 00:03:27.612 LINK overhead 00:03:27.870 LINK fdp 00:03:27.870 CC examples/accel/perf/accel_perf.o 00:03:27.870 LINK dif 00:03:27.870 LINK cmb_copy 00:03:27.870 CC examples/blob/cli/blobcli.o 00:03:27.870 CC examples/blob/hello_world/hello_blob.o 00:03:27.870 LINK pmr_persistence 00:03:27.870 LINK hotplug 00:03:27.870 LINK hello_world 00:03:28.127 LINK abort 00:03:28.127 LINK arbitration 00:03:28.127 LINK reconnect 00:03:28.127 LINK hello_blob 00:03:28.127 LINK nvme_manage 00:03:28.127 CC test/bdev/bdevio/bdevio.o 00:03:28.384 LINK accel_perf 00:03:28.384 LINK blobcli 00:03:28.642 LINK iscsi_fuzz 00:03:28.642 CC examples/bdev/hello_world/hello_bdev.o 00:03:28.642 CC examples/bdev/bdevperf/bdevperf.o 00:03:28.642 LINK bdevio 00:03:28.899 LINK cuse 00:03:28.899 LINK hello_bdev 00:03:29.465 LINK bdevperf 00:03:29.724 CC examples/nvmf/nvmf/nvmf.o 00:03:29.982 LINK nvmf 00:03:32.508 LINK esnap 00:03:32.508 00:03:32.508 real 0m41.168s 00:03:32.508 user 7m23.189s 00:03:32.508 sys 1m48.715s 00:03:32.508 07:50:24 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:03:32.508 07:50:24 make -- common/autotest_common.sh@10 -- $ set +x 00:03:32.508 ************************************ 00:03:32.508 END TEST make 00:03:32.508 ************************************ 00:03:32.767 07:50:24 -- common/autotest_common.sh@1142 -- $ return 0 00:03:32.767 07:50:24 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:32.767 07:50:24 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:32.767 07:50:24 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:32.767 07:50:24 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:32.767 07:50:24 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:32.767 07:50:24 -- pm/common@44 -- $ pid=1717720 00:03:32.767 07:50:24 -- pm/common@50 -- $ kill -TERM 1717720 00:03:32.767 07:50:24 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:32.767 07:50:24 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:32.767 07:50:24 -- pm/common@44 -- $ pid=1717722 00:03:32.767 07:50:24 -- pm/common@50 -- $ kill -TERM 1717722 00:03:32.767 07:50:24 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:32.767 07:50:24 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:32.767 07:50:24 -- pm/common@44 -- $ pid=1717724 00:03:32.767 07:50:24 -- pm/common@50 -- $ kill -TERM 1717724 00:03:32.767 07:50:24 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:32.767 07:50:24 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:32.767 07:50:24 -- pm/common@44 -- $ pid=1717753 00:03:32.767 07:50:24 -- pm/common@50 -- $ sudo -E kill -TERM 1717753 00:03:32.767 07:50:24 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:32.767 07:50:24 -- nvmf/common.sh@7 -- # uname -s 00:03:32.767 07:50:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:32.767 07:50:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:32.767 07:50:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:32.767 07:50:24 -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:03:32.767 07:50:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:32.767 07:50:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:32.767 07:50:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:32.767 07:50:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:32.767 07:50:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:32.767 07:50:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:32.767 07:50:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:03:32.767 07:50:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:03:32.767 07:50:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:32.767 07:50:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:32.767 07:50:24 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:32.767 07:50:24 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:32.767 07:50:24 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:32.767 07:50:24 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:32.767 07:50:24 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:32.767 07:50:24 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:32.767 07:50:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:32.767 07:50:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:32.767 07:50:24 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:32.767 07:50:24 -- paths/export.sh@5 -- # export PATH 00:03:32.767 07:50:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:32.767 07:50:24 -- nvmf/common.sh@47 -- # : 0 00:03:32.767 07:50:24 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:32.767 07:50:24 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:32.767 07:50:24 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:32.767 07:50:24 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:32.767 07:50:24 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:32.767 07:50:24 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:32.767 07:50:24 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:32.767 07:50:24 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:32.767 07:50:24 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:32.767 07:50:24 -- spdk/autotest.sh@32 -- # uname -s 00:03:32.767 07:50:24 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 
00:03:32.767 07:50:24 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:32.767 07:50:24 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:32.767 07:50:24 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:32.767 07:50:24 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:32.767 07:50:24 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:32.767 07:50:24 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:32.767 07:50:24 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:32.767 07:50:24 -- spdk/autotest.sh@48 -- # udevadm_pid=1793413 00:03:32.767 07:50:24 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:32.767 07:50:24 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:32.767 07:50:24 -- pm/common@17 -- # local monitor 00:03:32.767 07:50:24 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:32.767 07:50:24 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:32.767 07:50:24 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:32.767 07:50:24 -- pm/common@21 -- # date +%s 00:03:32.767 07:50:24 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:32.767 07:50:24 -- pm/common@21 -- # date +%s 00:03:32.767 07:50:24 -- pm/common@25 -- # sleep 1 00:03:32.767 07:50:24 -- pm/common@21 -- # date +%s 00:03:32.767 07:50:24 -- pm/common@21 -- # date +%s 00:03:32.767 07:50:24 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720849824 00:03:32.767 07:50:24 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720849824 00:03:32.767 07:50:24 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720849824 00:03:32.767 07:50:24 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720849824 00:03:32.767 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720849824_collect-vmstat.pm.log 00:03:32.767 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720849824_collect-cpu-load.pm.log 00:03:32.767 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720849824_collect-cpu-temp.pm.log 00:03:32.767 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720849824_collect-bmc-pm.bmc.pm.log 00:03:33.703 07:50:25 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:33.703 07:50:25 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:33.703 07:50:25 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:33.703 07:50:25 -- common/autotest_common.sh@10 -- # set +x 00:03:33.703 07:50:25 -- spdk/autotest.sh@59 -- # create_test_list 00:03:33.703 07:50:25 -- 
common/autotest_common.sh@746 -- # xtrace_disable 00:03:33.703 07:50:25 -- common/autotest_common.sh@10 -- # set +x 00:03:33.703 07:50:25 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:03:33.703 07:50:25 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:33.703 07:50:25 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:33.703 07:50:25 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:33.703 07:50:25 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:33.703 07:50:25 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:33.703 07:50:25 -- common/autotest_common.sh@1455 -- # uname 00:03:33.703 07:50:25 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:33.703 07:50:25 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:33.703 07:50:25 -- common/autotest_common.sh@1475 -- # uname 00:03:33.703 07:50:25 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:33.703 07:50:25 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:33.703 07:50:25 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:03:33.703 07:50:25 -- spdk/autotest.sh@72 -- # hash lcov 00:03:33.703 07:50:25 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:33.703 07:50:25 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:03:33.703 --rc lcov_branch_coverage=1 00:03:33.703 --rc lcov_function_coverage=1 00:03:33.703 --rc genhtml_branch_coverage=1 00:03:33.703 --rc genhtml_function_coverage=1 00:03:33.703 --rc genhtml_legend=1 00:03:33.703 --rc geninfo_all_blocks=1 00:03:33.703 ' 00:03:33.703 07:50:25 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:03:33.703 --rc lcov_branch_coverage=1 00:03:33.703 --rc lcov_function_coverage=1 00:03:33.703 --rc genhtml_branch_coverage=1 00:03:33.703 --rc genhtml_function_coverage=1 00:03:33.703 --rc genhtml_legend=1 00:03:33.703 --rc geninfo_all_blocks=1 00:03:33.703 ' 00:03:33.704 07:50:25 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:03:33.704 --rc lcov_branch_coverage=1 00:03:33.704 --rc lcov_function_coverage=1 00:03:33.704 --rc genhtml_branch_coverage=1 00:03:33.704 --rc genhtml_function_coverage=1 00:03:33.704 --rc genhtml_legend=1 00:03:33.704 --rc geninfo_all_blocks=1 00:03:33.704 --no-external' 00:03:33.704 07:50:25 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:03:33.704 --rc lcov_branch_coverage=1 00:03:33.704 --rc lcov_function_coverage=1 00:03:33.704 --rc genhtml_branch_coverage=1 00:03:33.704 --rc genhtml_function_coverage=1 00:03:33.704 --rc genhtml_legend=1 00:03:33.704 --rc geninfo_all_blocks=1 00:03:33.704 --no-external' 00:03:33.704 07:50:25 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:33.962 lcov: LCOV version 1.14 00:03:33.962 07:50:25 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:03:39.222 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:03:39.222 geninfo: 
WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:03:39.223 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:03:39.223 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:03:39.223 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:03:39.223 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:03:39.223 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:03:39.223 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:03:39.223 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:03:39.223 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:03:39.223 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:03:39.223 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:03:39.223 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:03:39.223 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:03:39.223 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:03:39.223 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:03:39.223 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:03:39.223 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:03:39.223 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:03:39.223 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:03:39.223 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:03:39.223 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:03:39.223 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:03:39.223 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:03:39.223 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:03:39.223 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:03:39.223 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:03:39.223 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:03:39.223 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:03:39.223 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:03:39.223 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:03:39.223 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:03:39.223 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:03:39.223 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:03:39.223 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:03:39.223 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:03:39.223 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:03:39.223 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:03:39.223 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:03:39.223 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:03:39.223 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:03:39.223 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:03:39.223 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:03:39.223 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:03:39.223 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:03:39.223 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:03:39.223 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:03:39.223 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:03:39.223 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:03:39.223 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:03:39.223 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:03:39.223 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:03:39.223 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:03:39.223 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:03:39.223 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:03:39.223 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:03:39.223 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:03:39.223 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:03:39.223 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:03:39.223 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:03:39.223 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:03:39.223 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:03:39.223 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:03:39.223 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:03:39.223 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:03:39.223 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:03:39.223 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:03:39.223 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:03:39.223 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:03:39.223 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:03:39.223 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:03:39.223 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:03:39.223 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:03:39.223 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:03:39.223 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:03:39.223 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:03:39.223 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:03:39.223 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:03:39.223 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:03:39.223 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:03:39.223 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:03:39.223 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:03:39.223 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:03:39.223 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:03:39.223 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:03:39.223 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:03:39.223 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:03:39.223 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:03:39.223 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:03:39.223 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:03:39.223 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:03:39.223 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:03:39.223 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:03:39.223 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:03:39.223 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:03:39.223 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:03:39.223 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:03:39.223 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:03:39.223 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:03:39.223 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:03:39.223 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:03:39.223 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:03:39.223 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:03:39.223 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:03:39.223 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:03:39.223 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:03:39.223 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:03:39.223 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:03:39.223 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:03:39.223 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:03:39.223 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:03:39.223 
geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:03:39.224 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:03:39.224 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:03:39.224 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:03:39.224 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:03:39.224 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:03:39.224 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:03:39.480 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:03:39.480 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:03:39.480 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:03:39.480 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:03:39.480 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:03:39.480 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:03:39.480 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:03:39.480 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:03:39.480 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:03:39.480 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:03:39.480 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:03:39.480 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:03:39.480 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:03:39.480 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:03:39.480 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:03:39.480 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:03:39.480 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:03:39.480 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:03:39.480 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:03:39.480 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:03:39.480 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:03:39.480 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:03:39.480 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:03:39.480 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:03:39.480 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:03:39.480 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:03:39.480 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:03:39.480 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:03:39.480 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:03:39.480 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:03:39.480 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:03:39.480 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:03:39.480 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:03:39.480 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:03:39.480 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:03:39.480 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:03:39.480 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:03:39.480 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:03:39.480 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:03:39.481 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:03:39.481 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:03:39.481 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:03:39.481 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:03:39.481 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:03:39.481 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:03:39.481 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:03:39.481 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:03:39.481 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:03:39.481 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:03:39.481 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:03:39.481 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:03:39.481 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:03:39.481 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:03:39.481 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:03:39.481 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:03:39.481 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:03:39.481 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:03:39.481 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:04:01.398 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:01.398 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:04:07.952 07:50:58 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:04:07.952 07:50:58 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:07.952 07:50:58 -- common/autotest_common.sh@10 -- # set +x 00:04:07.952 07:50:58 -- spdk/autotest.sh@91 -- # rm -f 00:04:07.952 07:50:58 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:08.210 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:04:08.210 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:04:08.210 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:04:08.210 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:04:08.210 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:04:08.467 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:04:08.467 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:04:08.467 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:04:08.467 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:04:08.467 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:04:08.467 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:04:08.467 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:04:08.467 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:04:08.467 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:04:08.467 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:04:08.467 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:04:08.467 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:04:08.467 07:51:00 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:04:08.467 07:51:00 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:08.467 07:51:00 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:08.467 07:51:00 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:08.467 07:51:00 -- 
common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:08.467 07:51:00 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:08.468 07:51:00 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:08.468 07:51:00 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:08.468 07:51:00 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:08.468 07:51:00 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:04:08.468 07:51:00 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:08.468 07:51:00 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:08.468 07:51:00 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:04:08.468 07:51:00 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:04:08.468 07:51:00 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:08.726 No valid GPT data, bailing 00:04:08.726 07:51:00 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:08.726 07:51:00 -- scripts/common.sh@391 -- # pt= 00:04:08.726 07:51:00 -- scripts/common.sh@392 -- # return 1 00:04:08.726 07:51:00 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:08.726 1+0 records in 00:04:08.726 1+0 records out 00:04:08.726 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00458824 s, 229 MB/s 00:04:08.726 07:51:00 -- spdk/autotest.sh@118 -- # sync 00:04:08.726 07:51:00 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:08.726 07:51:00 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:08.726 07:51:00 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:10.632 07:51:02 -- spdk/autotest.sh@124 -- # uname -s 00:04:10.632 07:51:02 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:04:10.632 07:51:02 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:04:10.632 07:51:02 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:10.632 07:51:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:10.632 07:51:02 -- common/autotest_common.sh@10 -- # set +x 00:04:10.632 ************************************ 00:04:10.632 START TEST setup.sh 00:04:10.632 ************************************ 00:04:10.632 07:51:02 setup.sh -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:04:10.632 * Looking for test storage... 00:04:10.632 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:10.632 07:51:02 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:04:10.632 07:51:02 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:10.632 07:51:02 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:04:10.632 07:51:02 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:10.632 07:51:02 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:10.632 07:51:02 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:10.632 ************************************ 00:04:10.632 START TEST acl 00:04:10.632 ************************************ 00:04:10.632 07:51:02 setup.sh.acl -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:04:10.632 * Looking for test storage... 
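The pre-cleanup trace above applies two guards before touching the test disk: get_zoned_devs walks /sys/block/nvme*/queue/zoned to exclude zoned namespaces, and the drive is only dd-zeroed after spdk-gpt.py and blkid confirm there is no partition table ("No valid GPT data, bailing"). A condensed sketch of both guards, assuming root and the /dev/nvme0n1 device seen in this log; this is a simplification, not the script verbatim:

    # Collect zoned NVMe namespaces, which must never be wiped
    zoned_devs=()
    for nvme in /sys/block/nvme*; do
        [[ -e $nvme/queue/zoned && $(<"$nvme/queue/zoned") != none ]] &&
            zoned_devs+=("${nvme##*/}")
    done

    # Wipe only when no partition-table signature is present
    dev=/dev/nvme0n1
    pt=$(blkid -s PTTYPE -o value "$dev")        # empty when no GPT/MBR exists
    if [[ -z $pt ]]; then
        dd if=/dev/zero of="$dev" bs=1M count=1  # zero the first MiB, as the trace shows
    fi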
00:04:10.632 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:10.632 07:51:02 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:04:10.632 07:51:02 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:10.632 07:51:02 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:10.632 07:51:02 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:10.632 07:51:02 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:10.632 07:51:02 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:10.632 07:51:02 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:10.632 07:51:02 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:10.632 07:51:02 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:10.632 07:51:02 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:04:10.632 07:51:02 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:04:10.632 07:51:02 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:04:10.632 07:51:02 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:04:10.632 07:51:02 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:04:10.632 07:51:02 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:10.632 07:51:02 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:12.021 07:51:03 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:04:12.021 07:51:03 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:04:12.021 07:51:03 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:12.021 07:51:03 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:04:12.021 07:51:03 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:04:12.021 07:51:03 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:12.955 Hugepages 00:04:12.955 node hugesize free / total 00:04:12.955 07:51:04 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:12.955 07:51:04 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:12.955 07:51:04 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:12.955 07:51:04 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:12.955 07:51:04 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:12.955 07:51:04 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:12.955 07:51:04 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:12.955 07:51:04 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:12.955 07:51:04 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:12.955 00:04:12.955 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:12.955 07:51:04 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:12.955 07:51:04 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:12.955 07:51:04 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:12.955 07:51:04 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:04:12.955 07:51:04 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:12.955 07:51:04 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:12.955 07:51:04 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:12.955 07:51:04 setup.sh.acl -- setup/acl.sh@19 
-- # [[ 0000:00:04.1 == *:*:*.* ]] 00:04:12.955 07:51:04 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:12.955 07:51:04 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:12.955 07:51:04 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:12.955 07:51:04 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:04:12.955 07:51:04 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:12.955 07:51:04 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:12.955 07:51:04 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:12.955 07:51:04 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:04:12.955 07:51:04 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:12.955 07:51:04 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:12.955 07:51:04 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:12.955 07:51:04 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:04:12.955 07:51:04 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:12.955 07:51:04 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:12.955 07:51:04 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:12.955 07:51:04 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:04:12.955 07:51:04 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:12.955 07:51:04 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:12.955 07:51:04 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:12.955 07:51:04 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:04:12.955 07:51:04 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:12.955 07:51:04 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:12.955 07:51:04 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:12.955 07:51:04 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:04:12.955 07:51:04 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:12.955 07:51:04 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:12.955 07:51:04 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:12.955 07:51:04 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:04:12.955 07:51:04 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:12.955 07:51:04 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:12.955 07:51:04 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:12.955 07:51:04 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:04:12.955 07:51:04 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:12.955 07:51:04 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:12.955 07:51:04 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:12.955 07:51:04 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:04:12.955 07:51:04 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:12.955 07:51:04 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:12.955 07:51:04 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:12.955 07:51:04 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:04:12.955 07:51:04 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:12.955 07:51:04 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:12.955 07:51:04 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:13.213 07:51:04 
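The long run of [[ ... == *:*:*.* ]] checks above is acl.sh's collect_setup_devs loop: it reads the `setup.sh status` table row by row, skips the hugepage lines and headers (they do not match a BDF pattern), discards the ioatdma channels, and keeps only the nvme-bound controller 0000:88:00.0 in the devs/drivers arrays. Roughly, as a simplified sketch of the loop being traced rather than the script itself:

    devs=() ; declare -A drivers
    while read -r _ dev _ _ _ driver _; do
        [[ $dev == *:*:*.* ]] || continue   # drops "Hugepages" rows and table headers
        [[ $driver == nvme ]] || continue   # ioatdma channels fail this check above
        devs+=("$dev") ; drivers[$dev]=$driver
    done < <(scripts/setup.sh status)       # path shortened; the log uses the full one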
setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:04:13.213 07:51:04 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:13.213 07:51:04 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:13.213 07:51:04 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:13.213 07:51:04 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:04:13.213 07:51:04 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:13.213 07:51:04 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:13.213 07:51:04 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:13.213 07:51:04 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:04:13.213 07:51:04 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:13.213 07:51:04 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:13.213 07:51:04 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:13.213 07:51:04 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:04:13.213 07:51:04 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:13.213 07:51:04 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:13.213 07:51:04 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:13.213 07:51:04 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:88:00.0 == *:*:*.* ]] 00:04:13.213 07:51:04 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:13.213 07:51:04 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:04:13.213 07:51:04 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:13.213 07:51:04 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:13.213 07:51:04 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:13.213 07:51:04 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:04:13.213 07:51:04 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:04:13.213 07:51:04 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:13.213 07:51:04 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:13.213 07:51:04 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:13.213 ************************************ 00:04:13.213 START TEST denied 00:04:13.213 ************************************ 00:04:13.213 07:51:04 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:04:13.213 07:51:04 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:88:00.0' 00:04:13.213 07:51:04 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:04:13.213 07:51:04 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:88:00.0' 00:04:13.213 07:51:04 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:04:13.213 07:51:04 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:14.586 0000:88:00.0 (8086 0a54): Skipping denied controller at 0000:88:00.0 00:04:14.586 07:51:06 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:88:00.0 00:04:14.586 07:51:06 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:04:14.586 07:51:06 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:04:14.586 07:51:06 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:88:00.0 ]] 00:04:14.586 07:51:06 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:88:00.0/driver 00:04:14.586 07:51:06 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:14.586 07:51:06 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:14.586 07:51:06 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:04:14.586 07:51:06 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:14.586 07:51:06 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:17.115 00:04:17.115 real 0m3.777s 00:04:17.115 user 0m1.051s 00:04:17.115 sys 0m1.850s 00:04:17.115 07:51:08 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:17.115 07:51:08 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:04:17.115 ************************************ 00:04:17.115 END TEST denied 00:04:17.115 ************************************ 00:04:17.115 07:51:08 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:04:17.115 07:51:08 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:17.115 07:51:08 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:17.115 07:51:08 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:17.115 07:51:08 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:17.115 ************************************ 00:04:17.115 START TEST allowed 00:04:17.115 ************************************ 00:04:17.115 07:51:08 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:04:17.115 07:51:08 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:88:00.0 00:04:17.115 07:51:08 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:88:00.0 .*: nvme -> .*' 00:04:17.115 07:51:08 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:04:17.115 07:51:08 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:04:17.115 07:51:08 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:19.639 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:04:19.639 07:51:10 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:04:19.639 07:51:10 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:04:19.639 07:51:10 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:04:19.639 07:51:10 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:19.639 07:51:10 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:21.012 00:04:21.012 real 0m3.818s 00:04:21.012 user 0m1.051s 00:04:21.012 sys 0m1.601s 00:04:21.012 07:51:12 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:21.012 07:51:12 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:04:21.012 ************************************ 00:04:21.012 END TEST allowed 00:04:21.012 ************************************ 00:04:21.012 07:51:12 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:04:21.012 00:04:21.012 real 0m10.247s 00:04:21.012 user 0m3.154s 00:04:21.012 sys 0m5.118s 00:04:21.012 07:51:12 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:21.012 07:51:12 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:21.012 ************************************ 00:04:21.012 END TEST acl 00:04:21.012 ************************************ 00:04:21.012 07:51:12 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:21.012 07:51:12 setup.sh -- 
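The acl test that just finished exercised setup.sh's device filtering from both sides: `denied` exported PCI_BLOCKED=' 0000:88:00.0' and grepped the config output for "Skipping denied controller", while `allowed` exported PCI_ALLOWED=0000:88:00.0 and expected the controller to be rebound (nvme -> vfio-pci). Both verify the result by resolving the device's driver symlink, along these lines (a sketch using the BDF from this log):

    bdf=0000:88:00.0
    # After the denied pass the controller must still sit on the kernel nvme driver
    driver=$(readlink -f /sys/bus/pci/devices/$bdf/driver)
    [[ ${driver##*/} == nvme ]] && echo "$bdf still bound to nvme"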
setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:04:21.012 07:51:12 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:21.012 07:51:12 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:21.012 07:51:12 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:21.012 ************************************ 00:04:21.012 START TEST hugepages 00:04:21.012 ************************************ 00:04:21.012 07:51:12 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:04:21.012 * Looking for test storage... 00:04:21.012 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:21.012 07:51:12 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:21.012 07:51:12 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:21.012 07:51:12 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:21.012 07:51:12 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:21.012 07:51:12 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:21.012 07:51:12 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:21.012 07:51:12 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:21.012 07:51:12 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:04:21.012 07:51:12 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:04:21.012 07:51:12 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:04:21.012 07:51:12 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:21.012 07:51:12 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:21.012 07:51:12 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:21.012 07:51:12 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:04:21.012 07:51:12 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:21.012 07:51:12 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:21.012 07:51:12 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:21.012 07:51:12 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 41756468 kB' 'MemAvailable: 45263124 kB' 'Buffers: 2704 kB' 'Cached: 12209280 kB' 'SwapCached: 0 kB' 'Active: 9205864 kB' 'Inactive: 3506552 kB' 'Active(anon): 8811512 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 503904 kB' 'Mapped: 169364 kB' 'Shmem: 8311080 kB' 'KReclaimable: 198748 kB' 'Slab: 571056 kB' 'SReclaimable: 198748 kB' 'SUnreclaim: 372308 kB' 'KernelStack: 12768 kB' 'PageTables: 7972 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36562304 kB' 'Committed_AS: 9934836 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195872 kB' 'VmallocChunk: 0 kB' 'Percpu: 35328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 
'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 1805916 kB' 'DirectMap2M: 15939584 kB' 'DirectMap1G: 51380224 kB' 00:04:21.012 07:51:12 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 07:51:12 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:21.012 07:51:12 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:21.012 07:51:12 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
[... identical xtrace iterations elided: every remaining /proc/meminfo field from MemFree through HugePages_Free is compared against Hugepagesize the same way and skipped with continue ...]
00:04:21.014 07:51:12 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:21.014 07:51:12 setup.sh.hugepages -- 
setup/common.sh@31 -- # read -r var val _ 00:04:21.014 07:51:12 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.014 07:51:12 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:21.014 07:51:12 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:21.014 07:51:12 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:21.014 07:51:12 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.014 07:51:12 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:21.014 07:51:12 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:21.014 07:51:12 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:21.014 07:51:12 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:21.014 07:51:12 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:04:21.014 07:51:12 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:04:21.014 07:51:12 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:04:21.014 07:51:12 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:21.014 07:51:12 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:21.014 07:51:12 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:04:21.014 07:51:12 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:04:21.014 07:51:12 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:04:21.014 07:51:12 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:04:21.014 07:51:12 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:04:21.014 07:51:12 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:04:21.014 07:51:12 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:21.014 07:51:12 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:04:21.014 07:51:12 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:21.014 07:51:12 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:21.014 07:51:12 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:21.014 07:51:12 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:21.014 07:51:12 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:04:21.014 07:51:12 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:21.014 07:51:12 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:21.014 07:51:12 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:21.014 07:51:12 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:21.014 07:51:12 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:21.014 07:51:12 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:21.014 07:51:12 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:21.014 07:51:12 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:21.014 07:51:12 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:21.014 
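The trace above shows hugepages.sh priming its state: get_meminfo scans /proc/meminfo field by field until Hugepagesize matches (2048 kB here), get_nodes counts the two NUMA nodes, and clear_hp zeroes every per-node hugepage pool before the tests start. The same sequence, condensed into a sketch (sysfs layout as in the trace; the writes require root):

    # Hugepagesize in kB, straight from /proc/meminfo (2048 on this machine)
    hp_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)

    # clear_hp equivalent: reset nr_hugepages on every node and page size
    for node in /sys/devices/system/node/node[0-9]*; do
        for hp in "$node"/hugepages/hugepages-*; do
            echo 0 > "$hp/nr_hugepages"
        done
    done
    export CLEAR_HUGE=yes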
07:51:12 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:21.014 07:51:12 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:21.014 07:51:12 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:21.014 07:51:12 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:21.014 07:51:12 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:04:21.014 07:51:12 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:21.014 07:51:12 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:21.014 07:51:12 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:21.014 ************************************ 00:04:21.014 START TEST default_setup 00:04:21.014 ************************************ 00:04:21.014 07:51:12 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:04:21.014 07:51:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:04:21.014 07:51:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:04:21.014 07:51:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:21.014 07:51:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:04:21.014 07:51:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:21.014 07:51:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:04:21.014 07:51:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:21.014 07:51:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:21.014 07:51:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:21.014 07:51:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:21.014 07:51:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:04:21.014 07:51:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:21.014 07:51:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:21.014 07:51:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:21.014 07:51:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:21.014 07:51:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:21.014 07:51:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:21.014 07:51:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:21.014 07:51:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:04:21.014 07:51:12 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:04:21.014 07:51:12 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:04:21.014 07:51:12 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:22.384 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:22.384 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:22.384 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:22.384 
0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:22.384 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:22.384 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:22.384 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:22.384 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:22.384 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:22.384 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:22.384 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:22.384 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:22.384 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:22.384 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:22.384 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:22.384 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:23.317 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:04:23.317 07:51:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:23.317 07:51:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:04:23.317 07:51:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:04:23.317 07:51:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:04:23.317 07:51:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:04:23.317 07:51:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:04:23.317 07:51:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:04:23.317 07:51:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:23.317 07:51:14 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:23.317 07:51:14 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:23.317 07:51:14 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:23.317 07:51:14 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:23.317 07:51:14 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:23.317 07:51:14 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:23.317 07:51:14 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:23.317 07:51:14 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:23.317 07:51:14 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:23.318 07:51:14 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:23.318 07:51:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.318 07:51:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.318 07:51:14 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43851812 kB' 'MemAvailable: 47358476 kB' 'Buffers: 2704 kB' 'Cached: 12217568 kB' 'SwapCached: 0 kB' 'Active: 9232688 kB' 'Inactive: 3506552 kB' 'Active(anon): 8838336 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522352 kB' 'Mapped: 169304 kB' 'Shmem: 8319368 kB' 'KReclaimable: 198764 kB' 'Slab: 570744 kB' 'SReclaimable: 198764 kB' 'SUnreclaim: 371980 kB' 
'KernelStack: 12848 kB' 'PageTables: 8316 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9963688 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196016 kB' 'VmallocChunk: 0 kB' 'Percpu: 35328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1805916 kB' 'DirectMap2M: 15939584 kB' 'DirectMap1G: 51380224 kB' 00:04:23.318 07:51:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.318 07:51:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.318 07:51:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.318 07:51:14 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.318 07:51:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.318 07:51:14 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.318 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.318 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.318 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.318 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.318 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.318 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.318 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.318 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.318 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.318 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.318 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.318 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.318 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.318 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.318 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.318 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.318 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.318 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.318 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.318 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.318 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.318 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.318 
07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.318 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.318 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.318 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.318 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.318 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.318 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.318 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.318 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.318 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.318 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.318 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.318 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.318 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.318 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.318 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.318 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.318 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.318 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.318 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.318 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.318 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.318 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.318 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.318 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.318 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.318 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.318 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.318 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.318 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.318 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.318 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.318 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.318 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.318 07:51:15 setup.sh.hugepages.default_setup 
-- setup/common.sh@31 -- # IFS=': ' 00:04:23.318 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.318 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.318 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.318 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.318 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.318 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.318 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.318 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.318 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.318 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.318 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.318 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.318 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.318 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.318 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.318 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.318 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.318 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.318 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.318 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.318 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.318 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.318 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.318 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.318 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.318 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.318 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.318 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.318 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.318 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.318 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.318 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.318 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.318 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.318 07:51:15 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.318 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.318 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.318 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.318 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.318 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.318 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.318 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.318 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.318 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.318 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.318 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.318 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.318 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.318 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.319 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.319 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.319 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.319 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.319 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.319 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.319 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.319 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.319 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.319 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.319 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.319 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.319 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.319 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.319 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.319 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.319 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.319 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.319 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.319 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
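
The wall of "[[ <key> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] ... continue" pairs above is the xtrace of get_meminfo scanning /proc/meminfo one key at a time until it reaches AnonHugePages. A minimal sketch of that pattern (illustrative, not the verbatim setup/common.sh source; the dynamic "$get" comparison stands in for the escaped literal seen in the trace):

    #!/usr/bin/env bash
    shopt -s extglob    # needed for the +([0-9]) pattern below

    get_meminfo() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo var val _
        # Per-NUMA-node meminfo files exist too; their lines carry a "Node N " prefix.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")    # strip the per-node prefix, if any
        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue    # each non-matching key is one "continue" in the trace
            echo "${val:-0}"
            return 0
        done
        echo 0
    }

    get_meminfo AnonHugePages    # prints 0 on the machine traced here
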
00:04:23.319 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.319 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.319 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.319 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.319 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.319 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.319 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.319 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.319 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.319 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.319 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.319 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.319 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.319 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.319 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.319 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.319 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.319 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.319 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.319 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.319 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.319 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.319 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.319 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.319 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.319 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.319 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.319 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.319 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:23.319 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:23.319 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:23.319 07:51:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:04:23.319 07:51:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:23.319 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:23.319 07:51:15 
setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:23.319 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:23.319 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:23.319 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:23.319 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:23.319 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:23.319 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:23.319 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:23.319 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.319 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.319 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43851136 kB' 'MemAvailable: 47357800 kB' 'Buffers: 2704 kB' 'Cached: 12217568 kB' 'SwapCached: 0 kB' 'Active: 9233400 kB' 'Inactive: 3506552 kB' 'Active(anon): 8839048 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 523088 kB' 'Mapped: 169304 kB' 'Shmem: 8319368 kB' 'KReclaimable: 198764 kB' 'Slab: 570688 kB' 'SReclaimable: 198764 kB' 'SUnreclaim: 371924 kB' 'KernelStack: 12912 kB' 'PageTables: 8192 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9964796 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195984 kB' 'VmallocChunk: 0 kB' 'Percpu: 35328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1805916 kB' 'DirectMap2M: 15939584 kB' 'DirectMap1G: 51380224 kB' 00:04:23.319 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.319 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.319 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.319 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.319 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.319 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.319 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.319 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.319 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.319 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.319 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.319 07:51:15 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:04:23.319 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.319 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.319 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.319 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.319 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.319 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.319 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.319 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.319 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.319 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.319 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.319 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.319 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.319 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.319 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.319 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.319 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.319 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.319 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.319 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.319 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.319 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.319 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.319 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.319 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.319 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.319 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.319 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.319 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.319 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.319 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.319 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.319 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.319 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 
-- # continue 00:04:23.319 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.319 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.319 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.319 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.319 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.319 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.320 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.320 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.320 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.320 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.320 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.320 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.320 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.320 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.320 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.320 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.320 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.320 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.320 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.320 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.320 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.320 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.320 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.320 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.320 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.320 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.320 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.320 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.320 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.320 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.320 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.320 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.320 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.320 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.320 07:51:15 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.320 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.320 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.320 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.320 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.320 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.320 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.320 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.320 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.320 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.320 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.320 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.320 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.320 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.320 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.320 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.320 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.320 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.320 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.320 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.320 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.320 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.320 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.320 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.320 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.320 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.320 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.320 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.320 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.320 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.320 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.320 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.320 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.320 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.320 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
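
A note on the escaped form \H\u\g\e\P\a\g\e\s\_\S\u\r\p that xtrace keeps printing: inside [[ lhs == rhs ]] an unquoted right-hand side is a glob pattern, so the generated comparison backslash-escapes every character to force a literal match, and set -x echoes those escapes back verbatim. A two-line reproduction:

    set -x
    var=HugePages_Surp
    [[ $var == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] && echo literal match
    set +x
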
00:04:23.320 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.320 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.320 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.320 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.320 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.320 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.320 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.320 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.320 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.320 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.320 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.320 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.320 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.320 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.320 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.320 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.320 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.320 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.320 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.320 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.320 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.320 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.320 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.320 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.320 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.320 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.320 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.320 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.320 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.320 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.320 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.320 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.320 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.320 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
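
The full meminfo dump a few lines above also lets the hugepage numbers be cross-checked: HugePages_Total: 1024 at Hugepagesize: 2048 kB gives exactly the reported Hugetlb: 2097152 kB (1024 * 2048). A quick self-check along those lines (assumes a single active hugepage size and a kernel new enough to expose the Hugetlb field):

    size_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    hugetlb_kb=$(awk '/^Hugetlb:/ {print $2}' /proc/meminfo)
    # With one hugepage size in use, Hugetlb is simply pages * page size.
    (( hugetlb_kb == total * size_kb )) && echo "hugetlb accounting consistent"
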
00:04:23.320 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.320 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.320 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.320 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.320 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.320 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.320 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.320 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.320 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.320 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.320 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.320 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.320 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.320 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.320 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.320 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.320 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.320 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.320 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.320 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.320 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.320 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.320 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.320 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.320 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.320 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.320 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.320 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.320 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.320 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.320 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.320 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.320 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.320 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.321 07:51:15 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.321 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.321 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.321 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.321 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.321 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.321 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.321 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.321 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.321 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.321 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.321 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.321 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.321 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.321 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.321 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.321 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.321 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.321 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.321 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.321 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.321 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.321 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:23.321 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:23.321 07:51:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:04:23.321 07:51:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:23.321 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:23.321 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:23.321 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:23.321 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:23.321 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:23.321 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:23.321 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:23.321 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:23.321 07:51:15 setup.sh.hugepages.default_setup -- 
setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:23.583 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.583 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.583 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43847404 kB' 'MemAvailable: 47354068 kB' 'Buffers: 2704 kB' 'Cached: 12217572 kB' 'SwapCached: 0 kB' 'Active: 9236080 kB' 'Inactive: 3506552 kB' 'Active(anon): 8841728 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 525764 kB' 'Mapped: 169800 kB' 'Shmem: 8319372 kB' 'KReclaimable: 198764 kB' 'Slab: 570736 kB' 'SReclaimable: 198764 kB' 'SUnreclaim: 371972 kB' 'KernelStack: 12832 kB' 'PageTables: 7972 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9967724 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195952 kB' 'VmallocChunk: 0 kB' 'Percpu: 35328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1805916 kB' 'DirectMap2M: 15939584 kB' 'DirectMap1G: 51380224 kB' 00:04:23.583 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.583 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.583 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.583 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.583 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.583 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.583 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.583 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.583 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.583 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.583 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.583 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.583 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.583 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.583 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.583 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.583 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.583 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.583 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 
-- # IFS=': ' 00:04:23.583 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.583 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.583 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.583 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.584 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.584 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.584 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.584 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.584 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.584 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.584 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.584 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.584 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.584 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.584 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.584 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.584 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.584 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.584 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.584 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.584 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.584 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.584 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.584 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.584 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.584 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.584 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.584 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.584 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.584 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.584 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.584 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.584 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.584 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
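
Shortly below, the trace reaches the point of all this scanning (hugepages.sh@97-@110): anon, surp and resv each come back 0, the script echoes nr_hugepages=1024 / resv_hugepages=0 / surplus_hugepages=0 / anon_hugepages=0, and asserts (( 1024 == nr_hugepages + surp + resv )), i.e. every configured hugepage is accounted for with nothing surplus or reserved and no anonymous THP in play. A standalone sketch of that check (the /proc/meminfo keys are real; the function name is illustrative):

    check_default_hugepages() {
        local expected=$1 anon surp resv total
        anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)    # kB of THP; 0 in this run
        surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
        resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
        total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
        echo "nr_hugepages=$total resv_hugepages=$resv surplus_hugepages=$surp anon_hugepages=$anon"
        (( total == expected + surp + resv ))
    }

    check_default_hugepages 1024 || echo "hugepage accounting mismatch"
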
00:04:23.584 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.584 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.584 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.584 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.584 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.584 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.584 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.584 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.584 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.584 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.584 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.584 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.584 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.584 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.584 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.584 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.584 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.584 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.584 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.584 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.584 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.584 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.584 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.584 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.584 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.584 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.584 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.584 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.584 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.584 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.584 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.584 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.584 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.584 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.584 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r 
var val _ 00:04:23.584 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.584 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.584 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.584 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.584 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.584 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.584 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.584 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.584 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.584 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.584 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.584 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.584 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.584 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.584 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.584 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.584 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.584 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.584 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.584 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.584 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.584 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.584 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.584 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.584 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.584 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.584 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.584 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.584 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.584 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.584 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.584 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.584 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.584 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.584 
07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.584 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.584 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.584 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.584 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.584 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.584 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.584 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.584 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.584 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.584 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.584 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.584 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.584 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.584 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.584 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.584 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.584 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.584 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.584 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.584 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.584 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.584 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.584 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.584 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.584 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.584 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.584 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.584 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.585 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.585 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.585 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.585 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.585 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.585 07:51:15 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.585 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.585 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.585 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.585 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.585 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.585 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.585 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.585 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.585 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.585 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.585 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.585 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.585 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.585 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.585 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.585 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.585 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.585 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.585 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.585 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.585 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.585 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.585 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.585 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.585 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.585 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.585 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.585 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.585 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.585 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.585 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.585 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.585 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.585 07:51:15 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:04:23.585 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.585 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.585 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.585 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.585 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.585 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.585 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.585 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.585 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.585 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:23.585 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:23.585 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:23.585 07:51:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:04:23.585 07:51:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:23.585 nr_hugepages=1024 00:04:23.585 07:51:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:23.585 resv_hugepages=0 00:04:23.585 07:51:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:23.585 surplus_hugepages=0 00:04:23.585 07:51:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:23.585 anon_hugepages=0 00:04:23.585 07:51:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:23.585 07:51:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:23.585 07:51:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:23.585 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:23.585 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:23.585 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:23.585 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:23.585 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:23.585 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:23.585 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:23.585 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:23.585 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:23.585 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.585 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.585 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43842896 
kB' 'MemAvailable: 47349560 kB' 'Buffers: 2704 kB' 'Cached: 12217608 kB' 'SwapCached: 0 kB' 'Active: 9238280 kB' 'Inactive: 3506552 kB' 'Active(anon): 8843928 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 527884 kB' 'Mapped: 170216 kB' 'Shmem: 8319408 kB' 'KReclaimable: 198764 kB' 'Slab: 570736 kB' 'SReclaimable: 198764 kB' 'SUnreclaim: 371972 kB' 'KernelStack: 12832 kB' 'PageTables: 8004 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9969868 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195940 kB' 'VmallocChunk: 0 kB' 'Percpu: 35328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1805916 kB' 'DirectMap2M: 15939584 kB' 'DirectMap1G: 51380224 kB' 00:04:23.585 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.585 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.585 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.585 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.585 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.585 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.585 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.585 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.585 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.585 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.585 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.585 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.585 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.585 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.585 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.585 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.585 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.585 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.585 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.585 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.585 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.585 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.585 07:51:15 
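The xtrace above is setup/common.sh's get_meminfo helper walking /proc/meminfo one "key: value" line at a time: every non-matching key hits the continue at common.sh@32, and the match at common.sh@33 echoes the value and returns. A minimal sketch of that scan, reconstructed from the trace (the real helper additionally mapfiles the file into an array and handles the per-node variant that appears further down):

    # Sketch of the key scan visible in the trace; simplified, not the
    # verbatim SPDK helper. For "HugePages_Rsvd: 0" it prints 0.
    get_meminfo() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # common.sh@32 in the trace
            echo "$val"                        # common.sh@33: echo, then return 0
            return 0
        done < /proc/meminfo
        return 1
    }

Called as get_meminfo HugePages_Rsvd it prints the 0 that hugepages.sh@100 stores as resv=0 above.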
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.585 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.585 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.585 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.585 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.585 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.585 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.585 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.585 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.585 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.585 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.585 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.585 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.585 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.585 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.585 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.585 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.585 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.585 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.585 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.585 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.586 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.586 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.586 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.586 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.586 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.586 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.586 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.586 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.586 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.586 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.586 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.586 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.586 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.586 07:51:15 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.586 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.586 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.586 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.586 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.586 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.586 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.586 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.586 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.586 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.586 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.586 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.586 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.586 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.586 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.586 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.586 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.586 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.586 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.586 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.586 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.586 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.586 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.586 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.586 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.586 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.586 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.586 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.586 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.586 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.586 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.586 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.586 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.586 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.586 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:04:23.586 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.586 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.586 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.586 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.586 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.586 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.586 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.586 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.586 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.586 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.586 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.586 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.586 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.586 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.586 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.586 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.586 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.586 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.586 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.586 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.586 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.586 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.586 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.586 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.586 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.586 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.586 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.586 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.586 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.586 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.586 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.586 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.586 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.586 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:04:23.586 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.586 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.586 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.586 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.586 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.586 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.586 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.586 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.586 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.586 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.586 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.586 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.586 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.586 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.586 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.586 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.586 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.586 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.586 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.586 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.586 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.586 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.586 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.586 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.586 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.586 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.586 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.586 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.586 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.586 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.586 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.586 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.586 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.586 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.586 07:51:15 
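The pass now under way is fetching HugePages_Total so that hugepages.sh@110 can re-check the accounting identity already tested at @107: the configured page count must equal nr_hugepages plus surplus plus reserved pages. With the values echoed earlier (nr_hugepages=1024, surplus_hugepages=0, resv_hugepages=0) the check reduces to:

    # The identity behind hugepages.sh@107 and @110, using this run's values.
    nr_hugepages=1024; surp=0; resv=0
    (( 1024 == nr_hugepages + surp + resv )) && echo ok   # prints ok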
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.586 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.586 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.586 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.586 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.586 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.586 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.586 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.586 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.586 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.586 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.586 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.586 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.586 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.587 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.587 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.587 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.587 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.587 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.587 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.587 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.587 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.587 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.587 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.587 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.587 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.587 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.587 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.587 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.587 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:23.587 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.587 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.587 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.587 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:04:23.587 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:04:23.587 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:23.587 07:51:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:23.587 07:51:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:04:23.587 07:51:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:04:23.587 07:51:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:23.587 07:51:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:23.587 07:51:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:23.587 07:51:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:23.587 07:51:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:23.587 07:51:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:23.587 07:51:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:23.587 07:51:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:23.587 07:51:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:23.587 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:23.587 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:04:23.587 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:23.587 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:23.587 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:23.587 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:23.587 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:23.587 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:23.587 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:23.587 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.587 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.587 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 25811048 kB' 'MemUsed: 7018836 kB' 'SwapCached: 0 kB' 'Active: 3663644 kB' 'Inactive: 110044 kB' 'Active(anon): 3552756 kB' 'Inactive(anon): 0 kB' 'Active(file): 110888 kB' 'Inactive(file): 110044 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3538500 kB' 'Mapped: 37932 kB' 'AnonPages: 238396 kB' 'Shmem: 3317568 kB' 'KernelStack: 7672 kB' 'PageTables: 4456 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 95908 kB' 'Slab: 319188 kB' 'SReclaimable: 95908 kB' 'SUnreclaim: 223280 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:23.587 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.587 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.587 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.587 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.587 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.587 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.587 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.587 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.587 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.587 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.587 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.587 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.587 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.587 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.587 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.587 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.587 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.587 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.587 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.587 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.587 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.587 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.587 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.587 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.587 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.587 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.587 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.587 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.587 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.587 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.587 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.587 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.587 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.587 07:51:15 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:04:23.587 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.587 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.587 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.587 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.587 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.587 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.587 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.587 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.587 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.587 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.587 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.587 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.587 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.587 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.587 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.587 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.587 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.587 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.587 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.587 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.587 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.587 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.587 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.587 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.587 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.587 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.587 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.587 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.587 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.587 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.588 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.588 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.588 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.588 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.588 07:51:15 
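The file being scanned at this point is no longer /proc/meminfo but /sys/devices/system/node/node0/meminfo, selected at common.sh@23-24 in the trace. Its lines carry a "Node 0 " prefix, which the helper strips with the extglob expansion mem=("${mem[@]#Node +([0-9]) }") so that the same key scan works for both the global and the per-node files. A sketch assembled from those trace fragments (assuming extglob, which the script plainly has enabled):

    # Per-node variant; node0 lines look like "Node 0 HugePages_Surp: 0".
    shopt -s extglob
    node=0
    mem_f=/sys/devices/system/node/node${node}/meminfo
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")           # strip the "Node 0 " prefix
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == HugePages_Surp ]] && { echo "$val"; break; }   # -> 0
    done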
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.588 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.588 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.588 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.588 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.588 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.588 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.588 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.588 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.588 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.588 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.588 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.588 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.588 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.588 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.588 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.588 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.588 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.588 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.588 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.588 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.588 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.588 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.588 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.588 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.588 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.588 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.588 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.588 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.588 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.588 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.588 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.588 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.588 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.588 07:51:15 setup.sh.hugepages.default_setup 
-- setup/common.sh@31 -- # IFS=': ' 00:04:23.588 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.588 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.588 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.588 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.588 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.588 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.588 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.588 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.588 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.588 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.588 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.588 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.588 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.588 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.588 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.588 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.588 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.588 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.588 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.588 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.588 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.588 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.588 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.588 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.588 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.588 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.588 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.588 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.588 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.588 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.588 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.588 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.588 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.588 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.588 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.588 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.588 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.588 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.588 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:23.588 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:23.588 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:23.588 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:23.588 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:23.588 07:51:15 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:23.588 07:51:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:23.588 07:51:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:23.588 07:51:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:23.588 07:51:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:23.588 07:51:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:23.588 node0=1024 expecting 1024 00:04:23.588 07:51:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:23.588 00:04:23.588 real 0m2.496s 00:04:23.588 user 0m0.719s 00:04:23.588 sys 0m0.886s 00:04:23.588 07:51:15 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:23.588 07:51:15 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:04:23.588 ************************************ 00:04:23.588 END TEST default_setup 00:04:23.588 ************************************ 00:04:23.588 07:51:15 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:23.588 07:51:15 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:04:23.588 07:51:15 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:23.588 07:51:15 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:23.588 07:51:15 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:23.588 ************************************ 00:04:23.588 START TEST per_node_1G_alloc 00:04:23.588 ************************************ 00:04:23.588 07:51:15 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:04:23.588 07:51:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:04:23.588 07:51:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:04:23.588 07:51:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:23.588 07:51:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:04:23.588 07:51:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:04:23.588 07:51:15 
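With default_setup passed (node0=1024 expecting 1024), per_node_1G_alloc begins by converting its size request into a page count: get_test_nr_hugepages is called with size=1048576 kB (1 GiB) plus node ids 0 and 1, and with the 2048 kB Hugepagesize reported in the meminfo dumps above that works out to 512 pages per node, matching the nr_hugepages=512 in the trace just below:

    # The page-count arithmetic behind hugepages.sh@57.
    size_kb=1048576            # 1 GiB requested per node
    hugepagesize_kb=2048       # "Hugepagesize: 2048 kB" from /proc/meminfo
    echo $(( size_kb / hugepagesize_kb ))   # -> 512 for each of nodes 0 and 1

The reservation itself is done by scripts/setup.sh, invoked next with NRHUGE=512 and HUGENODE=0,1. The standard kernel knob for this kind of per-node reservation is the node-local sysfs file (a hedged equivalent, not necessarily the exact mechanism setup.sh uses internally):

    # Manual per-node 2 MiB hugepage reservation via the kernel sysfs interface.
    for node in 0 1; do
        echo 512 > /sys/devices/system/node/node${node}/hugepages/hugepages-2048kB/nr_hugepages
    done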
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:04:23.588 07:51:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:23.588 07:51:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:23.588 07:51:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:23.588 07:51:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:04:23.588 07:51:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:04:23.588 07:51:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:23.588 07:51:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:23.588 07:51:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:23.588 07:51:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:23.588 07:51:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:23.588 07:51:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:04:23.589 07:51:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:23.589 07:51:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:23.589 07:51:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:23.589 07:51:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:23.589 07:51:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:23.589 07:51:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:04:23.589 07:51:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:04:23.589 07:51:15 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:04:23.589 07:51:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:23.589 07:51:15 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:24.521 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:24.521 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:24.521 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:24.521 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:24.521 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:24.521 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:24.521 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:24.521 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:24.521 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:24.521 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:24.521 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:24.521 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:24.521 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:24.521 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:24.521 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:24.521 
0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:24.521 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:24.785 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:04:24.785 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:04:24.785 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:24.785 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:24.785 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:24.785 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:24.785 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:24.785 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:24.785 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:24.785 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:24.785 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:24.785 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:24.785 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:24.785 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:24.785 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:24.785 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:24.785 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:24.785 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:24.785 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:24.785 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.785 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.785 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43843684 kB' 'MemAvailable: 47350348 kB' 'Buffers: 2704 kB' 'Cached: 12217676 kB' 'SwapCached: 0 kB' 'Active: 9233120 kB' 'Inactive: 3506552 kB' 'Active(anon): 8838768 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522580 kB' 'Mapped: 169520 kB' 'Shmem: 8319476 kB' 'KReclaimable: 198764 kB' 'Slab: 570776 kB' 'SReclaimable: 198764 kB' 'SUnreclaim: 372012 kB' 'KernelStack: 12816 kB' 'PageTables: 7920 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9963932 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196032 kB' 'VmallocChunk: 0 kB' 'Percpu: 35328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 
1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1805916 kB' 'DirectMap2M: 15939584 kB' 'DirectMap1G: 51380224 kB' 00:04:24.785 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.785 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.786 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.786 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.786 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.786 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.786 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.786 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.786 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.786 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.786 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.786 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.786 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.786 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.786 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.786 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.786 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.786 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.786 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.786 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.786 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.786 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.786 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.786 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.786 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.786 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.786 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.786 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.786 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.786 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.786 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.786 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:24.786 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.786 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.786 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.786 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.786 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.786 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.786 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.786 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.786 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.786 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.786 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.786 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.786 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.786 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.786 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.786 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.786 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.786 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.786 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.786 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.786 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.786 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.786 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.786 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.786 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.786 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.786 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.786 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.786 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.786 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.786 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.786 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.786 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.786 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.786 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.786 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.786 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.786 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.786 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.786 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.786 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.786 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.786 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.786 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.786 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.786 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.786 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.786 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.786 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.786 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.786 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.786 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.786 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.786 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.786 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.786 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.786 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.786 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.786 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.786 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.786 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.786 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.786 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.786 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.786 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.786 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.786 07:51:16 
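The scan in progress here is the AnonHugePages lookup that verify_nr_hugepages started at hugepages.sh@96-97: it only runs because the transparent-hugepage mode read just before it ("always [madvise] never" in this run) is not set to [never]. The guard, restated as a small sketch using the get_meminfo helper sketched earlier:

    # The THP guard seen at setup/hugepages.sh@96: skip the AnonHugePages
    # read only when THP is fully disabled. Standard THP sysfs path.
    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)   # "always [madvise] never" here
    if [[ $thp != *"[never]"* ]]; then
        anon=$(get_meminfo AnonHugePages)   # 0 (kB) in this run
    fi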
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.786 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.786 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.786 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.786 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.786 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.786 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.786 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.786 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.786 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.786 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.786 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.786 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.786 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.786 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.786 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.786 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.786 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.786 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.786 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.786 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.786 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.786 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.786 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.786 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.786 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.786 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.786 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.786 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.786 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.787 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.787 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.787 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.787 07:51:16 setup.sh.hugepages.per_node_1G_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:04:24.787 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.787 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.787 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.787 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.787 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.787 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.787 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.787 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.787 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.787 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.787 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.787 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.787 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.787 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.787 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.787 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.787 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.787 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.787 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.787 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.787 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.787 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.787 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.787 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.787 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.787 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.787 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.787 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.787 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:24.787 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:24.787 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:24.787 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:24.787 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # 
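The scan condensed above is setup/common.sh's get_meminfo walking /proc/meminfo line by line until the requested key matches. A minimal sketch of what the helper appears to do, reconstructed from this xtrace alone (the per-node sysfs fallback, the mapfile, and the "Node N " prefix strip are all visible in the trace; everything else is an assumption, and the real SPDK script may differ):

  #!/usr/bin/env bash
  shopt -s extglob

  get_meminfo() {
      # get_meminfo <Field> [<numa-node>] -> prints the field's value
      local get=$1 node=${2:-}
      local var val _ mem
      local mem_f=/proc/meminfo
      # Per-NUMA-node statistics live under sysfs; fall back to the global
      # file otherwise (node is empty in the trace above, so /proc/meminfo).
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      mapfile -t mem <"$mem_f"
      # Per-node lines carry a "Node <N> " prefix; strip it so keys match.
      mem=("${mem[@]#Node +([0-9]) }")
      # Scan "Key: value [kB]" lines; print the value of the requested key.
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] && echo "$val" && return 0
      done < <(printf '%s\n' "${mem[@]}")
      return 1
  }

Called as get_meminfo AnonHugePages it prints 0 here, which is exactly the anon=0 captured at hugepages.sh@97 above.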
00:04:24.787 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:24.787 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:24.787 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:24.787 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:24.787 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:24.787 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:24.787 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:24.787 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:24.787 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:24.787 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:24.787 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.787 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.787 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43844428 kB' 'MemAvailable: 47351092 kB' 'Buffers: 2704 kB' 'Cached: 12217676 kB' 'SwapCached: 0 kB' 'Active: 9232988 kB' 'Inactive: 3506552 kB' 'Active(anon): 8838636 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522452 kB' 'Mapped: 169464 kB' 'Shmem: 8319476 kB' 'KReclaimable: 198764 kB' 'Slab: 570752 kB' 'SReclaimable: 198764 kB' 'SUnreclaim: 371988 kB' 'KernelStack: 12848 kB' 'PageTables: 7940 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9963948 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196032 kB' 'VmallocChunk: 0 kB' 'Percpu: 35328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1805916 kB' 'DirectMap2M: 15939584 kB' 'DirectMap1G: 51380224 kB'
[... identical common.sh@31-32 skip iterations for every non-matching field from MemTotal through HugePages_Rsvd, condensed ...]
00:04:24.789 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.789 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:24.789 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
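As a quick worked check on the snapshot data just printed (plain arithmetic, not a command from the SPDK scripts): HugePages_Total times Hugepagesize should equal the Hugetlb total, and 1024 x 2048 kB = 2097152 kB, exactly the Hugetlb value above.

  # self-consistency of the snapshot's hugepage fields
  (( 1024 * 2048 == 2097152 )) && echo 'HugePages_Total * Hugepagesize == Hugetlb'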
00:04:24.789 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:24.789 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
[... common.sh@17-31 preamble identical to the HugePages_Surp call above, now with get=HugePages_Rsvd, node=, mem_f=/proc/meminfo, condensed ...]
00:04:24.789 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43844900 kB' 'MemAvailable: 47351564 kB' 'Buffers: 2704 kB' 'Cached: 12217696 kB' 'SwapCached: 0 kB' 'Active: 9232868 kB' 'Inactive: 3506552 kB' 'Active(anon): 8838516 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522268 kB' 'Mapped: 169388 kB' 'Shmem: 8319496 kB' 'KReclaimable: 198764 kB' 'Slab: 570728 kB' 'SReclaimable: 198764 kB' 'SUnreclaim: 371964 kB' 'KernelStack: 12832 kB' 'PageTables: 7872 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9963972 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196032 kB' 'VmallocChunk: 0 kB' 'Percpu: 35328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1805916 kB' 'DirectMap2M: 15939584 kB' 'DirectMap1G: 51380224 kB'
[... identical common.sh@31-32 skip iterations for every non-matching field from MemTotal through HugePages_Free, condensed ...]
00:04:24.791 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:24.791 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:24.791 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:24.791 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:24.791 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:24.791 nr_hugepages=1024 00:04:24.791 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:24.791
resv_hugepages=0 00:04:24.791 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:24.791 surplus_hugepages=0 00:04:24.791 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:24.791 anon_hugepages=0 00:04:24.791 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:24.791 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
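The hugepages.sh@107 and @109 arithmetic tests just above assert that the kernel's hugepage pool matches what the test configured. A sketch of that accounting identity, reusing the hypothetical get_meminfo reconstruction from earlier (an assumption about the check's intent, not the verbatim SPDK code):

  # HugePages_Total must equal the requested pages plus any surplus and
  # reserved pages; here 1024 == 1024 + 0 + 0, so both checks pass.
  nr_hugepages=1024                      # pages the test configured
  surp=$(get_meminfo HugePages_Surp)     # 0 in the trace above
  resv=$(get_meminfo HugePages_Rsvd)     # 0 in the trace above
  total=$(get_meminfo HugePages_Total)   # 1024 in the snapshots
  (( total == nr_hugepages + surp + resv )) && echo 'hugepage pool consistent'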
07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.791 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.791 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.791 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.791 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.791 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.791 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.791 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.791 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.791 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.791 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.791 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.791 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.791 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.791 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.791 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.791 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.791 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.791 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.791 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.791 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.791 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.791 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.791 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.791 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.791 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.791 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.791 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.791 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.791 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.791 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.791 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.791 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.791 07:51:16 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.791 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.791 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.791 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.791 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.791 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.791 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.791 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.791 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.791 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.791 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.791 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.791 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.791 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.791 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.791 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.791 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.791 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.791 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.791 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.791 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.791 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.791 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.791 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.791 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.791 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.791 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.791 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.791 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.791 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.791 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.791 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.791 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.791 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.791 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.791 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.791 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.791 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.791 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.791 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.792 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.792 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.792 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.792 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.792 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.792 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.792 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.792 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.792 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.792 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.792 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.792 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.792 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.792 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.792 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.792 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.792 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.792 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.792 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.792 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.792 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.792 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.792 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.792 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.792 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.792 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:04:24.792 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.792 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.792 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.792 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.792 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.792 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.792 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.792 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.792 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.792 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.792 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.792 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.792 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.792 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.792 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.792 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.792 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.792 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.792 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.792 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.792 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.792 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.792 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.792 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.792 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.792 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.792 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.792 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.792 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.792 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.792 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.792 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.792 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:04:24.792 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.792 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.792 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.792 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.792 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.792 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.792 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.792 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.792 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.792 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.792 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.792 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.792 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.792 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.792 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.792 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.792 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.792 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.792 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.792 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.792 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.792 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.792 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.792 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.792 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.792 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.792 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.792 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.792 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.792 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.792 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.792 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.792 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:04:24.792 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.792 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.792 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.792 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.792 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.792 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.792 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.792 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.792 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.792 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.792 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.792 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.792 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.792 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.792 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.792 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.792 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.792 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.792 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.792 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.792 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.792 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.792 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.792 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.792 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:24.792 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:24.792 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:24.792 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:24.792 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:24.792 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:24.792 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:24.792 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:24.793 07:51:16 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:24.793 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:24.793 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:24.793 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:24.793 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:24.793 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:24.793 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:24.793 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:24.793 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0 00:04:24.793 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:24.793 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:24.793 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:24.793 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:24.793 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:24.793 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:24.793 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:24.793 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.793 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.793 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 26857764 kB' 'MemUsed: 5972120 kB' 'SwapCached: 0 kB' 'Active: 3664160 kB' 'Inactive: 110044 kB' 'Active(anon): 3553272 kB' 'Inactive(anon): 0 kB' 'Active(file): 110888 kB' 'Inactive(file): 110044 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3538588 kB' 'Mapped: 37668 kB' 'AnonPages: 238828 kB' 'Shmem: 3317656 kB' 'KernelStack: 7688 kB' 'PageTables: 4512 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 95908 kB' 'Slab: 319176 kB' 'SReclaimable: 95908 kB' 'SUnreclaim: 223268 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:24.793 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.793 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.793 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.793 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.793 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.793 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- 
# continue 00:04:24.793 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.793 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.793 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.793 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.793 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.793 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.793 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.793 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.793 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.793 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.793 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.793 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.793 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.793 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.793 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.793 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.793 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.793 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.793 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.793 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.793 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.793 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.793 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.793 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.793 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.793 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.793 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.793 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.793 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.793 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.793 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.793 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.793 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.793 
07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.793 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.793 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.793 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.793 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.793 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.793 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.793 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.793 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.793 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.793 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.793 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.793 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.793 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.793 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.793 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.793 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.793 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.793 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.793 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.793 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.793 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.793 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.793 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.793 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.793 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.793 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.793 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.793 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.793 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.793 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.793 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.793 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.793 07:51:16 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.793 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.793 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.793 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.793 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.793 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.793 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.793 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.793 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.793 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.793 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.793 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.793 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.793 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.793 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.793 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.793 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.793 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.793 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.793 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.793 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.793 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.793 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.793 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.793 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.793 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.793 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.793 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.794 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.794 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.794 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.794 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.794 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:24.794 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.794 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.794 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.794 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.794 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.794 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.794 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.794 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.794 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.794 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.794 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.794 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.794 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.794 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.794 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.794 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.794 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.794 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.794 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.794 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.794 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.794 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.794 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.794 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.794 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.794 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.794 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.794 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.794 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.794 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.794 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.794 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.794 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
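A note on how these comparisons are rendered: the backslash runs like \H\u\g\e\P\a\g\e\s\_\S\u\r\p are not corruption. With xtrace enabled, bash backslash-escapes an expanded, unquoted right-hand side of == inside [[ ]] so the trace replays as a literal string match rather than a glob pattern. A two-line demo (illustrative only, not part of the test scripts):

    get=HugePages_Surp
    set -x
    [[ MemFree == $get ]]   # traced as: [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
    set +x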
00:04:24.794 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.794 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.794 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.794 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.794 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.794 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.794 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.794 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:24.794 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:24.794 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:24.794 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:24.794 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:24.794 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:24.794 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:24.794 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:04:24.794 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:24.794 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:24.794 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:24.794 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:24.794 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:24.794 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:24.794 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:24.794 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.794 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.794 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 16986884 kB' 'MemUsed: 10724940 kB' 'SwapCached: 0 kB' 'Active: 5568756 kB' 'Inactive: 3396508 kB' 'Active(anon): 5285292 kB' 'Inactive(anon): 0 kB' 'Active(file): 283464 kB' 'Inactive(file): 3396508 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8681860 kB' 'Mapped: 131720 kB' 'AnonPages: 283512 kB' 'Shmem: 5001888 kB' 'KernelStack: 5176 kB' 'PageTables: 3472 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 102856 kB' 'Slab: 251552 kB' 'SReclaimable: 102856 kB' 'SUnreclaim: 148696 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 
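The printf entry above is the node1 meminfo snapshot being fed to get_meminfo in setup/common.sh, the helper whose xtrace dominates this log. Reconstructed from the trace, it snapshots /proc/meminfo, or /sys/devices/system/node/node<N>/meminfo when a node argument is given, strips the per-node "Node <N> " prefix, then scans field by field until the requested key matches and echoes its value. A minimal sketch under those assumptions (the real helper may differ in detail):

    shopt -s extglob                             # needed for the +([0-9]) strip below

    get_meminfo() {                              # e.g. get_meminfo HugePages_Surp 1
        local get=$1 node=${2:-}
        local var val
        local mem_f mem
        mem_f=/proc/meminfo
        # Per-node counters live in sysfs; prefer them when a node was given.
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        # Node files prefix every line with "Node <N> "; strip that prefix.
        mem=("${mem[@]#Node +([0-9]) }")
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue     # the long [[ ... ]] runs seen here
            echo "$val"                          # e.g. 1024 for HugePages_Total
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

On this box it yields HugePages_Total=1024 globally and HugePages_Surp=0 on each node, which is exactly what the surrounding hugepages.sh checks consume.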
00:04:24.794 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.794 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.794 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.794 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.794 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.794 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.794 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.794 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.794 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.794 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.794 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.794 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.794 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.794 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.794 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.794 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.794 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.794 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.794 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.794 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.794 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.794 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.794 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.794 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.794 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.794 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.794 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.794 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.794 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.794 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.794 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.794 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.794 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.794 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.794 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.794 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.794 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.795 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.795 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.795 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.795 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.795 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.795 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.795 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.795 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.795 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.795 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.795 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.795 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.795 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.795 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.795 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.795 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.795 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.795 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.795 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.795 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.795 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.795 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.795 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.795 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.795 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.795 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.795 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.795 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.795 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:04:24.795 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.795 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.795 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.795 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.795 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.795 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.795 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.795 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.795 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.795 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.795 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.795 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.795 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.795 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.795 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.795 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.795 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.795 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.795 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.795 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.795 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.795 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.795 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.795 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.795 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.795 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.795 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.795 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.795 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.795 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.795 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.795 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.795 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.795 07:51:16 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.795 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.795 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.795 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.795 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.795 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.795 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.795 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.795 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.795 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.795 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.795 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.795 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.795 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.795 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.795 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.795 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.795 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.795 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.795 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.795 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.795 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.795 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.795 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.795 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.795 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.795 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.795 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.795 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.795 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.795 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.795 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.795 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.795 07:51:16 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.795 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.795 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.795 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.795 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.795 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.795 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.795 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.795 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.795 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:24.795 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:24.795 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:24.795 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:24.795 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:24.795 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:24.795 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:24.795 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:24.795 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:24.795 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:24.795 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:24.795 node0=512 expecting 512 00:04:24.795 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:24.795 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:24.795 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:25.056 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:25.056 node1=512 expecting 512 00:04:25.056 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:25.056 00:04:25.056 real 0m1.345s 00:04:25.056 user 0m0.573s 00:04:25.056 sys 0m0.730s 00:04:25.056 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:25.056 07:51:16 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:25.056 ************************************ 00:04:25.056 END TEST per_node_1G_alloc 00:04:25.056 ************************************ 00:04:25.056 07:51:16 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:25.056 07:51:16 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:04:25.056 07:51:16 setup.sh.hugepages -- 
00:04:25.056 07:51:16 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:04:25.056 07:51:16 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:04:25.056 ************************************
00:04:25.056 START TEST even_2G_alloc
00:04:25.056 ************************************
00:04:25.056 07:51:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:04:25.056 07:51:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:04:25.056 07:51:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:25.056 07:51:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:25.056 07:51:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
[xtrace condensed: no user-pinned nodes (the @62-74 checks are all no-ops), so the @81-84 loop walks _no_nodes down from 2 and assigns nodes_test[1]=512, then nodes_test[0]=512]
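Before the test body runs, get_test_nr_hugepages converts the 2097152 kB request into 1024 default-sized (2048 kB) pages and splits them evenly across both NUMA nodes. A hypothetical re-derivation of that arithmetic; variable names mirror the trace, but this is a sketch, not the script itself:

    #!/usr/bin/env bash
    # Sketch of the sizing traced above; a reimplementation, not setup/hugepages.sh.
    size=2097152                                  # requested amount in kB (2 GiB)
    default_hugepages=2048                        # Hugepagesize from /proc/meminfo, in kB
    nr_hugepages=$(( size / default_hugepages ))  # -> 1024 pages

    total_nodes=2                                 # _no_nodes in the trace
    declare -a nodes_test
    _no_nodes=$total_nodes
    while (( _no_nodes > 0 )); do
        # Walk the node indices downwards, giving each an equal share,
        # mirroring nodes_test[_no_nodes - 1]=512 in the log.
        nodes_test[_no_nodes - 1]=$(( nr_hugepages / total_nodes ))
        (( _no_nodes-- ))
    done
    echo "nr_hugepages=$nr_hugepages per-node: ${nodes_test[*]}"   # 1024, 512 512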
00:04:25.056 07:51:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:04:25.056 07:51:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:04:25.056 07:51:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output
00:04:25.056 07:51:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:04:25.996 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
[output condensed: 0000:00:04.0 through 0000:00:04.7 and 0000:80:04.0 through 0000:80:04.7 (8086 0e20-0e27) likewise report "Already using the vfio-pci driver"]
00:04:26.261 07:51:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages
[xtrace condensed: hugepages.sh@89-94 declare the locals node, sorted_t, sorted_s, surp, resv, anon]
00:04:26.261 07:51:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
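The @96 test above is a transparent-hugepage guard: it reads the bracketed mode out of /sys/kernel/mm/transparent_hugepage/enabled ("always [madvise] never" on this box) and only proceeds to count AnonHugePages when THP is not pinned to never. A stand-alone sketch of the same check:

    #!/usr/bin/env bash
    # Sketch of the THP guard; the [bracketed] word is the kernel's active mode.
    thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled 2>/dev/null)
    if [[ $thp != *"[never]"* ]]; then
        # THP can hand out anonymous hugepages, so AnonHugePages is worth reading.
        echo "THP enabled mode: $thp"
    fi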
00:04:26.261 07:51:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
[xtrace condensed: common.sh@17-29 set get=AnonHugePages, leave node unset so mem_f=/proc/meminfo, mapfile the file into mem[] and strip any "Node N " prefix]
00:04:26.261 07:51:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43843228 kB' 'MemAvailable: 47349892 kB' 'Buffers: 2704 kB' 'Cached: 12217816 kB' 'SwapCached: 0 kB' 'Active: 9234796 kB' 'Inactive: 3506552 kB' 'Active(anon): 8840444 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 524068 kB' 'Mapped: 169432 kB' 'Shmem: 8319616 kB' 'KReclaimable: 198764 kB' 'Slab: 570528 kB' 'SReclaimable: 198764 kB' 'SUnreclaim: 371764 kB' 'KernelStack: 12912 kB' 'PageTables: 8200 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9963864 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196256 kB' 'VmallocChunk: 0 kB' 'Percpu: 35328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1805916 kB' 'DirectMap2M: 15939584 kB' 'DirectMap1G: 51380224 kB'
[xtrace condensed: the common.sh@31-32 read loop continues past every non-matching key from MemTotal through HardwareCorrupted]
00:04:26.262 07:51:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:26.262 07:51:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:26.262 07:51:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
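Each of these long key-by-key runs is the same routine: get_meminfo snapshots a meminfo file and scans it until the requested key matches, echoing the value (here AnonHugePages -> 0). A condensed sketch of what the traced common.sh logic amounts to; this is a reconstruction from the xtrace, not the script verbatim:

    #!/usr/bin/env bash
    # Reconstruction of the traced get_meminfo flow (common.sh@17-33).
    shopt -s extglob
    get_meminfo() {
        local get=$1 node=${2:-} mem_f=/proc/meminfo
        local -a mem
        local var val _ line
        # A node argument switches to that node's own meminfo file.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # strip the "Node N " prefix, if any
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
            # Non-matching keys are simply skipped: the wall of
            # "continue" iterations condensed in the log above.
        done
        return 1
    }
    get_meminfo AnonHugePages    # prints 0 on this host, as traced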
00:04:26.262 07:51:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0
00:04:26.262 07:51:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
[xtrace condensed: common.sh@17-29 run again with get=HugePages_Surp, node unset, against a fresh /proc/meminfo snapshot]
00:04:26.263 07:51:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' ... [second snapshot; identical to the first except MemFree: 43843924 kB, MemAvailable: 47350588 kB, Cached: 12217824 kB, Active: 9234372 kB, Active(anon): 8840020 kB, AnonPages: 523624 kB, Mapped: 169384 kB, Shmem: 8319624 kB, Slab: 570568 kB, SUnreclaim: 371804 kB, KernelStack: 12896 kB, PageTables: 8116 kB, Committed_AS: 9963880 kB, VmallocUsed: 196160 kB]
[xtrace condensed: the read loop again continues past every non-matching key, this time all the way from MemTotal through HugePages_Rsvd]
00:04:26.264 07:51:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:26.264 07:51:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:26.264 07:51:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:26.264 07:51:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0
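At this point verify_nr_hugepages has anon=0 and surp=0 banked and is about to fetch HugePages_Rsvd; these values feed the page accounting that the per-node checks rest on. A rough sketch of that bookkeeping, using the get_meminfo sketch above (the exact comparisons live later in hugepages.sh and are assumptions here):

    # Rough sketch of the accounting being assembled; illustrative only.
    anon=$(get_meminfo AnonHugePages)      # 0 in this run
    surp=$(get_meminfo HugePages_Surp)     # 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)     # the scan cut off below is fetching this
    total=$(get_meminfo HugePages_Total)   # 1024 per the snapshots
    # Net of surplus pages, the pool should match the requested NRHUGE=1024.
    echo "pool: $(( total - surp )) pages (expecting 1024), reserved: $resv"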
00:04:26.264 07:51:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
[xtrace condensed: common.sh@17-29 run a third time with get=HugePages_Rsvd, node unset, mem_f=/proc/meminfo]
00:04:26.264 07:51:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' ... [third snapshot; identical to the first except MemFree: 43843692 kB, MemAvailable: 47350356 kB, Cached: 12217824 kB, Active: 9232856 kB, Active(anon): 8838504 kB, AnonPages: 522064 kB, Mapped: 169364 kB, Shmem: 8319624 kB, Slab: 570568 kB, SUnreclaim: 371804 kB, KernelStack: 12848 kB, PageTables: 7920 kB, Committed_AS: 9964040 kB, VmallocUsed: 196176 kB]
[xtrace condensed: the read loop is skipping non-matching keys again; the captured log breaks off mid-scan at the Percpu comparison]
00:04:26.266 07:51:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:26.266 07:51:17
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.266 07:51:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.266 07:51:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.266 07:51:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.266 07:51:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.266 07:51:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.266 07:51:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.266 07:51:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.266 07:51:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.266 07:51:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.266 07:51:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.266 07:51:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.266 07:51:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.266 07:51:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.266 07:51:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.266 07:51:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.266 07:51:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.266 07:51:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.266 07:51:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.266 07:51:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.266 07:51:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.266 07:51:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.266 07:51:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.266 07:51:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.266 07:51:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.266 07:51:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.266 07:51:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.266 07:51:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.266 07:51:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.266 07:51:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.266 07:51:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.266 07:51:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.266 07:51:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.266 07:51:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.266 07:51:17 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:26.266 07:51:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.266 07:51:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.266 07:51:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.266 07:51:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.266 07:51:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.266 07:51:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.266 07:51:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.266 07:51:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.266 07:51:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.266 07:51:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.266 07:51:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.266 07:51:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.266 07:51:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.266 07:51:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:26.266 07:51:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:26.266 07:51:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:26.266 07:51:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:26.266 nr_hugepages=1024 00:04:26.266 07:51:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:26.266 resv_hugepages=0 00:04:26.266 07:51:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:26.266 surplus_hugepages=0 00:04:26.266 07:51:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:26.266 anon_hugepages=0 00:04:26.266 07:51:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:26.266 07:51:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:26.266 07:51:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:26.266 07:51:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:26.266 07:51:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:26.266 07:51:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:26.266 07:51:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:26.266 07:51:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:26.266 07:51:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:26.266 07:51:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:26.266 07:51:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:26.266 07:51:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 
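The pass above is one complete call into the repo's get_meminfo helper: snapshot a meminfo file into an array, strip any "Node N " prefix, then walk the keys with IFS=': ' read, continuing past every non-match until the requested counter is found. A minimal sketch of that pattern, reconstructed from the xtrace rather than copied from setup/common.sh (the name get_meminfo_sketch is made up for illustration):

  shopt -s extglob # the +([0-9]) pattern below needs extglob, as in the traced script

  # Reconstruction of the scan seen in the trace above; not the verbatim helper.
  get_meminfo_sketch() {
      local get=$1 node=${2:-}
      local mem_f=/proc/meminfo mem var val _ line
      # Per-node counters live in sysfs, and each of those lines carries a "Node N " prefix.
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      mapfile -t mem <"$mem_f"
      mem=("${mem[@]#Node +([0-9]) }") # drop the per-node prefix, if present
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<<"$line"
          [[ $var == "$get" ]] || continue # the continue-per-key walk in the log
          echo "$val"                      # e.g. "0" for HugePages_Rsvd here
          return 0
      done
      return 1
  }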
00:04:26.266 07:51:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:26.266 07:51:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:26.266 07:51:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:04:26.266 07:51:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:26.266 07:51:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:26.266 07:51:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:26.266 07:51:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:26.266 07:51:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:26.266 07:51:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:26.266 07:51:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:26.266 07:51:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:26.266 07:51:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:26.266 07:51:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43843692 kB' 'MemAvailable: 47350356 kB' 'Buffers: 2704 kB' 'Cached: 12217828 kB' 'SwapCached: 0 kB' 'Active: 9233240 kB' 'Inactive: 3506552 kB' 'Active(anon): 8838888 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522472 kB' 'Mapped: 169364 kB' 'Shmem: 8319628 kB' 'KReclaimable: 198764 kB' 'Slab: 570568 kB' 'SReclaimable: 198764 kB' 'SUnreclaim: 371804 kB' 'KernelStack: 12880 kB' 'PageTables: 8032 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9964428 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196192 kB' 'VmallocChunk: 0 kB' 'Percpu: 35328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1805916 kB' 'DirectMap2M: 15939584 kB' 'DirectMap1G: 51380224 kB'
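For orientation, the counters just dumped can be pulled out with the same scan; a hypothetical session using the sketch defined earlier, with outputs that follow from the dump above:

  # Hedged usage example of get_meminfo_sketch against this host's counters.
  get_meminfo_sketch HugePages_Total   # -> 1024
  get_meminfo_sketch Hugepagesize      # -> 2048 (kB, i.e. 2 MiB default pages)
  get_meminfo_sketch HugePages_Rsvd    # -> 0 (the resv computed earlier)
  get_meminfo_sketch HugePages_Surp 0  # -> 0, read from node0's sysfs meminfo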
00:04:26.268 07:51:17 setup.sh.hugepages.even_2G_alloc -- [xtrace elided: the read loop walks every key of the dump above, from MemTotal through Unaccepted, continuing past each non-match for HugePages_Total]
00:04:26.268 07:51:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:26.268 07:51:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024
00:04:26.268 07:51:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:26.268 07:51:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:26.268 07:51:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:26.268 07:51:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node
00:04:26.268 07:51:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:26.268 07:51:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:26.268 07:51:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:26.528 07:51:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:26.528 07:51:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:26.528 07:51:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
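The get_nodes step just traced enumerates NUMA nodes with an extglob on /sys/devices/system/node and records the per-node allocation target of 512 pages. A rough equivalent inferred from the trace (variable names follow the xtrace; this is not the verbatim setup/hugepages.sh):

  shopt -s extglob nullglob # extglob for +([0-9]); nullglob so a missing match expands to nothing

  # Enumerate NUMA nodes the same way the traced loop does (reconstruction).
  declare -a nodes_sys=()
  for node in /sys/devices/system/node/node+([0-9]); do
      nodes_sys[${node##*node}]=512 # even split: 512 pages expected per node here
  done
  no_nodes=${#nodes_sys[@]}
  ((no_nodes > 0)) || echo "no NUMA nodes visible" >&2
  echo "no_nodes=$no_nodes" # -> no_nodes=2 on this GP11 box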
00:04:26.528 07:51:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:26.528 07:51:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:26.528 07:51:17 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:26.528 07:51:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:26.528 07:51:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0
00:04:26.528 07:51:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:26.528 07:51:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:26.528 07:51:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:26.528 07:51:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:26.528 07:51:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:26.528 07:51:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:26.528 07:51:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:26.528 07:51:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:26.528 07:51:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:26.528 07:51:17 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 26854012 kB' 'MemUsed: 5975872 kB' 'SwapCached: 0 kB' 'Active: 3664312 kB' 'Inactive: 110044 kB' 'Active(anon): 3553424 kB' 'Inactive(anon): 0 kB' 'Active(file): 110888 kB' 'Inactive(file): 110044 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3538648 kB' 'Mapped: 37676 kB' 'AnonPages: 238840 kB' 'Shmem: 3317716 kB' 'KernelStack: 7688 kB' 'PageTables: 4548 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 95908 kB' 'Slab: 319120 kB' 'SReclaimable: 95908 kB' 'SUnreclaim: 223212 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:04:26.528 07:51:17 setup.sh.hugepages.even_2G_alloc -- [xtrace elided: the read loop walks node0's keys from MemTotal through HugePages_Free, continuing past each non-match for HugePages_Surp]
00:04:26.529 07:51:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:26.529 07:51:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:26.529 07:51:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:26.529 07:51:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:26.529 07:51:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:26.529 07:51:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:26.529 07:51:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:04:26.529 07:51:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:26.529 07:51:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1
00:04:26.529 07:51:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:26.529 07:51:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:26.529 07:51:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:26.529 07:51:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:04:26.529 07:51:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:04:26.529 07:51:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:26.529 07:51:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:26.529 07:51:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:26.529 07:51:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:26.529 07:51:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 16990552 kB' 'MemUsed: 10721272 kB' 'SwapCached: 0 kB' 'Active: 5569172 kB' 'Inactive: 3396508 kB' 'Active(anon): 5285708 kB' 'Inactive(anon): 0 kB' 'Active(file): 283464 kB' 'Inactive(file): 3396508 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8681960 kB' 'Mapped: 131732 kB' 'AnonPages: 283848 kB' 'Shmem: 5001988 kB' 'KernelStack: 5192 kB' 'PageTables: 3456 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 102856 kB' 'Slab: 251448 kB' 'SReclaimable: 102856 kB' 'SUnreclaim: 148592 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.530 07:51:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.530 07:51:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.530 07:51:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.530 07:51:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.530 07:51:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.530 07:51:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.530 07:51:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.530 07:51:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.530 07:51:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.530 07:51:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.530 07:51:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.530 07:51:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.530 07:51:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.530 07:51:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.530 07:51:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.530 07:51:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.530 07:51:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.530 07:51:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.530 07:51:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.530 07:51:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.530 07:51:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.530 07:51:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.530 07:51:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.530 07:51:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.530 07:51:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.530 07:51:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.530 07:51:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.530 07:51:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.530 07:51:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.530 07:51:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.530 07:51:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.530 07:51:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.530 07:51:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.530 07:51:18 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.530 07:51:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.530 07:51:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.530 07:51:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.530 07:51:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.530 07:51:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.530 07:51:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.530 07:51:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.530 07:51:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.530 07:51:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.530 07:51:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.530 07:51:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.530 07:51:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.530 07:51:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.530 07:51:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.530 07:51:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.530 07:51:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.530 07:51:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.530 07:51:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.530 07:51:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.530 07:51:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.530 07:51:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.530 07:51:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.530 07:51:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.530 07:51:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.530 07:51:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.530 07:51:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.530 07:51:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.530 07:51:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.530 07:51:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.530 07:51:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.530 07:51:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.530 07:51:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.530 07:51:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.530 07:51:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:26.530 07:51:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.530 07:51:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.530 07:51:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.530 07:51:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.530 07:51:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.530 07:51:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.530 07:51:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.530 07:51:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.530 07:51:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.530 07:51:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.530 07:51:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.530 07:51:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.530 07:51:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.530 07:51:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.530 07:51:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.530 07:51:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.530 07:51:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.530 07:51:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.530 07:51:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.530 07:51:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.530 07:51:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.530 07:51:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.530 07:51:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.530 07:51:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.530 07:51:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.530 07:51:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.530 07:51:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.530 07:51:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.530 07:51:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.530 07:51:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:26.530 07:51:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:26.530 07:51:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:26.530 07:51:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.530 07:51:18 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
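Read top to bottom, the common.sh@17-@31 records above walk through the whole get_meminfo helper: pick /proc/meminfo or the per-node sysfs copy, strip the "Node N " prefix that sysfs prepends, then split each line on ': ' and scan until the requested field matches. A compact reconstruction from the trace (a sketch, not the verbatim SPDK setup/common.sh; the extglob toggle and the explicit for/read loop shape are assumptions):

  get_meminfo() {                        # e.g. get_meminfo HugePages_Surp 1
      local get=$1 node=${2:-}
      local var val
      local mem_f=/proc/meminfo mem
      # Per-node queries read the sysfs copy instead of the global file;
      # with an empty node this probes .../node/node/meminfo, as traced.
      [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
          mem_f=/sys/devices/system/node/node$node/meminfo
      mapfile -t mem <"$mem_f"
      shopt -s extglob                   # assumed: +([0-9]) needs extglob
      mem=("${mem[@]#Node +([0-9]) }")   # drop the "Node 1 " sysfs prefix
      local IFS=': ' line
      for line in "${mem[@]}"; do
          read -r var val _ <<<"$line"   # "HugePages_Surp:  0" -> var, val
          [[ $var == "$get" ]] || continue
          echo "$val"
          return 0
      done
      return 1
  }

Run against the node1 snapshot above, get_meminfo HugePages_Surp 1 prints 0, which is exactly what the @33 echo just returned.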
00:04:26.531 07:51:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:26.531 07:51:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:26.531 07:51:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:26.531 07:51:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:26.531 07:51:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:04:26.531 node0=512 expecting 512
00:04:26.531 07:51:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:26.531 07:51:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:26.531 07:51:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:26.531 07:51:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:04:26.531 node1=512 expecting 512
00:04:26.531 07:51:18 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:04:26.531
00:04:26.531 real 0m1.471s
00:04:26.531 user 0m0.599s
00:04:26.531 sys 0m0.822s
00:04:26.531 07:51:18 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:26.531 07:51:18 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x
00:04:26.531 ************************************
00:04:26.531 END TEST even_2G_alloc
00:04:26.531 ************************************
00:04:26.531 07:51:18 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:04:26.531 07:51:18 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:04:26.531 07:51:18 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:04:26.531 07:51:18 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:26.531 07:51:18 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:26.531 ************************************
00:04:26.531 START TEST odd_alloc
00:04:26.531 ************************************
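Before the odd_alloc body starts tracing, the shape of the test is worth spelling out: get_test_nr_hugepages 2098176 turns a 2098176 kB request into 1025 pages of 2048 kB (the later snapshots' 'Hugetlb: 2099200 kB' is exactly 1025 * 2048), and get_test_nr_hugepages_per_node then has to spread that odd count over two NUMA nodes. A minimal sketch of the split, matching the 512/513 assignments traced below (the loop shape is inferred from the trace, not copied from setup/hugepages.sh):

  _nr_hugepages=1025        # 2098176 kB / 2048 kB per page, rounded up
  _no_nodes=2
  nodes_test=()
  while ((_no_nodes > 0)); do
      # Fill from the highest-numbered node down; integer division
      # leaves the odd remainder for node 0.
      nodes_test[_no_nodes - 1]=$((_nr_hugepages / _no_nodes))
      _nr_hugepages=$((_nr_hugepages - nodes_test[_no_nodes - 1]))
      _no_nodes=$((_no_nodes - 1))
  done
  echo "node0=${nodes_test[0]} node1=${nodes_test[1]}"   # node0=513 node1=512

The HUGEMEM=2049 and HUGE_EVEN_ALLOC=yes exports traced next hand this target to scripts/setup.sh, which performs the actual reservation.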
00:04:26.531 07:51:18 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc
00:04:26.531 07:51:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:04:26.531 07:51:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176
00:04:26.531 07:51:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:26.531 07:51:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:26.531 07:51:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:04:26.531 07:51:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:26.531 07:51:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:26.531 07:51:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:26.531 07:51:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:04:26.531 07:51:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:26.531 07:51:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:26.531 07:51:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:26.531 07:51:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:26.531 07:51:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:26.531 07:51:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:26.531 07:51:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:04:26.531 07:51:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513
00:04:26.531 07:51:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1
00:04:26.531 07:51:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:26.531 07:51:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513
00:04:26.531 07:51:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0
00:04:26.531 07:51:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0
00:04:26.531 07:51:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:26.531 07:51:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:04:26.531 07:51:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:04:26.531 07:51:18 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output
00:04:26.531 07:51:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:26.531 07:51:18 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:04:27.466 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:27.466 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:27.466 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:27.466 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:27.466 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:27.466 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:27.466 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:27.466 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:27.466 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:27.466 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:27.466 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:27.466 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:27.466 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:27.466 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:27.466 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:27.466 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:27.466 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:27.729 07:51:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:04:27.729 07:51:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node
00:04:27.729 07:51:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:27.729 07:51:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:27.729 07:51:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:27.729 07:51:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:27.729 07:51:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:27.729 07:51:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
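One reading aid for the scans that follow (and for the THP guard just above): inside [[ ]], the right-hand side of == is a glob pattern, and when that operand comes from a quoted expansion bash's xtrace prints it with every character backslash-escaped to mark it as a literal match. That is why HugePages_Surp keeps appearing as \H\u\g\e\P\a\g\e\s\_\S\u\r\p, and why the guard shows *\[\n\e\v\e\r\]*. A short way to reproduce the effect (hypothetical snippet, not from the SPDK tree):

  set -x
  get=HugePages_Surp
  [[ MemTotal == "$get" ]] || echo 'no match'
  # xtrace prints: [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
  set +x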
'VmallocChunk: 0 kB' 'Percpu: 35328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1805916 kB' 'DirectMap2M: 15939584 kB' 'DirectMap1G: 51380224 kB' 00:04:27.730 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.730 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.730 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.730 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.730 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.730 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.730 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.730 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.730 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.730 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.730 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.730 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.730 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.730 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.730 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.730 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.730 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.730 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.730 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.730 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.730 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.730 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.730 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.730 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.730 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.730 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.730 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.730 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.730 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.730 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.730 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.730 07:51:19 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:27.730 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.730 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.730 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.730 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.730 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.730 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.730 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.730 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.730 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.730 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.730 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.730 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.730 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.730 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.730 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.730 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.730 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.730 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.730 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.730 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.730 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.730 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.730 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.730 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.730 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.730 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.730 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.730 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.730 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.730 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.730 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.730 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.730 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.730 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.730 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.730 07:51:19 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:04:27.730 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.730 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.730 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.730 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.730 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.730 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.730 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.730 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.730 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.730 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.730 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.730 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.730 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.730 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.730 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.730 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.730 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.730 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.730 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.730 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.730 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.730 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.730 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.730 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.730 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.730 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.730 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.730 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.730 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.730 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.730 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.730 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.730 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.730 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.730 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.730 07:51:19 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:27.730 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.730 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.730 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.730 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.730 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.730 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.730 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.730 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.730 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.730 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.730 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.730 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.730 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.730 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.730 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.730 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.730 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.731 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.731 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.731 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.731 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.731 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.731 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.731 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.731 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.731 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.731 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.731 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.731 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.731 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.731 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.731 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.731 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.731 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.731 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.731 07:51:19 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.731 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.731 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.731 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.731 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.731 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.731 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.731 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.731 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.731 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.731 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.731 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.731 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.731 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.731 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.731 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.731 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.731 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.731 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.731 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.731 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.731 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:27.731 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:27.731 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:27.731 07:51:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:27.731 07:51:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:27.731 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:27.731 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:27.731 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:27.731 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:27.731 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:27.731 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:27.731 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:27.731 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:27.731 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:27.731 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.731 
07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.731 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43853132 kB' 'MemAvailable: 47359796 kB' 'Buffers: 2704 kB' 'Cached: 12217956 kB' 'SwapCached: 0 kB' 'Active: 9229792 kB' 'Inactive: 3506552 kB' 'Active(anon): 8835440 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 518944 kB' 'Mapped: 168528 kB' 'Shmem: 8319756 kB' 'KReclaimable: 198764 kB' 'Slab: 570504 kB' 'SReclaimable: 198764 kB' 'SUnreclaim: 371740 kB' 'KernelStack: 12800 kB' 'PageTables: 7588 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609856 kB' 'Committed_AS: 9950432 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196096 kB' 'VmallocChunk: 0 kB' 'Percpu: 35328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1805916 kB' 'DirectMap2M: 15939584 kB' 'DirectMap1G: 51380224 kB' 00:04:27.731 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.731 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.731 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.731 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.731 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.731 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.731 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.731 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.731 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.731 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.731 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.731 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.731 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.731 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.731 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.731 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.731 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.731 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.731 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.731 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.731 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:27.731 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.731 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.731 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.731 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.731 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.731 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.731 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.731 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.731 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.731 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.731 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.731 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.731 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.731 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.731 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.731 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.731 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.731 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.731 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.731 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.731 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.731 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.731 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.731 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.731 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.731 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.731 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.731 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.731 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.731 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.731 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.731 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.731 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.731 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.731 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.731 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.731 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.731 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.731 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.731 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.731 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.731 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.731 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.732 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.732 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.732 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.732 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.732 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.732 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.732 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.732 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.732 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.732 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.732 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.732 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.732 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.732 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.732 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.732 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.732 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.732 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.732 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.732 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.732 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.732 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.732 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.732 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.732 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.732 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.732 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.732 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.732 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.732 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.732 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.732 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.732 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.732 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.732 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.732 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.732 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.732 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.732 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.732 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.732 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.732 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.732 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.732 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.732 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.732 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.732 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.732 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.732 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.732 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.732 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.732 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.732 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.732 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.732 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.732 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.732 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.732 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.732 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.732 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.732 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.732 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.732 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.732 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.732 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
00:04:27.732 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- # [19 further /proc/meminfo keys, WritebackTmp through HugePages_Rsvd, each compared against \H\u\g\e\P\a\g\e\s\_\S\u\r\p and skipped via continue]
00:04:27.733 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:27.733 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:27.733 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:27.733 07:51:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0
00:04:27.733 07:51:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:27.733 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:27.733 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:04:27.733 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:27.733 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:27.733 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:27.733 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:27.733 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:27.733 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:27.733 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:27.733 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:27.733 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:27.733 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43854216 kB' 'MemAvailable: 47360880 kB' 'Buffers: 2704 kB' 'Cached: 12217972 kB' 'SwapCached: 0 kB' 'Active: 9229836 kB' 'Inactive: 3506552 kB' 'Active(anon): 8835484 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 518884 kB' 'Mapped: 168528 kB' 'Shmem: 8319772 kB' 'KReclaimable: 198764 kB' 'Slab: 570500 kB' 'SReclaimable: 198764 kB' 'SUnreclaim: 371736 kB' 'KernelStack: 12800 kB' 'PageTables: 7596 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609856 kB' 'Committed_AS: 9950452 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196080 kB' 'VmallocChunk: 0 kB' 'Percpu: 35328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1805916 kB' 'DirectMap2M: 15939584 kB' 'DirectMap1G: 51380224 kB'
00:04:27.733 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- # [each /proc/meminfo key from MemTotal through HugePages_Free compared against \H\u\g\e\P\a\g\e\s\_\R\s\v\d and skipped via continue]
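For readability, here is a minimal sketch of what the get_meminfo helper traced above appears to do, reconstructed only from the trace itself (the locals at setup/common.sh@17-20, the mem_f fallback at @22-24, the Node-prefix strip at @29, and the IFS=': ' read loop at @31-33). The real setup/common.sh may differ in detail; this is not the authoritative SPDK source.

  #!/usr/bin/env bash
  # Reconstructed sketch of setup/common.sh:get_meminfo, based solely on the
  # xtrace above.
  shopt -s extglob

  get_meminfo() {
      local get=$1      # meminfo key to look up, e.g. HugePages_Surp
      local node=$2     # optional NUMA node id; empty means system-wide
      local var val
      local mem_f mem
      mem_f=/proc/meminfo
      # Prefer the per-node file when a node id was given and it exists.
      if [[ -e /sys/devices/system/node/node$node/meminfo ]] && [[ -n $node ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      mapfile -t mem < "$mem_f"
      # Per-node meminfo prefixes every line with "Node <N> "; strip it.
      mem=("${mem[@]#Node +([0-9]) }")
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue   # the key-by-key walk in the trace
          echo "$val"
          return 0
      done < <(printf '%s\n' "${mem[@]}")
      return 1
  }

Called as in the trace, get_meminfo HugePages_Surp walks every key of the snapshot above and prints 0; get_meminfo HugePages_Surp 0 reads node0's file instead.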
00:04:27.735 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:27.735 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:27.735 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:27.735 07:51:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:27.735 07:51:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:04:27.735 nr_hugepages=1025
00:04:27.735 07:51:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:27.735 resv_hugepages=0
00:04:27.735 07:51:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:27.735 surplus_hugepages=0
00:04:27.735 07:51:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:27.735 anon_hugepages=0
00:04:27.735 07:51:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:04:27.735 07:51:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
00:04:27.735 07:51:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:27.735 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:27.735 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:04:27.735 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:27.735 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:27.735 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:27.735 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:27.735 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:27.735 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:27.735 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:27.735 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:27.735 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:27.735 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43849936 kB' 'MemAvailable: 47356600 kB' 'Buffers: 2704 kB' 'Cached: 12217992 kB' 'SwapCached: 0 kB' 'Active: 9230144 kB' 'Inactive: 3506552 kB' 'Active(anon): 8835792 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 519232 kB' 'Mapped: 168528 kB' 'Shmem: 8319792 kB' 'KReclaimable: 198764 kB' 'Slab: 570500 kB' 'SReclaimable: 198764 kB' 'SUnreclaim: 371736 kB' 'KernelStack: 12832 kB' 'PageTables: 7704 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609856 kB' 'Committed_AS: 9952852 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196080 kB' 'VmallocChunk: 0 kB' 'Percpu: 35328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1805916 kB' 'DirectMap2M: 15939584 kB' 'DirectMap1G: 51380224 kB'
00:04:27.736 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- # [each /proc/meminfo key from MemTotal through Unaccepted compared against \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l and skipped via continue]
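The @99-110 sequence above is the accounting step of the odd_alloc test: surplus and reserved counts are fetched and echoed, and the kernel-reported pool must equal the requested size plus both. A minimal sketch of that check, assuming the get_meminfo sketch given earlier (the inline values are the ones observed in this run):

  # Hypothetical restatement of the setup/hugepages.sh@99-110 accounting.
  surp=$(get_meminfo HugePages_Surp)    # 0 in this run
  resv=$(get_meminfo HugePages_Rsvd)    # 0 in this run
  nr_hugepages=1025                     # the odd allocation requested by the test
  echo "nr_hugepages=$nr_hugepages"
  echo "resv_hugepages=$resv"
  echo "surplus_hugepages=$surp"
  total=$(get_meminfo HugePages_Total)  # 1025 in this run
  # The pool the kernel reports must be exactly what was asked for,
  # adjusted by surplus and reserved pages.
  (( total == nr_hugepages + surp + resv )) || exit 1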
00:04:27.736 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:27.736 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025
00:04:27.736 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:27.736 07:51:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
00:04:27.736 07:51:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:27.736 07:51:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node
00:04:27.736 07:51:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:27.736 07:51:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:27.736 07:51:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:27.736 07:51:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513
00:04:27.736 07:51:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:27.736 07:51:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:27.736 07:51:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:27.736 07:51:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:27.736 07:51:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:27.736 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:27.736 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0
00:04:27.736 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:27.736 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:27.736 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:27.736 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:27.736 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:27.736 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:27.736 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:27.736 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:27.736 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:27.736 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 26852312 kB' 'MemUsed: 5977572 kB' 'SwapCached: 0 kB' 'Active: 3661604 kB' 'Inactive: 110044 kB' 'Active(anon): 3550716 kB' 'Inactive(anon): 0 kB' 'Active(file): 110888 kB' 'Inactive(file): 110044 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3538720 kB' 'Mapped: 36924 kB' 'AnonPages: 236092 kB' 'Shmem: 3317788 kB' 'KernelStack: 7640 kB' 'PageTables: 4256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 95908 kB' 'Slab: 319068 kB' 'SReclaimable: 95908 kB' 'SUnreclaim: 223160 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:04:27.737 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-32 -- # [each node0 meminfo key from MemTotal through HugePages_Free compared against \H\u\g\e\P\a\g\e\s\_\S\u\r\p and skipped via continue]
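The get_nodes step at setup/hugepages.sh@27-33 above enumerates the NUMA nodes with the extglob pattern node+([0-9]) and indexes an array by the numeric id peeled off with ${node##*node}. How the 512/513 values are produced is not visible in the trace, so the sketch below uses the earlier get_meminfo sketch as a stand-in:

  # Hypothetical sketch of the get_nodes enumeration traced above.
  shopt -s extglob
  nodes_sys=()
  for node in /sys/devices/system/node/node+([0-9]); do
      # "${node##*node}" keeps only the trailing id: .../node1 -> 1
      nodes_sys[${node##*node}]=$(get_meminfo HugePages_Total "${node##*node}")
  done
  no_nodes=${#nodes_sys[@]}
  (( no_nodes > 0 )) || exit 1   # bail out if no NUMA nodes were found

On this machine that yields nodes_sys=(512 513) and no_nodes=2, matching the two assignments in the trace.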
IFS=': ' 00:04:27.737 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.737 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.737 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.737 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.737 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.737 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.737 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.737 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.737 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.737 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.737 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.737 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.737 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.737 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.737 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.737 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.737 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.737 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.737 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.737 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.737 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.737 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.737 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.737 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.737 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.737 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.737 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.737 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.737 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.737 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.737 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.737 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.737 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.737 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.737 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.737 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:04:27.737 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.737 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.737 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.737 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.737 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.737 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.737 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.737 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.737 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.737 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.737 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.737 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.737 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.737 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.737 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.737 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.737 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.737 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.737 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.737 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.737 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.737 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.737 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.737 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.737 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.737 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.737 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.737 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.737 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.737 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.737 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.737 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.737 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.737 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.737 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.737 07:51:19 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:27.737 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.737 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.737 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.737 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.737 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.737 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.737 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.737 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.737 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.737 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.737 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.737 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.737 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.737 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.737 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.737 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.737 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.737 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.737 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.737 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.737 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.737 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.737 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.737 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.737 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.737 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.737 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.737 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.737 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.738 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.738 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.738 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.738 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.738 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.738 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
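One detail worth noting before the scan completes below: the backslash-riddled right-hand sides in these comparisons (\H\u\g\e\P\a\g\e\s\_\S\u\r\p) are not what the script source contains. Bash's xtrace escapes every character of a pattern that came from a quoted expansion, to show it is being matched literally rather than as a glob. A minimal sketch of that behavior (variable names here are illustrative, not taken from setup/common.sh):

    set -x                       # enable the same xtrace seen in this log
    get=HugePages_Surp
    var='Inactive(file)'
    # Quoted "$get" is matched literally; xtrace re-prints it with every
    # character escaped: [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
    [[ $var == "$get" ]] || echo "no match, try next meminfo field"

Each non-matching field therefore produces exactly one [[ ... ]] / continue pair in the trace, which is why the scan above runs once per line of meminfo.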
00:04:27.738 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.738 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.738 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.738 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:27.738 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:27.738 07:51:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:27.738 07:51:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:27.738 07:51:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:27.738 07:51:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:27.738 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:27.738 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:04:27.738 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:27.738 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:27.738 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:27.738 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:27.738 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:27.738 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:27.738 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:27.738 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.738 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.738 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 16998536 kB' 'MemUsed: 10713288 kB' 'SwapCached: 0 kB' 'Active: 5569416 kB' 'Inactive: 3396508 kB' 'Active(anon): 5285952 kB' 'Inactive(anon): 0 kB' 'Active(file): 283464 kB' 'Inactive(file): 3396508 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8681996 kB' 'Mapped: 131652 kB' 'AnonPages: 283996 kB' 'Shmem: 5002024 kB' 'KernelStack: 5432 kB' 'PageTables: 4060 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 102856 kB' 'Slab: 251432 kB' 'SReclaimable: 102856 kB' 'SUnreclaim: 148576 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:04:27.738 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.738 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.738 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.738 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.738 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.738 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
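The call being traced here is get_meminfo HugePages_Surp 1 from setup/common.sh: it prefers the per-node meminfo file when one exists, strips the "Node <n> " prefix from each line, then walks the fields until the requested key matches and echoes its value (the echo 0 / return 0 visible above for node 0, and again below for node 1). A condensed sketch, reconstructed from the xtrace; the real helper may differ in detail:

    shopt -s extglob             # needed for the +([0-9]) pattern below

    get_meminfo() {
        local get=$1 node=${2:-}
        local var val
        local mem_f mem
        mem_f=/proc/meminfo
        # Per-node queries read that node's own meminfo, whose lines carry
        # a "Node <n> " prefix that the expansion below strips off.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")
        # IFS=': ' splits "HugePages_Surp:   0" into var=HugePages_Surp,
        # val=0; the unit ("kB"), when present, lands in the discarded _.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
    }

    get_meminfo MemTotal         # e.g. 60541708

The surrounding hugepages.sh loop then folds each result into its per-node tally, which is why every completed scan is immediately followed by an arithmetic step of the form (( nodes_test[node] += 0 )).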
00:04:27.738 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.738 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.738 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.738 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.738 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.738 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.738 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.738 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.738 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.738 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.738 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.738 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.738 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.738 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.738 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.738 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.738 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.738 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.738 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.738 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.738 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.738 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.738 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.738 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.738 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.738 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.738 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.738 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.738 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.738 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.738 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.738 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.738 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.738 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.738 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.738 07:51:19 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:27.738 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.738 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.738 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.738 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.738 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.738 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.738 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.738 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.738 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.738 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.738 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.738 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.738 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.738 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.738 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.738 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.738 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.738 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.738 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.738 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.738 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.738 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.738 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.738 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.738 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.738 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.738 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.738 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.738 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.738 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.738 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.738 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.738 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.738 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.738 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.738 07:51:19 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:27.738 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.739 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.739 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.739 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.739 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.739 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.739 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.739 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.739 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.739 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.739 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.739 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.739 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.739 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.739 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.739 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.739 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.739 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.739 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.739 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.739 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.739 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.739 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.739 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.739 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.739 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.739 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.739 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.739 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.739 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.739 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.739 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.739 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.739 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.739 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.739 07:51:19 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.739 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.739 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.739 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.739 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.739 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.739 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.739 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.739 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.739 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.739 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.739 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.739 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.739 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.739 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.739 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.739 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.739 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.739 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.739 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.739 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.739 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.739 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.739 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.739 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.739 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.739 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.739 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.739 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:27.739 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:27.739 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:27.739 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:27.739 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:27.739 07:51:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:27.739 07:51:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:27.739 07:51:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node 
in "${!nodes_test[@]}" 00:04:27.739 07:51:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:27.739 07:51:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:27.739 07:51:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:04:27.739 node0=512 expecting 513 00:04:27.739 07:51:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:27.739 07:51:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:27.739 07:51:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:27.739 07:51:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:04:27.739 node1=513 expecting 512 00:04:27.739 07:51:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:04:27.739 00:04:27.739 real 0m1.371s 00:04:27.739 user 0m0.566s 00:04:27.739 sys 0m0.757s 00:04:27.739 07:51:19 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:27.739 07:51:19 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:27.739 ************************************ 00:04:27.739 END TEST odd_alloc 00:04:27.739 ************************************ 00:04:27.998 07:51:19 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:27.998 07:51:19 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:27.998 07:51:19 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:27.998 07:51:19 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:27.998 07:51:19 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:27.998 ************************************ 00:04:27.998 START TEST custom_alloc 00:04:27.998 ************************************ 00:04:27.998 07:51:19 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:04:27.998 07:51:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:04:27.998 07:51:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:04:27.998 07:51:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:27.998 07:51:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:27.998 07:51:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:27.998 07:51:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:27.998 07:51:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:27.998 07:51:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:27.998 07:51:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:27.998 07:51:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:27.998 07:51:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:27.998 07:51:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:27.998 07:51:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:27.998 07:51:19 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:27.998 07:51:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:27.998 07:51:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:27.998 07:51:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:27.998 07:51:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:27.998 07:51:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:27.998 07:51:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:27.998 07:51:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:27.998 07:51:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:04:27.998 07:51:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:27.998 07:51:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:27.998 07:51:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:27.998 07:51:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:27.998 07:51:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:27.998 07:51:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:27.998 07:51:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:27.998 07:51:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:04:27.998 07:51:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:04:27.998 07:51:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:27.998 07:51:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:27.998 07:51:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:27.998 07:51:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:27.998 07:51:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:27.998 07:51:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:27.998 07:51:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:27.998 07:51:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:27.998 07:51:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:27.998 07:51:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:27.998 07:51:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:27.998 07:51:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:27.998 07:51:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:27.998 07:51:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:27.998 07:51:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:27.998 07:51:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:27.998 07:51:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:04:27.998 07:51:19 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:27.998 07:51:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:27.998 07:51:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:27.998 07:51:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:27.998 07:51:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:27.998 07:51:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:27.998 07:51:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:27.998 07:51:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:27.998 07:51:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:27.998 07:51:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:27.998 07:51:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:27.999 07:51:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:27.999 07:51:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:27.999 07:51:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:27.999 07:51:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:04:27.999 07:51:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:27.999 07:51:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:27.999 07:51:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:27.999 07:51:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:04:27.999 07:51:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:27.999 07:51:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:04:27.999 07:51:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:04:27.999 07:51:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:27.999 07:51:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:28.934 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:28.934 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:28.934 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:28.934 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:28.934 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:28.934 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:28.934 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:28.934 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:28.934 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:28.934 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:28.935 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:28.935 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 
00:04:28.935 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:28.935 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:28.935 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:28.935 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:28.935 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:29.198 07:51:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:04:29.198 07:51:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:29.198 07:51:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:04:29.198 07:51:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:29.198 07:51:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:29.198 07:51:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:29.198 07:51:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:29.198 07:51:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:29.198 07:51:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:29.198 07:51:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:29.198 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:29.198 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:29.198 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:29.198 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:29.198 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:29.198 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:29.198 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:29.198 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:29.198 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:29.198 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.198 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.199 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 42806592 kB' 'MemAvailable: 46313260 kB' 'Buffers: 2704 kB' 'Cached: 12218084 kB' 'SwapCached: 0 kB' 'Active: 9235300 kB' 'Inactive: 3506552 kB' 'Active(anon): 8840948 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 524400 kB' 'Mapped: 169040 kB' 'Shmem: 8319884 kB' 'KReclaimable: 198772 kB' 'Slab: 570712 kB' 'SReclaimable: 198772 kB' 'SUnreclaim: 371940 kB' 'KernelStack: 12800 kB' 'PageTables: 7588 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086592 kB' 'Committed_AS: 9955204 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196192 kB' 'VmallocChunk: 0 kB' 'Percpu: 35328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 
kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1805916 kB' 'DirectMap2M: 15939584 kB' 'DirectMap1G: 51380224 kB' 00:04:29.199 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.199 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.199 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.199 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.199 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.199 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.199 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.199 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.199 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.199 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.199 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.199 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.199 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.199 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.199 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.199 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.199 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.199 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.199 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.199 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.199 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.199 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.199 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.199 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.199 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.199 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.199 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.199 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.199 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.199 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.199 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.199 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
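Two things in the transition above are easy to miss. First, odd_alloc's closing check [[ 512 513 == \5\1\2\ \5\1\3 ]] collapses the per-node counts into the keys of the sorted_t array and compares that against the expected "512 513", so the test passes even though the log prints "node0=512 expecting 513" and "node1=513 expecting 512": only the pair of counts has to match, not which node received the odd page. Second, custom_alloc plans an asymmetric layout, nodes_hp[0]=512 and nodes_hp[1]=1024, joins it into the HUGENODE string with IFS=, and totals 1536 pages, which is exactly the HugePages_Total: 1536 reported in the meminfo dump above. A minimal sketch of that planning step, assuming the default 2048 kB hugepage size shown in the dump (the division by the page size is inferred, not traced):

    IFS=,                               # "${HUGENODE[*]}" joins with commas
    nodes_hp=()
    nodes_hp[0]=$(( 1048576 / 2048 ))   # 1 GiB on node 0 -> 512 pages
    nodes_hp[1]=$(( 2097152 / 2048 ))   # 2 GiB on node 1 -> 1024 pages
    HUGENODE=()
    _nr_hugepages=0
    for node in "${!nodes_hp[@]}"; do
        HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
        (( _nr_hugepages += nodes_hp[node] ))
    done
    echo "HUGENODE=${HUGENODE[*]}"      # HUGENODE=nodes_hp[0]=512,nodes_hp[1]=1024
    echo "nr_hugepages=$_nr_hugepages"  # nr_hugepages=1536

setup.sh then consumes the HUGENODE string to place the pages, and the verify_nr_hugepages pass being traced here reads them back out of meminfo to confirm the kernel actually honored the per-node split.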
00:04:29.199 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.199 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.199 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.199 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.199 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.199 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.199 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.199 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.199 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.199 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.199 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.199 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.199 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.199 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.199 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.199 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.199 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.199 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.199 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.199 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.199 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.199 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.199 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.199 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.199 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.199 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.199 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.199 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.199 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.199 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.199 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.199 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.199 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.199 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.199 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:04:29.199 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.199 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.199 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.199 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.199 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.199 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.199 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.199 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.199 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.199 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.199 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.199 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.199 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.199 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.199 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.199 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.199 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.199 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.199 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.199 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.199 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.199 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.199 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.199 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.199 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.199 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.199 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.200 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.200 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.200 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.200 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.200 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.200 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.200 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.200 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:04:29.200 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.200 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.200 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.200 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.200 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.200 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.200 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.200 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.200 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.200 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.200 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.200 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.200 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.200 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.200 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.200 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.200 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.200 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.200 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.200 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.200 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.200 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.200 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.200 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.200 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.200 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.200 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.200 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.200 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.200 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.200 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.200 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.200 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.200 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.200 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.200 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.200 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.200 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.200 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.200 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.200 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.200 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.200 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.200 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.200 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.200 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.200 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.200 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.200 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.200 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.200 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.200 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.200 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.200 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.200 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.200 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.200 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.200 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.200 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:29.200 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:29.200 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:29.200 07:51:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:29.200 07:51:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:29.200 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:29.200 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:29.200 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:29.200 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:29.200 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:29.200 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:29.200 07:51:20 setup.sh.hugepages.custom_alloc 
-- setup/common.sh@25 -- # [[ -n '' ]] 00:04:29.200 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:29.200 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:29.200 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.200 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.200 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 42806592 kB' 'MemAvailable: 46313260 kB' 'Buffers: 2704 kB' 'Cached: 12218084 kB' 'SwapCached: 0 kB' 'Active: 9236324 kB' 'Inactive: 3506552 kB' 'Active(anon): 8841972 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 525396 kB' 'Mapped: 169104 kB' 'Shmem: 8319884 kB' 'KReclaimable: 198772 kB' 'Slab: 570704 kB' 'SReclaimable: 198772 kB' 'SUnreclaim: 371932 kB' 'KernelStack: 12768 kB' 'PageTables: 7408 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086592 kB' 'Committed_AS: 9956816 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196180 kB' 'VmallocChunk: 0 kB' 'Percpu: 35328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1805916 kB' 'DirectMap2M: 15939584 kB' 'DirectMap1G: 51380224 kB' 00:04:29.200 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.200 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.200 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.200 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.201 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.201 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.201 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.201 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.201 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.201 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.201 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.201 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.201 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:29.201 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.201 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.201 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.201 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:04:29.201 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
[... per-key "[[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue" xtrace iterations over the remaining /proc/meminfo fields (SwapCached through HugePages_Rsvd) elided ...]
00:04:29.203 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:29.203 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:29.203 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:29.203 07:51:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0
00:04:29.203 07:51:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:29.203 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:29.203 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:04:29.203 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:29.203 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:29.203 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:29.203 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:29.203 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:29.203 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:29.203 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:29.203 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:29.203 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:29.203 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 42808316 kB' 'MemAvailable: 46314984 kB' 'Buffers: 2704 kB' 'Cached: 12218104 kB' 'SwapCached: 0 kB' 'Active: 9232180 kB' 'Inactive: 3506552 kB' 'Active(anon): 8837828 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521260 kB' 'Mapped: 169164 kB' 'Shmem: 8319904 kB' 'KReclaimable: 198772 kB' 'Slab: 570704 kB' 'SReclaimable: 198772 kB' 'SUnreclaim: 371932 kB' 'KernelStack: 12784 kB' 'PageTables: 7440 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086592 kB' 'Committed_AS: 9952604 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196144 kB' 'VmallocChunk: 0 kB' 'Percpu: 35328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1805916 kB' 'DirectMap2M: 15939584 kB' 'DirectMap1G: 51380224 kB'
[... per-key "[[ <field> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] / continue" xtrace iterations (MemTotal through HugePages_Free) elided ...]
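The trace above is get_meminfo from setup/common.sh walking /proc/meminfo one field at a time: each line is split on IFS=': ' into a key and a value, every non-matching key takes the continue branch seen repeatedly in the xtrace, and the first match echoes the value and returns. A minimal bash sketch of that pattern, reconstructed from the xtrace output (the actual SPDK source may differ in detail):

    #!/usr/bin/env bash
    shopt -s extglob                    # for the +([0-9]) patterns shown in the trace

    # get_meminfo KEY [NODE] -> prints the numeric value of KEY.
    # With NODE set, it reads the per-node view under /sys instead of /proc.
    get_meminfo() {
        local get=$1 node=$2 var val _ line
        local mem_f=/proc/meminfo mem
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix each line with "Node N "
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"   # "HugePages_Surp: 0" -> var=key, val=0
            [[ $var == "$get" ]] || continue         # the branch repeated for every field above
            echo "$val"                              # value only; the "kB" unit lands in _
            return 0
        done
        return 1
    }

Called as get_meminfo HugePages_Surp it prints 0 on this host, and as get_meminfo HugePages_Total it prints 1536, matching the echo lines in the trace.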
00:04:29.205 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:29.205 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:29.205 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:29.205 07:51:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:29.205 07:51:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536
00:04:29.205 nr_hugepages=1536
00:04:29.205 07:51:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:29.205 resv_hugepages=0
00:04:29.205 07:51:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:29.205 surplus_hugepages=0
00:04:29.205 07:51:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:29.205 anon_hugepages=0
00:04:29.205 07:51:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv ))
00:04:29.205 07:51:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages ))
00:04:29.205 07:51:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:29.205 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:29.205 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:04:29.205 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:29.205 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:29.205 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:29.205 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:29.205 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:29.205 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:29.205 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:29.205 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:29.205 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:29.205 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 42805500 kB' 'MemAvailable: 46312168 kB' 'Buffers: 2704 kB' 'Cached: 12218120 kB' 'SwapCached: 0 kB' 'Active: 9235952 kB' 'Inactive: 3506552 kB' 'Active(anon): 8841600 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 525016 kB' 'Mapped: 168964 kB' 'Shmem: 8319920 kB' 'KReclaimable: 198772 kB' 'Slab: 570676 kB' 'SReclaimable: 198772 kB' 'SUnreclaim: 371904 kB' 'KernelStack: 12784 kB' 'PageTables: 7428 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086592 kB' 'Committed_AS: 9956860 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196132 kB' 'VmallocChunk: 0 kB' 'Percpu: 35328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1805916 kB' 'DirectMap2M: 15939584 kB' 'DirectMap1G: 51380224 kB'
[... per-key "[[ <field> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] / continue" xtrace iterations (MemTotal through Unaccepted) elided ...]
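The hugepages.sh checks around this point are plain accounting: the kernel's HugePages_Total must equal the page count the test requested plus any surplus and reserved pages, and with surp and resv both 0 that reduces to 1536 == nr_hugepages. In bash arithmetic, using the values echoed above:

    nr_hugepages=1536                          # requested by the custom_alloc test
    surp=0 resv=0                              # get_meminfo HugePages_Surp / HugePages_Rsvd
    total=1536                                 # get_meminfo HugePages_Total
    (( total == nr_hugepages + surp + resv ))  # the @107/@110 assertion in the trace
    (( total == nr_hugepages ))                # the @109 shortcut when surp == resv == 0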
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.207 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.207 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.207 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.207 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.207 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.207 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.207 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.207 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.207 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.207 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.207 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.207 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.207 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.207 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.207 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.207 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.207 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.207 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.207 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:29.207 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:29.207 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:29.207 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:29.207 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:04:29.207 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:29.207 07:51:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:29.207 07:51:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:29.207 07:51:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:04:29.207 07:51:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:29.207 07:51:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:29.207 07:51:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:29.207 07:51:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:29.207 07:51:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:29.207 07:51:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 
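The trace above is setup/common.sh's get_meminfo doing its field scan: mapfile slurps the chosen meminfo file, an extglob expansion strips the "Node N " prefix that per-node files carry, and an IFS=': ' read loop skips every key until the requested one matches, at which point the value is echoed. A minimal runnable sketch of that reader, reconstructed from the trace (the mapfile, prefix strip, and read idioms are taken from the trace; the function body and framing are an approximation, not SPDK's verbatim source):

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the +([0-9]) pattern below

    # Sketch: scan a meminfo file and print the value of the first
    # key that matches the requested field name.
    get_meminfo() {
        local get=$1 node=${2:-}          # field name, optional NUMA node
        local mem_f=/proc/meminfo mem line var val _
        # Per-node queries read that node's own meminfo when it exists.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem <"$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")  # drop the "Node N " prefix of per-node files
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<<"$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }

    get_meminfo HugePages_Total    # prints 1536 on the rig traced above
    get_meminfo HugePages_Surp 0   # per-node query; prints 0 above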
00:04:29.207 07:51:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:29.208 07:51:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:29.208 07:51:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:29.208 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:29.208 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0
00:04:29.208 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:29.208 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:29.208 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:29.208 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:29.208 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:29.208 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:29.208 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:29.208 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:29.208 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:29.208 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 26858340 kB' 'MemUsed: 5971544 kB' 'SwapCached: 0 kB' 'Active: 3662468 kB' 'Inactive: 110044 kB' 'Active(anon): 3551580 kB' 'Inactive(anon): 0 kB' 'Active(file): 110888 kB' 'Inactive(file): 110044 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3538772 kB' 'Mapped: 37172 kB' 'AnonPages: 236080 kB' 'Shmem: 3317840 kB' 'KernelStack: 7624 kB' 'PageTables: 4208 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 95916 kB' 'Slab: 319292 kB' 'SReclaimable: 95916 kB' 'SUnreclaim: 223376 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:04:29.208 07:51:20 [xtrace condensed: the field scan continues past every node0 key from MemTotal through HugePages_Total and HugePages_Free until HugePages_Surp matches]
00:04:29.209 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:29.209 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:29.209 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:29.209 07:51:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
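The hugepages.sh@115-@117 entries above fold reserved and per-node surplus pages into each node's expected count before the final comparison. A small sketch of that bookkeeping under the values this run reported (get_meminfo as sketched earlier; resv=0 follows from the 1536 == nr_hugepages + surp + resv check above, and the standalone framing is an assumption):

    # Per-node bookkeeping as traced: start from the counts the test
    # allocated, then fold in reserved and surplus pages so the later
    # comparison against the live system is exact.
    declare -a nodes_test=([0]=512 [1]=1024)   # counts requested by custom_alloc
    resv=0                                     # HugePages_Rsvd reported this run
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))
        surp=$(get_meminfo HugePages_Surp "$node")   # 0 for both nodes above
        (( nodes_test[node] += surp ))
    done
    echo "${nodes_test[@]}"                    # -> 512 1024, unchanged here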
00:04:29.209 07:51:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:29.209 07:51:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:29.209 07:51:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:04:29.209 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:29.209 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1
00:04:29.209 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:29.209 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:29.209 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:29.209 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:04:29.209 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:04:29.209 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:29.209 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:29.209 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:29.209 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:29.210 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711824 kB' 'MemFree: 15951820 kB' 'MemUsed: 11760004 kB' 'SwapCached: 0 kB' 'Active: 5569256 kB' 'Inactive: 3396508 kB' 'Active(anon): 5285792 kB' 'Inactive(anon): 0 kB' 'Active(file): 283464 kB' 'Inactive(file): 3396508 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8682096 kB' 'Mapped: 131764 kB' 'AnonPages: 283796 kB' 'Shmem: 5002124 kB' 'KernelStack: 5224 kB' 'PageTables: 3468 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 102856 kB' 'Slab: 251376 kB' 'SReclaimable: 102856 kB' 'SUnreclaim: 148520 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:04:29.210 07:51:20 [xtrace condensed: the field scan continues past every node1 key from MemTotal through HugePages_Total and HugePages_Free until HugePages_Surp matches]
00:04:29.469 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:29.469 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:29.469 07:51:20 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:29.469 07:51:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
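The hugepages.sh@126-@130 entries that follow compare the measured per-node counts against the requested ones using sparse indexed arrays as sorted sets: writing sorted_t[count]=1 makes each count an array index, bash lists indices in ascending order, and comparing the two joined index lists checks the whole distribution at once regardless of node order. A minimal sketch of that trick (the real script joins the sets with commas, as the [[ 512,1024 == ... ]] test below shows; a plain space join is used here, and the echo framing is an assumption):

    # Sparse indexed arrays double as sorted sets: the page counts
    # become the indices, and "${!arr[*]}" lists indices in ascending
    # order, so equal index lists mean equal distributions.
    declare -a sorted_t=() sorted_s=()
    declare -a nodes_test=([0]=512 [1]=1024)   # measured in this run
    declare -a nodes_sys=([0]=512 [1]=1024)    # requested by the test
    for node in "${!nodes_test[@]}"; do
        sorted_t[nodes_test[node]]=1
        sorted_s[nodes_sys[node]]=1
        echo "node$node=${nodes_test[node]} expecting ${nodes_sys[node]}"
    done
    # "512 1024" on both sides here, so the check succeeds.
    [[ ${!sorted_t[*]} == "${!sorted_s[*]}" ]] && echo "distribution matches"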
00:04:29.469 07:51:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:29.469 07:51:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:29.469 07:51:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:29.469 07:51:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:04:29.469 node0=512 expecting 512
00:04:29.469 07:51:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:29.469 07:51:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:29.469 07:51:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:29.469 07:51:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024'
00:04:29.469 node1=1024 expecting 1024
00:04:29.469 07:51:20 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]]
00:04:29.469 real 0m1.430s
00:04:29.469 user 0m0.573s
00:04:29.469 sys 0m0.800s
00:04:29.469 07:51:20 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:29.469 07:51:20 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x
00:04:29.469 ************************************
00:04:29.469 END TEST custom_alloc
00:04:29.469 ************************************
00:04:29.470 07:51:20 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:04:29.470 07:51:20 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:04:29.470 07:51:20 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:04:29.470 07:51:20 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:29.470 07:51:20 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:29.470 ************************************
00:04:29.470 START TEST no_shrink_alloc
00:04:29.470 ************************************
00:04:29.470 07:51:20 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc
00:04:29.470 07:51:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:04:29.470 07:51:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:04:29.470 07:51:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:04:29.470 07:51:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift
00:04:29.470 07:51:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0')
00:04:29.470 07:51:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:04:29.470 07:51:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:29.470 07:51:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:29.470 07:51:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:04:29.470 07:51:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:04:29.470 07:51:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:29.470 07:51:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:29.470 07:51:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:29.470 07:51:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:29.470 07:51:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:29.470 07:51:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:04:29.470 07:51:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:29.470 07:51:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:04:29.470 07:51:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0
00:04:29.470 07:51:20 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output
00:04:29.470 07:51:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:29.470 07:51:20 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:04:30.403 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:30.403 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:30.403 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:30.403 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:30.403 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:30.403 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:30.403 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:30.403 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:30.403 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:30.403 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:30.403 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:30.403 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:30.403 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:30.403 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:30.403 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:30.403 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:30.403 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
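Just before the setup.sh rescan above, get_test_nr_hugepages 2097152 0 sized this test. Judging by the trace (size=2097152 yielding nr_hugepages=1024 on a rig whose meminfo reports Hugepagesize: 2048 kB and Hugetlb: 2097152 kB), the size argument appears to be in kB. A sketch of that arithmetic, reconstructed from the trace (variable names follow the trace; the kB interpretation and the standalone framing are assumptions):

    # Sizing arithmetic as traced: a kB size is divided by the default
    # hugepage size, and the resulting page count is pinned to the node
    # list passed after the size (node 0 only here).
    size=2097152              # kB, first argument in the trace
    default_hugepages=2048    # kB, Hugepagesize from /proc/meminfo
    node_ids=(0)              # remaining arguments name the target nodes
    nr_hugepages=$(( size / default_hugepages ))
    echo "$nr_hugepages"      # -> 1024, matching nr_hugepages=1024 above
    declare -a nodes_test=()
    for node in "${node_ids[@]}"; do
        nodes_test[node]=$nr_hugepages   # 1024 pages expected on node 0
    done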
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:30.667 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:30.667 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:30.667 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:30.667 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.667 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.667 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43832968 kB' 'MemAvailable: 47339636 kB' 'Buffers: 2704 kB' 'Cached: 12218208 kB' 'SwapCached: 0 kB' 'Active: 9231164 kB' 'Inactive: 3506552 kB' 'Active(anon): 8836812 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 520128 kB' 'Mapped: 168660 kB' 'Shmem: 8320008 kB' 'KReclaimable: 198772 kB' 'Slab: 570912 kB' 'SReclaimable: 198772 kB' 'SUnreclaim: 372140 kB' 'KernelStack: 12848 kB' 'PageTables: 7604 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9951016 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196176 kB' 'VmallocChunk: 0 kB' 'Percpu: 35328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1805916 kB' 'DirectMap2M: 15939584 kB' 'DirectMap1G: 51380224 kB' 00:04:30.667 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.667 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.667 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.667 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.667 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.667 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.667 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.667 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.667 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.667 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.667 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.667 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.667 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.667 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.667 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:30.667 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.667 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.667 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.667 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.667 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.667 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.667 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.667 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.667 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.667 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.667 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.667 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.667 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.667 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.667 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.667 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.667 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.667 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.667 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.667 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.667 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.667 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.667 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.667 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.667 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.667 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.667 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.667 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.667 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.667 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.667 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.667 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.667 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.667 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable 
== \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:30.667 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[xtrace, condensed: the common.sh@31-32 read loop tests the remaining /proc/meminfo keys -- Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted -- against AnonHugePages; each non-match hits continue]
00:04:30.668 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:30.668 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:30.668 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:30.668 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:04:30.668 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:30.668 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:30.668 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:30.668 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:30.668 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:30.668 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:30.668 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:30.668 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:30.668 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:30.668 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:30.668 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:30.668 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:30.668 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43832968 kB' 'MemAvailable: 47339636 kB' 'Buffers: 2704 kB' 'Cached: 12218208 kB' 'SwapCached: 0 kB' 'Active: 9230828 kB' 'Inactive: 3506552 kB' 'Active(anon): 8836476 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 519780 kB' 'Mapped: 168604 kB' 'Shmem: 8320008 kB' 'KReclaimable: 198772 kB' 'Slab: 570920 kB' 'SReclaimable: 198772 kB' 'SUnreclaim: 372148 kB' 'KernelStack: 12864 kB' 'PageTables: 7588 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9951032 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196128 kB' 'VmallocChunk: 0 kB' 'Percpu: 35328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1805916 kB' 'DirectMap2M: 15939584 kB' 'DirectMap1G: 51380224 kB'
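Reading aid: the trace above is get_meminfo() from setup/common.sh scanning /proc/meminfo one line at a time. The sketch below reconstructs that function purely from the traced commands (common.sh@16-33); the function wrapper, how the two @23/@25 tests are combined, and the failure return are assumptions, not SPDK's verbatim source.

# Sketch of setup/common.sh:get_meminfo, reconstructed from the xtrace above.
shopt -s extglob                              # "+([0-9])" in the strip below is extglob

get_meminfo() {
	local get=$1                              # key to look up, e.g. HugePages_Surp
	local node=${2:-}                         # optional NUMA node; empty in this run
	local var val
	local mem_f mem
	mem_f=/proc/meminfo
	# Per-node files prefix every line with "Node <n> "; strip it so one parser
	# works for both sources. The real combination of these two tests is a guess.
	if [[ -e /sys/devices/system/node/node$node/meminfo && -n $node ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi
	mapfile -t mem < "$mem_f"
	mem=("${mem[@]#Node +([0-9]) }")          # no-op for plain /proc/meminfo input
	while IFS=': ' read -r var val _; do      # "Key:   123 kB" -> var=Key, val=123
		[[ $var == "$get" ]] || continue      # quoted RHS: literal comparison
		echo "$val"                           # print the number, drop the "kB"
		return 0
	done < <(printf '%s\n' "${mem[@]}")       # the printf '%s\n' seen in the trace
	return 1                                  # key not present
}

With node unset, all three lookups in this block read /proc/meminfo, and AnonHugePages, HugePages_Surp and HugePages_Rsvd all come back 0 here, which is what the test wants to see.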
[xtrace, condensed: the common.sh@31-32 read loop tests every /proc/meminfo key from MemTotal through HugePages_Rsvd against HugePages_Surp; each non-match hits continue]
00:04:30.669 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:30.669 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:30.669 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:30.669 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:04:30.669 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:30.669 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:30.669 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:30.669 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:30.669 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:30.669 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:30.669 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:30.669 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:30.669 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:30.669 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:30.669 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:30.669 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:30.669 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43832476 kB' 'MemAvailable: 47339144 kB' 'Buffers: 2704 kB' 'Cached: 12218232 kB' 'SwapCached: 0 kB' 'Active: 9230900 kB' 'Inactive: 3506552 kB' 'Active(anon): 8836548 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 519876 kB' 'Mapped: 168528 kB' 'Shmem: 8320032 kB' 'KReclaimable: 198772 kB' 'Slab: 570928 kB' 'SReclaimable: 198772 kB' 'SUnreclaim: 372156 kB' 'KernelStack: 12928 kB' 'PageTables: 7756 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9950816 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196064 kB' 'VmallocChunk: 0 kB' 'Percpu: 35328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1805916 kB' 'DirectMap2M: 15939584 kB' 'DirectMap1G: 51380224 kB'
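Side note on the notation: patterns like \H\u\g\e\P\a\g\e\s\_\S\u\r\p above are not corruption. Under set -x, bash re-prints a quoted right-hand side of == inside [[ ]] with each character backslash-escaped, to mark that it is matched literally rather than as a glob. A minimal standalone repro (hypothetical snippet, not part of the test suite):

# Why the trace renders the lookup key with per-character escapes.
set -x
get=HugePages_Surp
[[ HugePages_Surp == "$get" ]]       # quoted: traced as [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
[[ HugePages_Surp == HugePages_* ]]  # unquoted pattern: traced bare, matched as a glob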
[xtrace, condensed: the common.sh@31-32 read loop tests every /proc/meminfo key from MemTotal through HugePages_Free against HugePages_Rsvd; each non-match hits continue]
00:04:30.670 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:30.670 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:30.670 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:30.670 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:30.670 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:30.670 nr_hugepages=1024
00:04:30.670 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:30.670 resv_hugepages=0
00:04:30.670 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:30.670 surplus_hugepages=0
00:04:30.670 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:30.670 anon_hugepages=0
00:04:30.670 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:30.670 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:04:30.670 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:30.670 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:30.670 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:30.670 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:30.670 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:30.670 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:30.670 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:30.670 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:30.670 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:30.670 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:30.670 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:30.670 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:30.670 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43832860 kB' 'MemAvailable: 47339528 kB' 'Buffers: 2704 kB' 'Cached: 12218252 kB' 'SwapCached: 0 kB' 'Active: 9230656 kB' 'Inactive: 3506552 kB' 'Active(anon): 8836304 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 519588 kB' 'Mapped: 168528 kB' 'Shmem: 8320052 kB' 'KReclaimable: 198772 kB' 'Slab: 570928 kB' 'SReclaimable: 198772 kB' 'SUnreclaim: 372156 kB' 'KernelStack: 12864 kB' 'PageTables: 7544 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9950840 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196064 kB' 'VmallocChunk: 0 kB' 'Percpu: 35328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1805916 kB' 'DirectMap2M: 15939584 kB' 'DirectMap1G: 51380224 kB'
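This is the payoff of the no_shrink_alloc step: after the allocation work, the kernel's hugepage counters must still add up to the 1024 pages the test configured, with zero surplus and zero reserved. A condensed sketch of the checks traced at setup/hugepages.sh@97-109 follows; the variable names are placeholders (total_now especially: it stands for whatever the script had already expanded to 1024 at @107), only the echoed lines and comparisons mirror the trace. It reuses the get_meminfo sketch shown earlier.

# Condensed sketch (assumed plumbing) of the no_shrink_alloc verification.
anon=$(get_meminfo AnonHugePages)   # 0 here: no transparent hugepages counted in
surp=$(get_meminfo HugePages_Surp)  # 0 here: no surplus pages beyond the pool
resv=$(get_meminfo HugePages_Rsvd)  # 0 here: no reserved-but-unfaulted pages
nr_hugepages=1024                   # pool size this test configured earlier

echo "nr_hugepages=$nr_hugepages"
echo "resv_hugepages=$resv"
echo "surplus_hugepages=$surp"
echo "anon_hugepages=$anon"

total_now=1024                      # placeholder for the 1024 seen at hugepages.sh@107
(( total_now == nr_hugepages + surp + resv ))  # pool neither shrank nor overgrew
(( total_now == nr_hugepages ))                # and matches the request exactly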
[xtrace, condensed: the common.sh@31-32 read loop begins testing /proc/meminfo keys (MemTotal through SwapFree) against HugePages_Total; the captured log ends mid-scan]
[[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.670 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.670 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.670 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.670 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.670 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.670 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.670 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.670 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.670 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.670 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.670 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.670 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.670 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.670 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.670 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.670 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.670 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.670 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.670 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.670 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.670 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.670 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.670 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.670 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.670 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.670 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.670 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.670 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.670 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.670 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.670 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.670 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.670 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.670 07:51:22 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.670 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.670 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.670 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.670 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.670 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.670 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.670 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.671 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.671 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.671 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.671 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.671 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.671 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.671 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.671 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.671 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.671 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.671 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.671 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.671 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.671 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.671 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.671 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.671 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.671 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.671 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.671 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.671 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.671 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.671 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.671 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.671 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.671 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:30.671 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.671 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.671 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.671 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.671 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.671 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.671 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.671 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.671 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.671 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.671 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.671 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.671 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.671 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.671 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.671 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.671 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.671 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.671 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.671 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.671 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.671 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.671 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.671 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.671 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.671 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.671 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.671 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.671 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.671 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.671 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.671 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.671 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:04:30.671 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.671 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.671 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.671 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.671 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.671 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.671 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.671 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.671 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.671 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.671 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.671 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.671 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.671 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.671 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.671 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.671 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.671 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.671 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.671 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.671 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.671 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.671 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.671 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.671 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:30.671 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:30.671 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:30.671 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.671 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:30.671 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:30.671 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:30.671 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:30.671 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:30.671 07:51:22 setup.sh.hugepages.no_shrink_alloc -- 
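
The block above is a single get_meminfo lookup: the helper loads /proc/meminfo (or, for a per-node query, that node's meminfo in sysfs), strips any "Node N " prefix, splits each line on ': ', and prints the value of the one key it was asked for. A minimal bash sketch of that lookup, reconstructed from the xtrace rather than copied from SPDK's scripts/setup/common.sh (the real helper mapfiles the whole file first and differs in detail):

    #!/usr/bin/env bash
    # get_meminfo <key> [node] - print the value of one meminfo field.
    get_meminfo() {
        local get=$1 node=${2:-} line var val _
        local mem_f=/proc/meminfo
        # per-node counters come from sysfs when a node index is given
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        while read -r line; do
            line=${line#"Node $node "}              # sysfs lines carry a "Node N " prefix
            IFS=': ' read -r var val _ <<< "$line"  # "HugePages_Total: 1024" -> var, val
            if [[ $var == "$get" ]]; then
                echo "$val"                         # the "kB" unit, if any, fell into the third field
                return 0
            fi
        done < "$mem_f"
        return 1
    }

    get_meminfo HugePages_Total     # prints 1024 on the box traced above
    get_meminfo HugePages_Surp 0    # prints node0's surplus count (0 here)

Reading the file line by line keeps the sketch short; for the keys traced here the result is the same as the mapfile-based original.
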
00:04:30.671 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:30.671 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:04:30.671 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:30.671 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:30.671 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:30.671 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:04:30.671 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:30.671 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:30.671 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:30.671 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:30.671 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:30.671 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:30.671 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:04:30.671 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:30.671 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:30.671 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:30.671 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:30.671 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:30.671 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:30.671 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:30.671 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:30.671 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:30.671 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 25791328 kB' 'MemUsed: 7038556 kB' 'SwapCached: 0 kB' 'Active: 3662244 kB' 'Inactive: 110044 kB' 'Active(anon): 3551356 kB' 'Inactive(anon): 0 kB' 'Active(file): 110888 kB' 'Inactive(file): 110044 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3538784 kB' 'Mapped: 36904 kB' 'AnonPages: 236756 kB' 'Shmem: 3317852 kB' 'KernelStack: 7640 kB' 'PageTables: 4344 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 95916 kB' 'Slab: 319296 kB' 'SReclaimable: 95916 kB' 'SUnreclaim: 223380 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:04:30.671 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:30.671 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[... setup/common.sh@31-32: the IFS=': ' / read -r var val _ / compare / continue cycle repeats for each remaining non-matching node0 key, elided ...]
00:04:30.672 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:30.672 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:30.672 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:30.672 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:30.672 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:30.672 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:30.672 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:30.672 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:30.672 node0=1024 expecting 1024
00:04:30.672 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:30.672 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:04:30.672 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512
00:04:30.672 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output
00:04:30.672 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:30.672 07:51:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:04:32.049 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:32.049 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:32.049 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:32.049 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:32.049 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:32.049 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:32.049 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:32.049 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:32.049 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:32.049 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:32.049 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:32.049 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:32.049 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:32.049 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:32.049 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:32.049 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:32.049 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:32.049 INFO: Requested 512 hugepages but 1024 already allocated on node0
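
The INFO line is the point of the no_shrink_alloc case: setup.sh ran with NRHUGE=512 and CLEAR_HUGE=no, found node0 already holding 1024 hugepages, and kept the larger allocation rather than shrinking it. The per-node count it consulted lives in sysfs; the following standalone sketch performs the same check against the standard hugetlb layout (illustrative only, not code taken from scripts/setup.sh):

    #!/usr/bin/env bash
    # Compare each node's allocated 2 MiB hugepages against a requested count.
    NRHUGE=${NRHUGE:-512}
    for node in /sys/devices/system/node/node[0-9]*; do
        nr_f=$node/hugepages/hugepages-2048kB/nr_hugepages
        [[ -e $nr_f ]] || continue
        allocated=$(<"$nr_f")
        if (( allocated >= NRHUGE )); then
            # nothing to do: with CLEAR_HUGE=no the existing allocation stays
            echo "INFO: Requested $NRHUGE hugepages but $allocated already allocated on ${node##*/}"
        fi
    done

The hugepages-2048kB directory matches the Hugepagesize: 2048 kB reported in the snapshots above; a 1 GiB pool would appear as hugepages-1048576kB instead.
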
00:04:32.049 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:04:32.049 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:04:32.049 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:32.049 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:32.049 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:32.049 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:32.049 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:32.049 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:32.049 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:32.049 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:32.049 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:32.049 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:32.049 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:32.049 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:32.049 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:32.049 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:32.049 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:32.049 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:32.049 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:32.049 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:32.049 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43837384 kB' 'MemAvailable: 47344052 kB' 'Buffers: 2704 kB' 'Cached: 12218324 kB' 'SwapCached: 0 kB' 'Active: 9231620 kB' 'Inactive: 3506552 kB' 'Active(anon): 8837268 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 520472 kB' 'Mapped: 168600 kB' 'Shmem: 8320124 kB' 'KReclaimable: 198772 kB' 'Slab: 570956 kB' 'SReclaimable: 198772 kB' 'SUnreclaim: 372184 kB' 'KernelStack: 12944 kB' 'PageTables: 7740 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9951388 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196176 kB' 'VmallocChunk: 0 kB' 'Percpu: 35328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1805916 kB' 'DirectMap2M: 15939584 kB' 'DirectMap1G: 51380224 kB'
00:04:32.049 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:32.050 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[... setup/common.sh@31-32: the IFS=': ' / read -r var val _ / compare / continue cycle repeats for each remaining non-matching key, elided ...]
00:04:32.050 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:32.051 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:32.051 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:32.051 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
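
The anon=0 recorded above is gated: the hugepages.sh@96 test reads the transparent-hugepage mode ("always [madvise] never" on this box, so madvise is active) and only queries AnonHugePages when THP is not set to "[never]". A sketch of that guard, assuming the standard sysfs location (the exact bookkeeping in scripts/setup/hugepages.sh may differ):

    #!/usr/bin/env bash
    anon=0
    thp=/sys/kernel/mm/transparent_hugepage/enabled
    # the active mode is the bracketed word, e.g. "always [madvise] never"
    if [[ -e $thp && $(<"$thp") != *"[never]"* ]]; then
        anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)   # value in kB
    fi
    echo "anon_hugepages=$anon"
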
[[ -e /sys/devices/system/node/node/meminfo ]] 00:04:32.051 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:32.051 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:32.051 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:32.051 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.051 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.051 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43838072 kB' 'MemAvailable: 47344740 kB' 'Buffers: 2704 kB' 'Cached: 12218324 kB' 'SwapCached: 0 kB' 'Active: 9231396 kB' 'Inactive: 3506552 kB' 'Active(anon): 8837044 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 520208 kB' 'Mapped: 168540 kB' 'Shmem: 8320124 kB' 'KReclaimable: 198772 kB' 'Slab: 570956 kB' 'SReclaimable: 198772 kB' 'SUnreclaim: 372184 kB' 'KernelStack: 12928 kB' 'PageTables: 7688 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9951408 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196176 kB' 'VmallocChunk: 0 kB' 'Percpu: 35328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1805916 kB' 'DirectMap2M: 15939584 kB' 'DirectMap1G: 51380224 kB' 00:04:32.051 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.051 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.051 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.051 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.051 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.051 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.051 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.051 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.051 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.051 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.051 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.051 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.051 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.051 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.051 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.051 07:51:23 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@31 -- # read -r var val _ 00:04:32.051 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.051 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.051 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.051 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.051 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.051 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.051 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.051 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.051 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.051 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.051 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.051 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.051 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.051 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.051 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.051 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.051 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.051 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.051 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.051 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.051 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.051 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.051 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.051 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.051 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.051 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.051 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.051 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.051 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.051 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.051 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.051 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.051 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:32.051 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.051 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.051 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.051 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.051 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.051 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.051 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.051 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.051 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.051 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.051 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.051 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.051 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.051 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.051 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.051 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.051 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.051 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.051 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.051 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.051 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.051 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.051 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.052 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.052 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.052 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.052 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.052 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.052 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.052 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.052 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.052 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.052 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.052 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.052 
07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.052 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.052 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.052 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.052 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.052 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.052 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.052 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.052 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.052 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.052 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.052 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.052 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.052 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.052 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.052 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.052 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.052 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.052 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.052 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.052 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.052 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.052 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.052 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.052 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.052 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.052 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.052 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.052 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.052 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.052 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.052 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.052 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.052 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.052 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.052 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.052 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.052 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.052 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.052 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.052 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.052 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.052 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.052 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.052 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.052 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.052 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.052 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.052 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.052 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.052 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.052 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.052 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.052 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.052 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.052 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.052 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.052 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.052 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.052 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.052 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.052 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.052 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.052 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.052 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.052 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.052 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.052 07:51:23 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.052 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.052 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.052 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.052 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.052 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.052 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.052 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.052 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.052 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.052 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.052 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.052 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.052 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.052 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.052 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.052 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.052 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.052 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.052 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.052 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.052 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.052 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.052 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.052 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.052 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.052 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.052 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.052 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.052 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.052 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.052 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.052 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.052 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:04:32.052 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.052 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.052 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.052 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.052 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.052 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.052 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.052 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.052 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.052 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.053 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.053 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.053 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.053 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.053 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.053 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.053 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.053 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.053 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.053 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.053 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:32.053 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:32.053 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:32.053 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:32.053 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:32.053 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:32.053 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:32.053 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:32.053 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:32.053 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:32.053 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:32.053 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:32.053 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:32.053 07:51:23 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:32.053 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.053 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.053 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43839404 kB' 'MemAvailable: 47346072 kB' 'Buffers: 2704 kB' 'Cached: 12218340 kB' 'SwapCached: 0 kB' 'Active: 9231228 kB' 'Inactive: 3506552 kB' 'Active(anon): 8836876 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 520004 kB' 'Mapped: 168540 kB' 'Shmem: 8320140 kB' 'KReclaimable: 198772 kB' 'Slab: 570980 kB' 'SReclaimable: 198772 kB' 'SUnreclaim: 372208 kB' 'KernelStack: 12912 kB' 'PageTables: 7660 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9951428 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196176 kB' 'VmallocChunk: 0 kB' 'Percpu: 35328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1805916 kB' 'DirectMap2M: 15939584 kB' 'DirectMap1G: 51380224 kB' 00:04:32.053 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.053 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.053 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.053 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.053 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.053 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.053 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.053 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.053 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.053 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.053 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.053 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.053 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.053 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.053 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.053 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.053 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.053 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.053 07:51:23 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.053 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.053 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.053 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.053 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.053 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.053 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.053 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.053 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.053 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.053 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.053 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.053 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.053 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.053 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.053 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.053 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.053 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.053 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.053 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.053 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.053 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.053 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.053 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.053 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.053 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.053 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.053 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.053 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.053 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.053 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.053 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.053 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.053 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:32.053 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.053 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.053 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.053 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.053 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.053 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.053 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.053 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.053 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.053 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.053 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.053 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.053 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.053 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.053 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.053 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.054 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.054 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.054 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.054 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.054 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.054 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.054 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.054 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.054 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.054 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.054 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.054 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.054 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.054 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.054 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.054 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.054 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.054 07:51:23 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:32.054 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.054 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.054 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.054 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.054 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.054 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.054 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.054 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.054 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.054 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.054 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.054 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.054 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.054 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.054 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.054 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.054 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.054 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.054 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.054 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.054 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.054 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.054 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.054 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.054 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.054 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.054 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.054 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.054 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.054 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.054 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.054 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.054 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.054 07:51:23 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:32.054 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.054 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.054 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.054 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.054 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.054 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.054 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.054 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.054 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.054 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.054 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.054 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.054 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.054 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.054 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.054 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.054 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.054 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.054 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.054 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.054 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.054 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.054 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.054 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.054 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.054 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.054 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.054 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.054 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.054 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.054 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.054 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.054 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:04:32.054 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.054 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.054 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.054 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.054 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.054 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.054 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.054 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.054 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.054 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.054 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.054 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.054 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.054 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.054 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.054 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.054 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.054 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.054 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.054 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.054 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.054 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.054 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.054 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.054 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.054 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.054 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.054 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.054 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.054 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.054 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.054 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.055 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.055 07:51:23 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:32.055 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.055 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.055 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.055 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.055 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.055 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.055 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.055 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.055 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.055 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.055 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.055 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.055 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.055 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:32.055 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:32.055 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:32.055 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:32.055 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:32.055 nr_hugepages=1024 00:04:32.055 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:32.055 resv_hugepages=0 00:04:32.055 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:32.055 surplus_hugepages=0 00:04:32.055 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:32.055 anon_hugepages=0 00:04:32.055 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:32.055 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:32.055 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:32.055 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:32.055 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:32.055 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:32.055 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:32.055 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:32.055 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:32.055 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:32.055 07:51:23 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@28 -- # mapfile -t mem 00:04:32.055 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:32.055 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.055 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.055 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541708 kB' 'MemFree: 43841224 kB' 'MemAvailable: 47347892 kB' 'Buffers: 2704 kB' 'Cached: 12218368 kB' 'SwapCached: 0 kB' 'Active: 9231168 kB' 'Inactive: 3506552 kB' 'Active(anon): 8836816 kB' 'Inactive(anon): 0 kB' 'Active(file): 394352 kB' 'Inactive(file): 3506552 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 519940 kB' 'Mapped: 168540 kB' 'Shmem: 8320168 kB' 'KReclaimable: 198772 kB' 'Slab: 570980 kB' 'SReclaimable: 198772 kB' 'SUnreclaim: 372208 kB' 'KernelStack: 12912 kB' 'PageTables: 7664 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610880 kB' 'Committed_AS: 9951452 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196176 kB' 'VmallocChunk: 0 kB' 'Percpu: 35328 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1805916 kB' 'DirectMap2M: 15939584 kB' 'DirectMap1G: 51380224 kB' 00:04:32.055 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.055 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.055 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.055 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.055 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.055 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.055 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.055 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.055 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.055 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.055 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.055 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.055 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.055 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.055 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.055 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.055 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.055 
07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.055 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.055 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.055 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.055 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.055 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.055 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.055 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.055 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.055 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.055 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.055 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.055 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.055 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.055 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.055 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.055 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.055 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.055 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.055 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.055 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.055 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.055 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.055 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.055 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.055 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.055 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.055 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.055 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.055 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:32.055 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:32.055 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:32.055 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:32.055 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': '
00:04:32.055 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
[... xtrace condensed: setup/common.sh@31-32 repeat the same IFS=': ' / read -r var val _ / [[ <key> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] / continue cycle for each remaining /proc/meminfo key: Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted ...]
00:04:32.056 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:32.056 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:04:32.056 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:32.056 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:32.056 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:32.056 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:04:32.056 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:32.056 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:32.056 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:32.056 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:04:32.057 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:32.057 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:32.057 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:32.057 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:32.057 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:32.057 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:32.057 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:04:32.057 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:32.057 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:32.057 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:32.057 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:32.057 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:32.057 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:32.057 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:32.057 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
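Every get_meminfo call traced here follows the same parsing idiom. A minimal re-sketch in plain bash, assuming the same file layout; the real setup/common.sh uses mapfile plus an extglob substitution to strip the per-node "Node <n>" prefix, and sed is used here only for brevity:

    get_meminfo() {
        local get=$1 node=${2:-} var val _
        local mem_f=/proc/meminfo
        # per-node statistics live in sysfs, one meminfo per NUMA node
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        while IFS=': ' read -r var val _; do
            # skip every key until the requested one matches
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < <(sed -E 's/^Node [0-9]+ +//' "$mem_f")
        return 1
    }
    get_meminfo HugePages_Total    # prints 1024 on this host
    get_meminfo HugePages_Surp 0   # prints 0 for node0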
00:04:32.057 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:32.057 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 25788784 kB' 'MemUsed: 7041100 kB' 'SwapCached: 0 kB' 'Active: 3663292 kB' 'Inactive: 110044 kB' 'Active(anon): 3552404 kB' 'Inactive(anon): 0 kB' 'Active(file): 110888 kB' 'Inactive(file): 110044 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3538856 kB' 'Mapped: 36904 kB' 'AnonPages: 237736 kB' 'Shmem: 3317924 kB' 'KernelStack: 7704 kB' 'PageTables: 4412 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 95916 kB' 'Slab: 319428 kB' 'SReclaimable: 95916 kB' 'SUnreclaim: 223512 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[... xtrace condensed: setup/common.sh@31-32 repeat the IFS=': ' / read -r var val _ / [[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue cycle for every node0 meminfo key listed in the printf above (MemTotal through HugePages_Free) until the key matches ...]
00:04:32.058 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:32.058 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:32.058 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:32.058 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:32.058 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:32.058 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:32.058 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:32.058 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:32.058 node0=1024 expecting 1024
00:04:32.058 07:51:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:32.058
00:04:32.058 real 0m2.772s
00:04:32.058 user 0m1.155s
00:04:32.058 sys 0m1.526s
00:04:32.058 07:51:23 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:32.058 07:51:23 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x
00:04:32.058 ************************************
00:04:32.058 END TEST no_shrink_alloc
00:04:32.058 ************************************
00:04:32.058 07:51:23 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
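The arithmetic just verified ((( 1024 == nr_hugepages + surp + resv )) and "node0=1024 expecting 1024") can be reproduced straight from sysfs, and the clear_hp teardown that opens the next block simply zeroes the same per-node pools. A hedged sketch assuming 2 MiB pages; the hugepages-2048kB directory name is an assumption, since the test itself reads HugePages_Total/HugePages_Surp from each node's meminfo:

    declare -A nodes_sys
    for node in /sys/devices/system/node/node[0-9]*; do
        # index by node number, exactly as nodes_sys[${node##*node}] does above
        nodes_sys[${node##*node}]=$(<"$node/hugepages/hugepages-2048kB/nr_hugepages")
    done
    total=0
    for n in "${!nodes_sys[@]}"; do
        echo "node$n=${nodes_sys[$n]}"
        (( total += nodes_sys[n] ))
    done
    (( total == 1024 )) && echo 'per-node pages add up to the requested 1024'
    # clear_hp equivalent: release every per-node, per-size pool
    # (the redirect target is assumed; the xtrace shows only the echo)
    for node in /sys/devices/system/node/node[0-9]*; do
        for hp in "$node"/hugepages/hugepages-*; do
            echo 0 > "$hp/nr_hugepages"
        done
    done
    export CLEAR_HUGE=yes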
00:04:32.058 07:51:23 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:04:32.058 07:51:23 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:32.058 07:51:23 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:32.058 07:51:23 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:32.058 07:51:23 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:32.058 07:51:23 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:32.058 07:51:23 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:32.316 07:51:23 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:32.316 07:51:23 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:32.316 07:51:23 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:32.316 07:51:23 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:32.316 07:51:23 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:32.316 07:51:23 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:32.316 07:51:23 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:32.316 00:04:32.316 real 0m11.280s 00:04:32.316 user 0m4.357s 00:04:32.316 sys 0m5.764s 00:04:32.316 07:51:23 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:32.316 07:51:23 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:32.316 ************************************ 00:04:32.316 END TEST hugepages 00:04:32.316 ************************************ 00:04:32.316 07:51:23 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:32.316 07:51:23 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:32.316 07:51:23 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:32.316 07:51:23 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:32.316 07:51:23 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:32.316 ************************************ 00:04:32.316 START TEST driver 00:04:32.316 ************************************ 00:04:32.316 07:51:23 setup.sh.driver -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:32.316 * Looking for test storage... 
00:04:32.316 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:04:32.316 07:51:23 setup.sh.driver -- setup/driver.sh@68 -- # setup reset
00:04:32.316 07:51:23 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]]
00:04:32.316 07:51:23 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:04:34.840 07:51:26 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver
00:04:34.840 07:51:26 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:04:34.840 07:51:26 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:34.840 07:51:26 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x
00:04:34.840 ************************************
00:04:34.840 START TEST guess_driver
00:04:34.840 ************************************
00:04:34.840 07:51:26 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver
00:04:34.840 07:51:26 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker
00:04:34.840 07:51:26 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0
00:04:34.840 07:51:26 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver
00:04:34.840 07:51:26 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio
00:04:34.840 07:51:26 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_groups
00:04:34.840 07:51:26 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio
00:04:34.840 07:51:26 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]
00:04:34.840 07:51:26 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N
00:04:34.840 07:51:26 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*)
00:04:34.840 07:51:26 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 141 > 0 ))
00:04:34.840 07:51:26 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci
00:04:34.840 07:51:26 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci
00:04:34.840 07:51:26 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci
00:04:34.840 07:51:26 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci
00:04:34.840 07:51:26 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz
00:04:34.840 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz
00:04:34.840 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz
00:04:34.840 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz
00:04:34.840 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz
00:04:34.840 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz
00:04:34.840 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz
00:04:34.840 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]]
00:04:34.840 07:51:26 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0
00:04:34.840 07:51:26 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci
00:04:34.840 07:51:26 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci
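Condensed into a standalone sketch, the pick_driver logic above chooses vfio-pci only when IOMMU groups are populated and modprobe can resolve the vfio_pci module stack, and the loop that follows re-reads `setup.sh config` output to confirm every device actually got that driver. This is a simplification of setup/driver.sh, not its exact code; the field layout of the read and $rootdir (standing in for the spdk checkout path shown in the log) are assumptions:

    pick_vfio() {
        local groups=(/sys/kernel/iommu_groups/*)
        (( ${#groups[@]} > 0 )) || return 1    # 141 groups on this host
        # a usable driver resolves to at least one .ko in its dependency chain
        modprobe --show-depends vfio_pci | grep -q '\.ko' || return 1
        echo vfio-pci
    }
    driver=$(pick_vfio) || driver='No valid driver found'
    fail=0
    while read -r _ _ _ _ marker setup_driver; do
        [[ $marker == '->' ]] || continue      # only device lines carry '->'
        [[ $setup_driver == "$driver" ]] || (( ++fail ))
    done < <("$rootdir/scripts/setup.sh" config)
    (( fail == 0 )) && echo "every device bound to $driver"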
00:04:34.840 07:51:26 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]]
00:04:34.840 07:51:26 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci'
00:04:34.840 Looking for driver=vfio-pci
00:04:34.840 07:51:26 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:04:34.840 07:51:26 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config
00:04:34.840 07:51:26 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]]
00:04:34.840 07:51:26 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
[... xtrace condensed: between 00:04:36.212 and 00:04:37.143, setup/driver.sh@57-61 repeat one [[ -> == \-\> ]] / [[ vfio-pci == vfio-pci ]] / read -r _ _ _ _ marker setup_driver cycle per device line of `setup.sh config`; every device reports the expected vfio-pci driver ...]
00:04:37.143 07:51:28 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 ))
00:04:37.143 07:51:28 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset
00:04:37.143 07:51:28 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]]
00:04:37.143 07:51:28 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:04:39.688
00:04:39.688 real 0m4.782s
00:04:39.688 user 0m1.082s
00:04:39.688 sys 0m1.816s
00:04:39.688 07:51:31 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:39.688 07:51:31 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x
00:04:39.688 ************************************
00:04:39.688 END TEST guess_driver
00:04:39.688 ************************************
00:04:39.688 07:51:31 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0
00:04:39.688
00:04:39.688 real 0m7.448s
00:04:39.688 user 0m1.670s
00:04:39.688 sys 0m2.907s
00:04:39.688 07:51:31
setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:39.688 07:51:31 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:39.688 ************************************ 00:04:39.688 END TEST driver 00:04:39.688 ************************************ 00:04:39.688 07:51:31 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:39.688 07:51:31 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:39.688 07:51:31 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:39.688 07:51:31 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:39.688 07:51:31 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:39.688 ************************************ 00:04:39.688 START TEST devices 00:04:39.688 ************************************ 00:04:39.688 07:51:31 setup.sh.devices -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:39.688 * Looking for test storage... 00:04:39.688 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:39.688 07:51:31 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:39.688 07:51:31 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:39.688 07:51:31 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:39.688 07:51:31 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:41.587 07:51:32 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:41.587 07:51:32 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:41.587 07:51:32 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:41.587 07:51:32 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:41.587 07:51:32 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:41.587 07:51:32 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:41.587 07:51:32 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:41.587 07:51:32 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:41.587 07:51:32 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:41.587 07:51:32 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:41.587 07:51:32 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:41.587 07:51:32 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:41.587 07:51:32 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:41.587 07:51:32 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:41.587 07:51:32 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:41.587 07:51:32 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:41.587 07:51:32 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:41.587 07:51:32 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:88:00.0 00:04:41.587 07:51:32 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:04:41.587 07:51:32 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:41.587 07:51:32 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:41.587 
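The device-qualification steps this test walks through (get_zoned_devs above, then block_in_use and the min_disk_size gate in the lines that follow) boil down to a few sysfs reads. A minimal sketch; the real helpers also map each block device to its PCI address and probe for an existing GPT with spdk-gpt.py, both omitted here:

    declare -A zoned_devs
    for nvme in /sys/block/nvme*; do
        dev=${nvme##*/}
        # "none" marks a conventional device; anything else is zoned
        [[ -e $nvme/queue/zoned && $(<"$nvme/queue/zoned") != none ]] && zoned_devs[$dev]=1
    done
    min_disk_size=3221225472                        # 3 GiB, as declared above
    size=$(( $(</sys/block/nvme0n1/size) * 512 ))   # sysfs size is in 512-byte sectors
    (( size >= min_disk_size )) && echo "nvme0n1 qualifies ($size bytes)"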
07:51:32 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:41.587 No valid GPT data, bailing 00:04:41.587 07:51:32 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:41.587 07:51:32 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:41.587 07:51:32 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:41.587 07:51:32 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:41.587 07:51:32 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:41.587 07:51:32 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:41.587 07:51:32 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016 00:04:41.587 07:51:32 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:04:41.587 07:51:32 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:41.587 07:51:32 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:88:00.0 00:04:41.587 07:51:32 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:41.587 07:51:32 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:41.587 07:51:32 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:41.587 07:51:32 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:41.587 07:51:32 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:41.587 07:51:32 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:41.587 ************************************ 00:04:41.587 START TEST nvme_mount 00:04:41.587 ************************************ 00:04:41.587 07:51:32 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:04:41.587 07:51:32 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:41.587 07:51:32 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:41.587 07:51:32 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:41.587 07:51:32 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:41.587 07:51:32 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:41.587 07:51:32 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:41.587 07:51:32 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:41.587 07:51:32 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:41.587 07:51:32 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:41.587 07:51:32 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:41.587 07:51:32 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:41.587 07:51:32 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:41.587 07:51:32 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:41.587 07:51:32 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:41.587 07:51:32 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:41.587 07:51:32 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- 
# (( part <= part_no ))
00:04:41.587 07:51:32 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 ))
00:04:41.587 07:51:32 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all
00:04:41.587 07:51:32 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1
00:04:42.520 Creating new GPT entries in memory.
00:04:42.520 GPT data structures destroyed! You may now partition the disk using fdisk or
00:04:42.520 other utilities.
00:04:42.520 07:51:33 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 ))
00:04:42.520 07:51:33 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no ))
00:04:42.520 07:51:33 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
00:04:42.520 07:51:33 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 ))
00:04:42.520 07:51:33 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199
00:04:43.455 Creating new GPT entries in memory.
00:04:43.455 The operation has completed successfully.
00:04:43.455 07:51:34 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ ))
00:04:43.455 07:51:34 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no ))
00:04:43.455 07:51:34 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 1814094
00:04:43.455 07:51:35 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:04:43.455 07:51:35 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=
00:04:43.455 07:51:35 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:04:43.455 07:51:35 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]]
00:04:43.455 07:51:35 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1
00:04:43.455 07:51:35 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:04:43.455 07:51:35 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:88:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:04:43.455 07:51:35 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0
00:04:43.455 07:51:35 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1
00:04:43.455 07:51:35 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:04:43.455 07:51:35 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:04:43.455 07:51:35 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0
00:04:43.455 07:51:35 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]]
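The whole partition_drive/mkfs/mount sequence traced above reduces to a handful of commands. A condensed sketch: 1 GiB is 1073741824 / 512 = 2097152 sectors, hence the 2048:2099199 range; $nvme_mount stands for the nvme_mount path in the log, and the closing comments mirror the cleanup_nvme teardown further down:

    disk=/dev/nvme0n1
    sgdisk "$disk" --zap-all                        # drop any old GPT/MBR
    flock "$disk" sgdisk "$disk" --new=1:2048:2099199
    mkdir -p "$nvme_mount"
    mkfs.ext4 -qF "${disk}p1"
    mount "${disk}p1" "$nvme_mount"
    # cleanup_nvme later reverses this:
    #   umount "$nvme_mount"; wipefs --all "${disk}p1"; wipefs --all "$disk"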
00:04:43.455 07:51:35 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # :
00:04:43.455 07:51:35 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status
00:04:43.455 07:51:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:43.455 07:51:35 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0
00:04:43.455 07:51:35 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config
00:04:43.455 07:51:35 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]]
00:04:43.455 07:51:35 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:04:44.830 07:51:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]]
00:04:44.830 07:51:36 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]]
00:04:44.830 07:51:36 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1
00:04:44.830 07:51:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
[... xtrace condensed: setup/devices.sh@62/@60 test each remaining PCI address reported by `setup.sh config` (0000:00:04.0 through 0000:00:04.7 and 0000:80:04.0 through 0000:80:04.7) against allowed device 0000:88:00.0; none match ...]
00:04:44.831 07:51:36 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 ))
00:04:44.831 07:51:36 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]]
00:04:44.831 07:51:36 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:04:44.831 07:51:36 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]]
00:04:44.831 07:51:36 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:04:44.831 07:51:36 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme
00:04:44.831 07:51:36 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:04:44.831 07:51:36 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:04:44.831 07:51:36 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]]
00:04:44.831 07:51:36 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1
00:04:44.831 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef
00:04:44.831 07:51:36 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]]
00:04:44.831 07:51:36 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1
00:04:45.090 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54
00:04:45.090 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54
00:04:45.090 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
00:04:45.090 /dev/nvme0n1: calling ioctl to re-read partition table: Success
00:04:45.090 07:51:36 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M
00:04:45.090 07:51:36 setup.sh.devices.nvme_mount --
setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:45.090 07:51:36 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:45.090 07:51:36 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:45.090 07:51:36 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:45.090 07:51:36 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:45.090 07:51:36 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:88:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:45.090 07:51:36 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:04:45.090 07:51:36 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:45.090 07:51:36 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:45.090 07:51:36 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:45.090 07:51:36 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:45.090 07:51:36 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:45.090 07:51:36 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:45.090 07:51:36 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:45.090 07:51:36 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:45.090 07:51:36 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:04:45.090 07:51:36 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:45.090 07:51:36 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:45.090 07:51:36 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:46.023 07:51:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:46.023 07:51:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:46.023 07:51:37 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:46.023 07:51:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.023 07:51:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:46.023 07:51:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.023 07:51:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:46.023 07:51:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.023 07:51:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 
== \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:46.023 07:51:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.023 07:51:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:46.023 07:51:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.023 07:51:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:46.023 07:51:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.023 07:51:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:46.024 07:51:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.024 07:51:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:46.024 07:51:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.024 07:51:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:46.024 07:51:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.024 07:51:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:46.024 07:51:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.024 07:51:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:46.024 07:51:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.024 07:51:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:46.024 07:51:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.024 07:51:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:46.024 07:51:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.024 07:51:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:46.024 07:51:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.024 07:51:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:46.024 07:51:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.024 07:51:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:46.024 07:51:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.024 07:51:37 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:46.024 07:51:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.282 07:51:37 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:46.282 07:51:37 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:46.282 07:51:37 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:46.282 07:51:37 setup.sh.devices.nvme_mount -- 
setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:46.282 07:51:37 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:46.282 07:51:37 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:46.282 07:51:37 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:88:00.0 data@nvme0n1 '' '' 00:04:46.282 07:51:37 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:04:46.282 07:51:37 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:46.282 07:51:37 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:46.283 07:51:37 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:46.283 07:51:37 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:46.283 07:51:37 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:46.283 07:51:37 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:46.283 07:51:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:46.283 07:51:37 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:04:46.283 07:51:37 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:46.283 07:51:37 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:46.283 07:51:37 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:47.656 07:51:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:47.656 07:51:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:47.656 07:51:39 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:47.656 07:51:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.656 07:51:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:47.656 07:51:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.656 07:51:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:47.656 07:51:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.656 07:51:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:47.656 07:51:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.656 07:51:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:47.656 07:51:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.656 07:51:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:47.656 07:51:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.656 07:51:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 
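The verify() scans tracing above and below all follow one pattern: devices.sh exports PCI_ALLOWED restricted to the NVMe under test, re-runs `setup.sh config`, and reads four whitespace-separated fields from every line of the status output; found=1 fires only when the allowed BDF reports the expected active mount, and every other function (the I/OAT engines) must fall through. A minimal bash sketch of that loop, assuming the same `pci _ _ status` column layout seen in the trace:

    PCI_ALLOWED=0000:88:00.0
    found=0
    while read -r pci _ _ status; do
        # only the device under test may report the active mount;
        # the I/OAT functions must not match the allowed BDF
        if [[ $pci == "$PCI_ALLOWED" && $status == *"Active devices:"* ]]; then
            found=1
        fi
    done < <(./scripts/setup.sh config)   # process substitution keeps found= in this shell
    (( found == 1 ))    # verify fails unless the mount was observed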
00:04:47.656 07:51:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.656 07:51:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:47.656 07:51:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.656 07:51:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:47.656 07:51:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.656 07:51:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:47.656 07:51:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.656 07:51:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:47.656 07:51:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.656 07:51:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:47.656 07:51:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.656 07:51:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:47.656 07:51:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.656 07:51:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:47.656 07:51:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.656 07:51:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:47.656 07:51:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.656 07:51:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:47.657 07:51:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.657 07:51:39 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:47.657 07:51:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:47.657 07:51:39 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:47.657 07:51:39 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:47.657 07:51:39 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:47.657 07:51:39 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:47.657 07:51:39 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:47.657 07:51:39 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:47.657 07:51:39 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:47.657 07:51:39 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:47.657 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:47.657 00:04:47.657 real 0m6.251s 00:04:47.657 user 0m1.395s 00:04:47.657 sys 0m2.418s 00:04:47.657 07:51:39 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:47.657 07:51:39 setup.sh.devices.nvme_mount -- 
common/autotest_common.sh@10 -- # set +x 00:04:47.657 ************************************ 00:04:47.657 END TEST nvme_mount 00:04:47.657 ************************************ 00:04:47.657 07:51:39 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:04:47.657 07:51:39 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:47.657 07:51:39 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:47.657 07:51:39 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:47.657 07:51:39 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:47.657 ************************************ 00:04:47.657 START TEST dm_mount 00:04:47.657 ************************************ 00:04:47.657 07:51:39 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:04:47.657 07:51:39 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:47.657 07:51:39 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:47.657 07:51:39 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:47.657 07:51:39 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:47.657 07:51:39 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:47.657 07:51:39 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:47.657 07:51:39 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:47.657 07:51:39 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:47.657 07:51:39 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:47.657 07:51:39 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:47.657 07:51:39 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:47.657 07:51:39 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:47.657 07:51:39 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:47.657 07:51:39 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:47.657 07:51:39 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:47.657 07:51:39 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:47.657 07:51:39 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:47.657 07:51:39 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:47.657 07:51:39 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:47.657 07:51:39 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:47.657 07:51:39 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:48.590 Creating new GPT entries in memory. 00:04:48.590 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:48.590 other utilities. 00:04:48.590 07:51:40 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:48.590 07:51:40 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:48.590 07:51:40 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:04:48.590 07:51:40 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:48.590 07:51:40 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:49.963 Creating new GPT entries in memory. 00:04:49.963 The operation has completed successfully. 00:04:49.963 07:51:41 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:49.963 07:51:41 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:49.963 07:51:41 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:49.963 07:51:41 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:49.963 07:51:41 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:50.898 The operation has completed successfully. 00:04:50.898 07:51:42 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:50.898 07:51:42 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:50.898 07:51:42 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 1816492 00:04:50.898 07:51:42 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:50.898 07:51:42 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:50.898 07:51:42 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:50.898 07:51:42 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:50.898 07:51:42 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:50.898 07:51:42 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:50.898 07:51:42 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:50.898 07:51:42 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:50.898 07:51:42 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:50.898 07:51:42 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:50.898 07:51:42 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:04:50.898 07:51:42 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:50.898 07:51:42 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:50.898 07:51:42 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:50.899 07:51:42 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:04:50.899 07:51:42 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:50.899 07:51:42 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:50.899 07:51:42 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:50.899 07:51:42 setup.sh.devices.dm_mount -- 
setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:50.899 07:51:42 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:88:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:50.899 07:51:42 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:04:50.899 07:51:42 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:50.899 07:51:42 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:50.899 07:51:42 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:50.899 07:51:42 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:50.899 07:51:42 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:50.899 07:51:42 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:04:50.899 07:51:42 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:50.899 07:51:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:50.899 07:51:42 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:04:50.899 07:51:42 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:50.899 07:51:42 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:50.899 07:51:42 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:51.832 07:51:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:51.832 07:51:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:51.832 07:51:43 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:51.832 07:51:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.832 07:51:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:51.832 07:51:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.832 07:51:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:51.832 07:51:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.832 07:51:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:51.832 07:51:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.832 07:51:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:51.832 07:51:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.832 07:51:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:51.832 07:51:43 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.832 07:51:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:51.832 07:51:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.832 07:51:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:51.832 07:51:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.832 07:51:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:51.832 07:51:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.832 07:51:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:51.832 07:51:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.832 07:51:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:51.832 07:51:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.832 07:51:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:51.832 07:51:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.832 07:51:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:51.832 07:51:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.832 07:51:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:51.832 07:51:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.832 07:51:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:51.832 07:51:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.832 07:51:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:51.832 07:51:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:51.832 07:51:43 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:51.832 07:51:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.090 07:51:43 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:52.090 07:51:43 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:52.090 07:51:43 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:52.090 07:51:43 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:52.090 07:51:43 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:52.090 07:51:43 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:52.090 07:51:43 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:88:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:52.090 07:51:43 
setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:04:52.090 07:51:43 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:52.090 07:51:43 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:52.090 07:51:43 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:52.090 07:51:43 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:52.090 07:51:43 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:52.090 07:51:43 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:52.090 07:51:43 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:52.090 07:51:43 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:04:52.090 07:51:43 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:52.090 07:51:43 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:52.090 07:51:43 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:53.023 07:51:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:53.023 07:51:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:53.023 07:51:44 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:53.023 07:51:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.023 07:51:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:53.023 07:51:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.023 07:51:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:53.023 07:51:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.023 07:51:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:53.023 07:51:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.023 07:51:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:53.023 07:51:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.023 07:51:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:53.023 07:51:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.023 07:51:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:53.023 07:51:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.023 07:51:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:53.023 07:51:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.023 07:51:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:53.024 07:51:44 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.024 07:51:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:53.024 07:51:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.024 07:51:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:53.024 07:51:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.024 07:51:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:53.024 07:51:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.024 07:51:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:53.024 07:51:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.024 07:51:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:53.024 07:51:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.024 07:51:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:53.024 07:51:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.024 07:51:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:53.024 07:51:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.024 07:51:44 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:04:53.024 07:51:44 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:53.283 07:51:44 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:53.283 07:51:44 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:53.283 07:51:44 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:53.283 07:51:44 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:53.283 07:51:44 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:53.283 07:51:44 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:53.283 07:51:44 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:53.283 07:51:44 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:53.283 07:51:44 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:53.283 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:53.283 07:51:44 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:53.283 07:51:44 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:53.283 00:04:53.283 real 0m5.653s 00:04:53.283 user 0m0.979s 00:04:53.283 sys 0m1.547s 00:04:53.283 07:51:44 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:53.283 07:51:44 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:53.283 ************************************ 00:04:53.283 END TEST dm_mount 00:04:53.283 ************************************ 00:04:53.283 07:51:44 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 
0 00:04:53.283 07:51:44 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:53.283 07:51:44 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:53.283 07:51:44 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:53.283 07:51:44 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:53.283 07:51:44 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:53.283 07:51:44 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:53.283 07:51:44 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:53.574 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:53.574 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:04:53.574 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:53.574 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:53.574 07:51:45 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:53.574 07:51:45 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:53.574 07:51:45 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:53.574 07:51:45 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:53.574 07:51:45 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:53.574 07:51:45 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:53.574 07:51:45 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:53.574 00:04:53.574 real 0m13.884s 00:04:53.574 user 0m3.075s 00:04:53.574 sys 0m5.015s 00:04:53.574 07:51:45 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:53.574 07:51:45 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:53.574 ************************************ 00:04:53.574 END TEST devices 00:04:53.574 ************************************ 00:04:53.574 07:51:45 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:53.574 00:04:53.574 real 0m43.096s 00:04:53.574 user 0m12.345s 00:04:53.574 sys 0m18.968s 00:04:53.574 07:51:45 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:53.574 07:51:45 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:53.574 ************************************ 00:04:53.574 END TEST setup.sh 00:04:53.574 ************************************ 00:04:53.574 07:51:45 -- common/autotest_common.sh@1142 -- # return 0 00:04:53.574 07:51:45 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:54.947 Hugepages 00:04:54.947 node hugesize free / total 00:04:54.947 node0 1048576kB 0 / 0 00:04:54.947 node0 2048kB 2048 / 2048 00:04:54.947 node1 1048576kB 0 / 0 00:04:54.947 node1 2048kB 0 / 0 00:04:54.947 00:04:54.947 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:54.947 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:04:54.947 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:04:54.947 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:04:54.947 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:04:54.947 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:04:54.947 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:04:54.947 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:04:54.947 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:04:54.947 I/OAT 
0000:80:04.0 8086 0e20 1 ioatdma - - 00:04:54.947 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:04:54.947 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:04:54.947 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:04:54.947 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:04:54.947 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:04:54.947 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:04:54.947 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:04:54.948 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:04:54.948 07:51:46 -- spdk/autotest.sh@130 -- # uname -s 00:04:54.948 07:51:46 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:54.948 07:51:46 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:54.948 07:51:46 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:56.320 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:56.320 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:56.320 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:56.320 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:56.320 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:56.320 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:56.320 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:56.320 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:56.320 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:56.320 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:56.321 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:56.321 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:56.321 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:56.321 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:56.321 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:56.321 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:57.254 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:04:57.254 07:51:48 -- common/autotest_common.sh@1532 -- # sleep 1 00:04:58.186 07:51:49 -- common/autotest_common.sh@1533 -- # bdfs=() 00:04:58.186 07:51:49 -- common/autotest_common.sh@1533 -- # local bdfs 00:04:58.186 07:51:49 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:04:58.186 07:51:49 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:04:58.186 07:51:49 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:58.186 07:51:49 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:58.186 07:51:49 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:58.186 07:51:49 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:58.186 07:51:49 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:58.186 07:51:49 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:04:58.186 07:51:49 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:04:58.187 07:51:49 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:59.556 Waiting for block devices as requested 00:04:59.556 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:04:59.556 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:04:59.556 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:04:59.813 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:04:59.813 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:04:59.813 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:04:59.813 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:05:00.070 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:05:00.070 0000:00:04.0 (8086 0e20): 
vfio-pci -> ioatdma 00:05:00.070 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:05:00.070 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:05:00.327 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:05:00.327 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:05:00.327 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:05:00.327 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:05:00.583 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:05:00.583 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:05:00.583 07:51:52 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:05:00.583 07:51:52 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:88:00.0 00:05:00.583 07:51:52 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:05:00.583 07:51:52 -- common/autotest_common.sh@1502 -- # grep 0000:88:00.0/nvme/nvme 00:05:00.583 07:51:52 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:05:00.583 07:51:52 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 ]] 00:05:00.583 07:51:52 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:05:00.583 07:51:52 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:05:00.583 07:51:52 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:05:00.583 07:51:52 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:05:00.583 07:51:52 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:05:00.583 07:51:52 -- common/autotest_common.sh@1545 -- # grep oacs 00:05:00.583 07:51:52 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:05:00.583 07:51:52 -- common/autotest_common.sh@1545 -- # oacs=' 0xf' 00:05:00.583 07:51:52 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:05:00.583 07:51:52 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:05:00.840 07:51:52 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:05:00.840 07:51:52 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:05:00.840 07:51:52 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:05:00.840 07:51:52 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:05:00.840 07:51:52 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:05:00.840 07:51:52 -- common/autotest_common.sh@1557 -- # continue 00:05:00.840 07:51:52 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:05:00.840 07:51:52 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:00.840 07:51:52 -- common/autotest_common.sh@10 -- # set +x 00:05:00.840 07:51:52 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:00.840 07:51:52 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:00.840 07:51:52 -- common/autotest_common.sh@10 -- # set +x 00:05:00.840 07:51:52 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:02.212 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:02.212 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:02.212 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:02.212 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:02.212 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:02.212 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:02.212 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:02.212 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:02.212 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:02.212 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 
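The namespace-revert guard traced just above reduces to two `nvme id-ctrl` queries: OACS bit 3 reports whether the controller supports namespace management at all, and UNVMCAP reports how much capacity is still unallocated, so `unvmcap == 0` means the namespace layout is intact and the revert can be skipped. A condensed sketch of the same checks, assuming nvme-cli's plain `field : value` text output as seen in the trace:

    ctrlr=/dev/nvme0
    oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)     # e.g. ' 0xf'
    if (( (oacs & 0x8) != 0 )); then                            # bit 3: namespace management
        unvmcap=$(nvme id-ctrl "$ctrlr" | grep unvmcap | cut -d: -f2)
        [[ $unvmcap -eq 0 ]] && echo "no unallocated capacity, nothing to revert"
    fi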
00:05:02.212 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:02.212 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:02.212 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:02.212 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:02.212 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:02.212 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:03.144 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:05:03.144 07:51:54 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:03.144 07:51:54 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:03.144 07:51:54 -- common/autotest_common.sh@10 -- # set +x 00:05:03.144 07:51:54 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:03.144 07:51:54 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:05:03.144 07:51:54 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:05:03.144 07:51:54 -- common/autotest_common.sh@1577 -- # bdfs=() 00:05:03.144 07:51:54 -- common/autotest_common.sh@1577 -- # local bdfs 00:05:03.144 07:51:54 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:05:03.144 07:51:54 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:03.144 07:51:54 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:03.144 07:51:54 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:03.144 07:51:54 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:03.144 07:51:54 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:03.144 07:51:54 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:05:03.144 07:51:54 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:05:03.144 07:51:54 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:05:03.144 07:51:54 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:88:00.0/device 00:05:03.144 07:51:54 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:05:03.144 07:51:54 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:03.144 07:51:54 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:05:03.144 07:51:54 -- common/autotest_common.sh@1586 -- # printf '%s\n' 0000:88:00.0 00:05:03.144 07:51:54 -- common/autotest_common.sh@1592 -- # [[ -z 0000:88:00.0 ]] 00:05:03.144 07:51:54 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=1821660 00:05:03.144 07:51:54 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:03.144 07:51:54 -- common/autotest_common.sh@1598 -- # waitforlisten 1821660 00:05:03.144 07:51:54 -- common/autotest_common.sh@829 -- # '[' -z 1821660 ']' 00:05:03.144 07:51:54 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:03.144 07:51:54 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:03.144 07:51:54 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:03.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:03.144 07:51:54 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:03.144 07:51:54 -- common/autotest_common.sh@10 -- # set +x 00:05:03.144 [2024-07-13 07:51:54.870901] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
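Condensed, the opal_revert_cleanup flow starting here is: launch spdk_tgt, wait (waitforlisten) until its JSON-RPC socket accepts requests, attach the 0x0a54 controller by PCI address, and ask the bdev layer to revert the Opal TPer with the test password. The two rpc.py invocations are the exact ones visible in the trace below; the plain `kill` stands in for the killprocess helper, and the default /var/tmp/spdk.sock socket is assumed:

    ./build/bin/spdk_tgt &
    spdk_tgt_pid=$!
    # waitforlisten polls until /var/tmp/spdk.sock answers JSON-RPC requests
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0
    ./scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test
    kill "$spdk_tgt_pid"

On this drive the revert fails with JSON-RPC error -32603 (admin SP session error 18); the suite tolerates that via the `true` guard traced below and moves on.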
00:05:03.144 [2024-07-13 07:51:54.871007] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1821660 ] 00:05:03.402 EAL: No free 2048 kB hugepages reported on node 1 00:05:03.402 [2024-07-13 07:51:54.934899] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:03.402 [2024-07-13 07:51:55.025782] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.660 07:51:55 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:03.660 07:51:55 -- common/autotest_common.sh@862 -- # return 0 00:05:03.660 07:51:55 -- common/autotest_common.sh@1600 -- # bdf_id=0 00:05:03.660 07:51:55 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}" 00:05:03.660 07:51:55 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0 00:05:06.940 nvme0n1 00:05:06.940 07:51:58 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:06.940 [2024-07-13 07:51:58.604322] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:05:06.940 [2024-07-13 07:51:58.604369] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:05:06.940 request: 00:05:06.940 { 00:05:06.940 "nvme_ctrlr_name": "nvme0", 00:05:06.940 "password": "test", 00:05:06.940 "method": "bdev_nvme_opal_revert", 00:05:06.940 "req_id": 1 00:05:06.940 } 00:05:06.941 Got JSON-RPC error response 00:05:06.941 response: 00:05:06.941 { 00:05:06.941 "code": -32603, 00:05:06.941 "message": "Internal error" 00:05:06.941 } 00:05:06.941 07:51:58 -- common/autotest_common.sh@1604 -- # true 00:05:06.941 07:51:58 -- common/autotest_common.sh@1605 -- # (( ++bdf_id )) 00:05:06.941 07:51:58 -- common/autotest_common.sh@1608 -- # killprocess 1821660 00:05:06.941 07:51:58 -- common/autotest_common.sh@948 -- # '[' -z 1821660 ']' 00:05:06.941 07:51:58 -- common/autotest_common.sh@952 -- # kill -0 1821660 00:05:06.941 07:51:58 -- common/autotest_common.sh@953 -- # uname 00:05:06.941 07:51:58 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:06.941 07:51:58 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1821660 00:05:06.941 07:51:58 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:06.941 07:51:58 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:06.941 07:51:58 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1821660' 00:05:06.941 killing process with pid 1821660 00:05:06.941 07:51:58 -- common/autotest_common.sh@967 -- # kill 1821660 00:05:06.941 07:51:58 -- common/autotest_common.sh@972 -- # wait 1821660 00:05:07.199 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:07.199 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:07.199 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:07.199 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:07.199 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:07.199 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:07.199 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:07.199 EAL: Unexpected size 0 of DMA 
00:05:09.125 07:52:00 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:09.125 07:52:00 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:09.125 07:52:00 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:09.125 07:52:00 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:09.125 07:52:00 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:09.125 07:52:00 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:09.125 07:52:00 -- common/autotest_common.sh@10 -- # set +x 00:05:09.125 07:52:00 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:05:09.125 07:52:00 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:09.125 07:52:00 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:09.125 07:52:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:09.125 07:52:00 -- common/autotest_common.sh@10 -- # set +x 00:05:09.125 ************************************ 00:05:09.125 START TEST env 00:05:09.125 ************************************ 00:05:09.125 07:52:00 env -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:09.125 * Looking for test storage... 
00:05:09.125 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:05:09.125 07:52:00 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:09.125 07:52:00 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:09.125 07:52:00 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:09.125 07:52:00 env -- common/autotest_common.sh@10 -- # set +x 00:05:09.125 ************************************ 00:05:09.125 START TEST env_memory 00:05:09.125 ************************************ 00:05:09.125 07:52:00 env.env_memory -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:09.125 00:05:09.125 00:05:09.125 CUnit - A unit testing framework for C - Version 2.1-3 00:05:09.125 http://cunit.sourceforge.net/ 00:05:09.125 00:05:09.125 00:05:09.125 Suite: memory 00:05:09.125 Test: alloc and free memory map ...[2024-07-13 07:52:00.559466] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:09.125 passed 00:05:09.125 Test: mem map translation ...[2024-07-13 07:52:00.578856] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:09.125 [2024-07-13 07:52:00.578881] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:09.125 [2024-07-13 07:52:00.578937] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:09.125 [2024-07-13 07:52:00.578948] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:09.125 passed 00:05:09.125 Test: mem map registration ...[2024-07-13 07:52:00.619407] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:09.125 [2024-07-13 07:52:00.619426] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:09.125 passed 00:05:09.125 Test: mem map adjacent registrations ...passed 00:05:09.125 00:05:09.125 Run Summary: Type Total Ran Passed Failed Inactive 00:05:09.125 suites 1 1 n/a 0 0 00:05:09.125 tests 4 4 4 0 0 00:05:09.125 asserts 152 152 152 0 n/a 00:05:09.125 00:05:09.125 Elapsed time = 0.139 seconds 00:05:09.125 00:05:09.125 real 0m0.146s 00:05:09.125 user 0m0.141s 00:05:09.125 sys 0m0.005s 00:05:09.125 07:52:00 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:09.125 07:52:00 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:09.125 ************************************ 00:05:09.125 END TEST env_memory 00:05:09.125 ************************************ 00:05:09.125 07:52:00 env -- common/autotest_common.sh@1142 -- # return 0 00:05:09.125 07:52:00 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:09.125 07:52:00 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 
00:05:09.125 07:52:00 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:09.125 07:52:00 env -- common/autotest_common.sh@10 -- # set +x 00:05:09.125 ************************************ 00:05:09.125 START TEST env_vtophys 00:05:09.125 ************************************ 00:05:09.125 07:52:00 env.env_vtophys -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:09.125 EAL: lib.eal log level changed from notice to debug 00:05:09.125 EAL: Detected lcore 0 as core 0 on socket 0 00:05:09.125 EAL: Detected lcore 1 as core 1 on socket 0 00:05:09.125 EAL: Detected lcore 2 as core 2 on socket 0 00:05:09.125 EAL: Detected lcore 3 as core 3 on socket 0 00:05:09.125 EAL: Detected lcore 4 as core 4 on socket 0 00:05:09.125 EAL: Detected lcore 5 as core 5 on socket 0 00:05:09.125 EAL: Detected lcore 6 as core 8 on socket 0 00:05:09.125 EAL: Detected lcore 7 as core 9 on socket 0 00:05:09.125 EAL: Detected lcore 8 as core 10 on socket 0 00:05:09.125 EAL: Detected lcore 9 as core 11 on socket 0 00:05:09.125 EAL: Detected lcore 10 as core 12 on socket 0 00:05:09.125 EAL: Detected lcore 11 as core 13 on socket 0 00:05:09.125 EAL: Detected lcore 12 as core 0 on socket 1 00:05:09.125 EAL: Detected lcore 13 as core 1 on socket 1 00:05:09.125 EAL: Detected lcore 14 as core 2 on socket 1 00:05:09.125 EAL: Detected lcore 15 as core 3 on socket 1 00:05:09.125 EAL: Detected lcore 16 as core 4 on socket 1 00:05:09.125 EAL: Detected lcore 17 as core 5 on socket 1 00:05:09.125 EAL: Detected lcore 18 as core 8 on socket 1 00:05:09.125 EAL: Detected lcore 19 as core 9 on socket 1 00:05:09.125 EAL: Detected lcore 20 as core 10 on socket 1 00:05:09.125 EAL: Detected lcore 21 as core 11 on socket 1 00:05:09.125 EAL: Detected lcore 22 as core 12 on socket 1 00:05:09.125 EAL: Detected lcore 23 as core 13 on socket 1 00:05:09.125 EAL: Detected lcore 24 as core 0 on socket 0 00:05:09.125 EAL: Detected lcore 25 as core 1 on socket 0 00:05:09.125 EAL: Detected lcore 26 as core 2 on socket 0 00:05:09.125 EAL: Detected lcore 27 as core 3 on socket 0 00:05:09.125 EAL: Detected lcore 28 as core 4 on socket 0 00:05:09.125 EAL: Detected lcore 29 as core 5 on socket 0 00:05:09.125 EAL: Detected lcore 30 as core 8 on socket 0 00:05:09.125 EAL: Detected lcore 31 as core 9 on socket 0 00:05:09.125 EAL: Detected lcore 32 as core 10 on socket 0 00:05:09.125 EAL: Detected lcore 33 as core 11 on socket 0 00:05:09.125 EAL: Detected lcore 34 as core 12 on socket 0 00:05:09.125 EAL: Detected lcore 35 as core 13 on socket 0 00:05:09.125 EAL: Detected lcore 36 as core 0 on socket 1 00:05:09.125 EAL: Detected lcore 37 as core 1 on socket 1 00:05:09.125 EAL: Detected lcore 38 as core 2 on socket 1 00:05:09.125 EAL: Detected lcore 39 as core 3 on socket 1 00:05:09.125 EAL: Detected lcore 40 as core 4 on socket 1 00:05:09.125 EAL: Detected lcore 41 as core 5 on socket 1 00:05:09.125 EAL: Detected lcore 42 as core 8 on socket 1 00:05:09.125 EAL: Detected lcore 43 as core 9 on socket 1 00:05:09.125 EAL: Detected lcore 44 as core 10 on socket 1 00:05:09.125 EAL: Detected lcore 45 as core 11 on socket 1 00:05:09.125 EAL: Detected lcore 46 as core 12 on socket 1 00:05:09.125 EAL: Detected lcore 47 as core 13 on socket 1 00:05:09.125 EAL: Maximum logical cores by configuration: 128 00:05:09.125 EAL: Detected CPU lcores: 48 00:05:09.125 EAL: Detected NUMA nodes: 2 00:05:09.125 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:05:09.125 EAL: Detected shared linkage of DPDK 
00:05:09.125 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23.0 00:05:09.125 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23.0 00:05:09.125 EAL: Registered [vdev] bus. 00:05:09.125 EAL: bus.vdev log level changed from disabled to notice 00:05:09.125 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23.0 00:05:09.125 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23.0 00:05:09.125 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:09.126 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:09.126 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:05:09.126 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:05:09.126 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:05:09.126 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:05:09.126 EAL: No shared files mode enabled, IPC will be disabled 00:05:09.126 EAL: No shared files mode enabled, IPC is disabled 00:05:09.126 EAL: Bus pci wants IOVA as 'DC' 00:05:09.126 EAL: Bus vdev wants IOVA as 'DC' 00:05:09.126 EAL: Buses did not request a specific IOVA mode. 00:05:09.126 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:09.126 EAL: Selected IOVA mode 'VA' 00:05:09.126 EAL: No free 2048 kB hugepages reported on node 1 00:05:09.126 EAL: Probing VFIO support... 00:05:09.126 EAL: IOMMU type 1 (Type 1) is supported 00:05:09.126 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:09.126 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:09.126 EAL: VFIO support initialized 00:05:09.126 EAL: Ask a virtual area of 0x2e000 bytes 00:05:09.126 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:09.126 EAL: Setting up physically contiguous memory... 
00:05:09.126 EAL: Setting maximum number of open files to 524288 00:05:09.126 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:09.126 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:09.126 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:09.126 EAL: Ask a virtual area of 0x61000 bytes 00:05:09.126 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:09.126 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:09.126 EAL: Ask a virtual area of 0x400000000 bytes 00:05:09.126 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:09.126 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:09.126 EAL: Ask a virtual area of 0x61000 bytes 00:05:09.126 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:09.126 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:09.126 EAL: Ask a virtual area of 0x400000000 bytes 00:05:09.126 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:09.126 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:09.126 EAL: Ask a virtual area of 0x61000 bytes 00:05:09.126 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:09.126 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:09.126 EAL: Ask a virtual area of 0x400000000 bytes 00:05:09.126 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:09.126 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:09.126 EAL: Ask a virtual area of 0x61000 bytes 00:05:09.126 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:09.126 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:09.126 EAL: Ask a virtual area of 0x400000000 bytes 00:05:09.126 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:09.126 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:09.126 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:09.126 EAL: Ask a virtual area of 0x61000 bytes 00:05:09.126 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:09.126 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:09.126 EAL: Ask a virtual area of 0x400000000 bytes 00:05:09.126 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:09.126 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:09.126 EAL: Ask a virtual area of 0x61000 bytes 00:05:09.126 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:09.126 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:09.126 EAL: Ask a virtual area of 0x400000000 bytes 00:05:09.126 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:09.126 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:09.126 EAL: Ask a virtual area of 0x61000 bytes 00:05:09.126 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:09.126 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:09.126 EAL: Ask a virtual area of 0x400000000 bytes 00:05:09.126 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:09.126 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:09.126 EAL: Ask a virtual area of 0x61000 bytes 00:05:09.126 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:09.126 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:09.126 EAL: Ask a virtual area of 0x400000000 bytes 00:05:09.126 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:05:09.126 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:09.126 EAL: Hugepages will be freed exactly as allocated. 00:05:09.126 EAL: No shared files mode enabled, IPC is disabled 00:05:09.126 EAL: No shared files mode enabled, IPC is disabled 00:05:09.126 EAL: TSC frequency is ~2700000 KHz 00:05:09.126 EAL: Main lcore 0 is ready (tid=7f9905e4fa00;cpuset=[0]) 00:05:09.126 EAL: Trying to obtain current memory policy. 00:05:09.126 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:09.126 EAL: Restoring previous memory policy: 0 00:05:09.126 EAL: request: mp_malloc_sync 00:05:09.126 EAL: No shared files mode enabled, IPC is disabled 00:05:09.126 EAL: Heap on socket 0 was expanded by 2MB 00:05:09.126 EAL: No shared files mode enabled, IPC is disabled 00:05:09.126 EAL: No shared files mode enabled, IPC is disabled 00:05:09.126 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:09.126 EAL: Mem event callback 'spdk:(nil)' registered 00:05:09.126 00:05:09.126 00:05:09.126 CUnit - A unit testing framework for C - Version 2.1-3 00:05:09.126 http://cunit.sourceforge.net/ 00:05:09.126 00:05:09.126 00:05:09.126 Suite: components_suite 00:05:09.126 Test: vtophys_malloc_test ...passed 00:05:09.126 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:09.126 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:09.126 EAL: Restoring previous memory policy: 4 00:05:09.126 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.126 EAL: request: mp_malloc_sync 00:05:09.126 EAL: No shared files mode enabled, IPC is disabled 00:05:09.126 EAL: Heap on socket 0 was expanded by 4MB 00:05:09.126 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.126 EAL: request: mp_malloc_sync 00:05:09.126 EAL: No shared files mode enabled, IPC is disabled 00:05:09.126 EAL: Heap on socket 0 was shrunk by 4MB 00:05:09.126 EAL: Trying to obtain current memory policy. 00:05:09.126 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:09.126 EAL: Restoring previous memory policy: 4 00:05:09.126 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.126 EAL: request: mp_malloc_sync 00:05:09.126 EAL: No shared files mode enabled, IPC is disabled 00:05:09.126 EAL: Heap on socket 0 was expanded by 6MB 00:05:09.126 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.126 EAL: request: mp_malloc_sync 00:05:09.126 EAL: No shared files mode enabled, IPC is disabled 00:05:09.126 EAL: Heap on socket 0 was shrunk by 6MB 00:05:09.126 EAL: Trying to obtain current memory policy. 00:05:09.126 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:09.126 EAL: Restoring previous memory policy: 4 00:05:09.126 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.126 EAL: request: mp_malloc_sync 00:05:09.126 EAL: No shared files mode enabled, IPC is disabled 00:05:09.126 EAL: Heap on socket 0 was expanded by 10MB 00:05:09.126 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.126 EAL: request: mp_malloc_sync 00:05:09.126 EAL: No shared files mode enabled, IPC is disabled 00:05:09.126 EAL: Heap on socket 0 was shrunk by 10MB 00:05:09.126 EAL: Trying to obtain current memory policy. 
00:05:09.126 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:09.126 EAL: Restoring previous memory policy: 4 00:05:09.126 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.126 EAL: request: mp_malloc_sync 00:05:09.126 EAL: No shared files mode enabled, IPC is disabled 00:05:09.126 EAL: Heap on socket 0 was expanded by 18MB 00:05:09.126 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.126 EAL: request: mp_malloc_sync 00:05:09.126 EAL: No shared files mode enabled, IPC is disabled 00:05:09.126 EAL: Heap on socket 0 was shrunk by 18MB 00:05:09.126 EAL: Trying to obtain current memory policy. 00:05:09.126 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:09.126 EAL: Restoring previous memory policy: 4 00:05:09.126 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.126 EAL: request: mp_malloc_sync 00:05:09.126 EAL: No shared files mode enabled, IPC is disabled 00:05:09.126 EAL: Heap on socket 0 was expanded by 34MB 00:05:09.126 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.126 EAL: request: mp_malloc_sync 00:05:09.126 EAL: No shared files mode enabled, IPC is disabled 00:05:09.126 EAL: Heap on socket 0 was shrunk by 34MB 00:05:09.126 EAL: Trying to obtain current memory policy. 00:05:09.126 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:09.126 EAL: Restoring previous memory policy: 4 00:05:09.126 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.126 EAL: request: mp_malloc_sync 00:05:09.126 EAL: No shared files mode enabled, IPC is disabled 00:05:09.126 EAL: Heap on socket 0 was expanded by 66MB 00:05:09.126 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.385 EAL: request: mp_malloc_sync 00:05:09.385 EAL: No shared files mode enabled, IPC is disabled 00:05:09.385 EAL: Heap on socket 0 was shrunk by 66MB 00:05:09.385 EAL: Trying to obtain current memory policy. 00:05:09.385 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:09.385 EAL: Restoring previous memory policy: 4 00:05:09.385 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.385 EAL: request: mp_malloc_sync 00:05:09.385 EAL: No shared files mode enabled, IPC is disabled 00:05:09.385 EAL: Heap on socket 0 was expanded by 130MB 00:05:09.385 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.385 EAL: request: mp_malloc_sync 00:05:09.385 EAL: No shared files mode enabled, IPC is disabled 00:05:09.385 EAL: Heap on socket 0 was shrunk by 130MB 00:05:09.385 EAL: Trying to obtain current memory policy. 00:05:09.385 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:09.385 EAL: Restoring previous memory policy: 4 00:05:09.385 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.385 EAL: request: mp_malloc_sync 00:05:09.385 EAL: No shared files mode enabled, IPC is disabled 00:05:09.385 EAL: Heap on socket 0 was expanded by 258MB 00:05:09.385 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.642 EAL: request: mp_malloc_sync 00:05:09.642 EAL: No shared files mode enabled, IPC is disabled 00:05:09.642 EAL: Heap on socket 0 was shrunk by 258MB 00:05:09.642 EAL: Trying to obtain current memory policy. 
00:05:09.642 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:09.642 EAL: Restoring previous memory policy: 4 00:05:09.642 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.642 EAL: request: mp_malloc_sync 00:05:09.642 EAL: No shared files mode enabled, IPC is disabled 00:05:09.642 EAL: Heap on socket 0 was expanded by 514MB 00:05:09.642 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.900 EAL: request: mp_malloc_sync 00:05:09.900 EAL: No shared files mode enabled, IPC is disabled 00:05:09.900 EAL: Heap on socket 0 was shrunk by 514MB 00:05:09.900 EAL: Trying to obtain current memory policy. 00:05:09.900 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:10.157 EAL: Restoring previous memory policy: 4 00:05:10.157 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.157 EAL: request: mp_malloc_sync 00:05:10.157 EAL: No shared files mode enabled, IPC is disabled 00:05:10.158 EAL: Heap on socket 0 was expanded by 1026MB 00:05:10.415 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.672 EAL: request: mp_malloc_sync 00:05:10.672 EAL: No shared files mode enabled, IPC is disabled 00:05:10.672 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:10.672 passed 00:05:10.672 00:05:10.672 Run Summary: Type Total Ran Passed Failed Inactive 00:05:10.672 suites 1 1 n/a 0 0 00:05:10.672 tests 2 2 2 0 0 00:05:10.672 asserts 497 497 497 0 n/a 00:05:10.672 00:05:10.672 Elapsed time = 1.369 seconds 00:05:10.672 EAL: Calling mem event callback 'spdk:(nil)' 00:05:10.672 EAL: request: mp_malloc_sync 00:05:10.672 EAL: No shared files mode enabled, IPC is disabled 00:05:10.672 EAL: Heap on socket 0 was shrunk by 2MB 00:05:10.672 EAL: No shared files mode enabled, IPC is disabled 00:05:10.672 EAL: No shared files mode enabled, IPC is disabled 00:05:10.672 EAL: No shared files mode enabled, IPC is disabled 00:05:10.672 00:05:10.672 real 0m1.482s 00:05:10.672 user 0m0.848s 00:05:10.672 sys 0m0.603s 00:05:10.672 07:52:02 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:10.672 07:52:02 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:10.672 ************************************ 00:05:10.672 END TEST env_vtophys 00:05:10.672 ************************************ 00:05:10.672 07:52:02 env -- common/autotest_common.sh@1142 -- # return 0 00:05:10.672 07:52:02 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:10.672 07:52:02 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:10.672 07:52:02 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:10.672 07:52:02 env -- common/autotest_common.sh@10 -- # set +x 00:05:10.672 ************************************ 00:05:10.672 START TEST env_pci 00:05:10.672 ************************************ 00:05:10.672 07:52:02 env.env_pci -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:10.672 00:05:10.672 00:05:10.672 CUnit - A unit testing framework for C - Version 2.1-3 00:05:10.672 http://cunit.sourceforge.net/ 00:05:10.672 00:05:10.672 00:05:10.672 Suite: pci 00:05:10.672 Test: pci_hook ...[2024-07-13 07:52:02.257720] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1822552 has claimed it 00:05:10.672 EAL: Cannot find device (10000:00:01.0) 00:05:10.672 EAL: Failed to attach device on primary process 00:05:10.672 passed 00:05:10.672 
00:05:10.672 Run Summary: Type Total Ran Passed Failed Inactive 00:05:10.672 suites 1 1 n/a 0 0 00:05:10.672 tests 1 1 1 0 0 00:05:10.672 asserts 25 25 25 0 n/a 00:05:10.672 00:05:10.672 Elapsed time = 0.021 seconds 00:05:10.672 00:05:10.672 real 0m0.034s 00:05:10.672 user 0m0.008s 00:05:10.672 sys 0m0.025s 00:05:10.672 07:52:02 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:10.672 07:52:02 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:10.672 ************************************ 00:05:10.672 END TEST env_pci 00:05:10.672 ************************************ 00:05:10.672 07:52:02 env -- common/autotest_common.sh@1142 -- # return 0 00:05:10.672 07:52:02 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:10.672 07:52:02 env -- env/env.sh@15 -- # uname 00:05:10.672 07:52:02 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:10.672 07:52:02 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:10.672 07:52:02 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:10.672 07:52:02 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:05:10.672 07:52:02 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:10.672 07:52:02 env -- common/autotest_common.sh@10 -- # set +x 00:05:10.672 ************************************ 00:05:10.672 START TEST env_dpdk_post_init 00:05:10.672 ************************************ 00:05:10.672 07:52:02 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:10.672 EAL: Detected CPU lcores: 48 00:05:10.672 EAL: Detected NUMA nodes: 2 00:05:10.672 EAL: Detected shared linkage of DPDK 00:05:10.672 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:10.672 EAL: Selected IOVA mode 'VA' 00:05:10.672 EAL: No free 2048 kB hugepages reported on node 1 00:05:10.672 EAL: VFIO support initialized 00:05:10.672 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:10.930 EAL: Using IOMMU type 1 (Type 1) 00:05:10.930 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:05:10.930 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:05:10.930 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:05:10.930 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:05:10.930 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:05:10.930 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:05:10.930 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:05:10.930 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:05:10.930 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:05:10.930 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:05:10.930 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:05:10.930 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:05:10.930 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:05:10.930 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:05:10.930 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 
0000:80:04.6 (socket 1) 00:05:10.930 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:05:11.862 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:88:00.0 (socket 1) 00:05:15.139 EAL: Releasing PCI mapped resource for 0000:88:00.0 00:05:15.139 EAL: Calling pci_unmap_resource for 0000:88:00.0 at 0x202001040000 00:05:15.139 Starting DPDK initialization... 00:05:15.139 Starting SPDK post initialization... 00:05:15.139 SPDK NVMe probe 00:05:15.139 Attaching to 0000:88:00.0 00:05:15.139 Attached to 0000:88:00.0 00:05:15.139 Cleaning up... 00:05:15.139 00:05:15.139 real 0m4.376s 00:05:15.139 user 0m3.250s 00:05:15.139 sys 0m0.188s 00:05:15.139 07:52:06 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:15.139 07:52:06 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:15.139 ************************************ 00:05:15.139 END TEST env_dpdk_post_init 00:05:15.139 ************************************ 00:05:15.139 07:52:06 env -- common/autotest_common.sh@1142 -- # return 0 00:05:15.139 07:52:06 env -- env/env.sh@26 -- # uname 00:05:15.139 07:52:06 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:15.139 07:52:06 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:15.139 07:52:06 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:15.139 07:52:06 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:15.139 07:52:06 env -- common/autotest_common.sh@10 -- # set +x 00:05:15.139 ************************************ 00:05:15.139 START TEST env_mem_callbacks 00:05:15.139 ************************************ 00:05:15.139 07:52:06 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:15.139 EAL: Detected CPU lcores: 48 00:05:15.139 EAL: Detected NUMA nodes: 2 00:05:15.139 EAL: Detected shared linkage of DPDK 00:05:15.139 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:15.139 EAL: Selected IOVA mode 'VA' 00:05:15.139 EAL: No free 2048 kB hugepages reported on node 1 00:05:15.139 EAL: VFIO support initialized 00:05:15.139 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:15.139 00:05:15.139 00:05:15.139 CUnit - A unit testing framework for C - Version 2.1-3 00:05:15.139 http://cunit.sourceforge.net/ 00:05:15.139 00:05:15.139 00:05:15.139 Suite: memory 00:05:15.139 Test: test ... 
00:05:15.139 register 0x200000200000 2097152 00:05:15.139 malloc 3145728 00:05:15.139 register 0x200000400000 4194304 00:05:15.139 buf 0x200000500000 len 3145728 PASSED 00:05:15.139 malloc 64 00:05:15.139 buf 0x2000004fff40 len 64 PASSED 00:05:15.139 malloc 4194304 00:05:15.139 register 0x200000800000 6291456 00:05:15.139 buf 0x200000a00000 len 4194304 PASSED 00:05:15.139 free 0x200000500000 3145728 00:05:15.139 free 0x2000004fff40 64 00:05:15.139 unregister 0x200000400000 4194304 PASSED 00:05:15.139 free 0x200000a00000 4194304 00:05:15.139 unregister 0x200000800000 6291456 PASSED 00:05:15.139 malloc 8388608 00:05:15.139 register 0x200000400000 10485760 00:05:15.139 buf 0x200000600000 len 8388608 PASSED 00:05:15.139 free 0x200000600000 8388608 00:05:15.139 unregister 0x200000400000 10485760 PASSED 00:05:15.139 passed 00:05:15.139 00:05:15.139 Run Summary: Type Total Ran Passed Failed Inactive 00:05:15.139 suites 1 1 n/a 0 0 00:05:15.139 tests 1 1 1 0 0 00:05:15.139 asserts 15 15 15 0 n/a 00:05:15.139 00:05:15.139 Elapsed time = 0.005 seconds 00:05:15.139 00:05:15.139 real 0m0.048s 00:05:15.139 user 0m0.011s 00:05:15.139 sys 0m0.037s 00:05:15.140 07:52:06 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:15.140 07:52:06 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:15.140 ************************************ 00:05:15.140 END TEST env_mem_callbacks 00:05:15.140 ************************************ 00:05:15.140 07:52:06 env -- common/autotest_common.sh@1142 -- # return 0 00:05:15.140 00:05:15.140 real 0m6.363s 00:05:15.140 user 0m4.383s 00:05:15.140 sys 0m1.026s 00:05:15.140 07:52:06 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:15.140 07:52:06 env -- common/autotest_common.sh@10 -- # set +x 00:05:15.140 ************************************ 00:05:15.140 END TEST env 00:05:15.140 ************************************ 00:05:15.140 07:52:06 -- common/autotest_common.sh@1142 -- # return 0 00:05:15.140 07:52:06 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:15.140 07:52:06 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:15.140 07:52:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:15.140 07:52:06 -- common/autotest_common.sh@10 -- # set +x 00:05:15.140 ************************************ 00:05:15.140 START TEST rpc 00:05:15.140 ************************************ 00:05:15.140 07:52:06 rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:15.398 * Looking for test storage... 00:05:15.398 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:15.398 07:52:06 rpc -- rpc/rpc.sh@65 -- # spdk_pid=1823209 00:05:15.398 07:52:06 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:15.398 07:52:06 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:15.398 07:52:06 rpc -- rpc/rpc.sh@67 -- # waitforlisten 1823209 00:05:15.398 07:52:06 rpc -- common/autotest_common.sh@829 -- # '[' -z 1823209 ']' 00:05:15.398 07:52:06 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:15.398 07:52:06 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:15.398 07:52:06 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:15.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:15.398 07:52:06 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:15.398 07:52:06 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:15.398 [2024-07-13 07:52:06.986264] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:05:15.398 [2024-07-13 07:52:06.986388] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1823209 ] 00:05:15.398 EAL: No free 2048 kB hugepages reported on node 1 00:05:15.398 [2024-07-13 07:52:07.061171] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:15.656 [2024-07-13 07:52:07.148426] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:15.656 [2024-07-13 07:52:07.148497] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1823209' to capture a snapshot of events at runtime. 00:05:15.656 [2024-07-13 07:52:07.148510] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:15.656 [2024-07-13 07:52:07.148521] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:15.656 [2024-07-13 07:52:07.148531] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1823209 for offline analysis/debug. 00:05:15.656 [2024-07-13 07:52:07.148558] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.914 07:52:07 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:15.914 07:52:07 rpc -- common/autotest_common.sh@862 -- # return 0 00:05:15.914 07:52:07 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:15.914 07:52:07 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:15.914 07:52:07 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:15.914 07:52:07 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:15.914 07:52:07 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:15.914 07:52:07 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:15.914 07:52:07 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:15.914 ************************************ 00:05:15.914 START TEST rpc_integrity 00:05:15.914 ************************************ 00:05:15.914 07:52:07 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:05:15.914 07:52:07 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:15.914 07:52:07 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:15.914 07:52:07 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:15.914 07:52:07 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:15.914 07:52:07 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # 
bdevs='[]' 00:05:15.914 07:52:07 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:15.914 07:52:07 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:15.914 07:52:07 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:15.915 07:52:07 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:15.915 07:52:07 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:15.915 07:52:07 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:15.915 07:52:07 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:15.915 07:52:07 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:15.915 07:52:07 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:15.915 07:52:07 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:15.915 07:52:07 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:15.915 07:52:07 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:15.915 { 00:05:15.915 "name": "Malloc0", 00:05:15.915 "aliases": [ 00:05:15.915 "8d5d7ab0-6aeb-4ee0-9d19-e5a57c312331" 00:05:15.915 ], 00:05:15.915 "product_name": "Malloc disk", 00:05:15.915 "block_size": 512, 00:05:15.915 "num_blocks": 16384, 00:05:15.915 "uuid": "8d5d7ab0-6aeb-4ee0-9d19-e5a57c312331", 00:05:15.915 "assigned_rate_limits": { 00:05:15.915 "rw_ios_per_sec": 0, 00:05:15.915 "rw_mbytes_per_sec": 0, 00:05:15.915 "r_mbytes_per_sec": 0, 00:05:15.915 "w_mbytes_per_sec": 0 00:05:15.915 }, 00:05:15.915 "claimed": false, 00:05:15.915 "zoned": false, 00:05:15.915 "supported_io_types": { 00:05:15.915 "read": true, 00:05:15.915 "write": true, 00:05:15.915 "unmap": true, 00:05:15.915 "flush": true, 00:05:15.915 "reset": true, 00:05:15.915 "nvme_admin": false, 00:05:15.915 "nvme_io": false, 00:05:15.915 "nvme_io_md": false, 00:05:15.915 "write_zeroes": true, 00:05:15.915 "zcopy": true, 00:05:15.915 "get_zone_info": false, 00:05:15.915 "zone_management": false, 00:05:15.915 "zone_append": false, 00:05:15.915 "compare": false, 00:05:15.915 "compare_and_write": false, 00:05:15.915 "abort": true, 00:05:15.915 "seek_hole": false, 00:05:15.915 "seek_data": false, 00:05:15.915 "copy": true, 00:05:15.915 "nvme_iov_md": false 00:05:15.915 }, 00:05:15.915 "memory_domains": [ 00:05:15.915 { 00:05:15.915 "dma_device_id": "system", 00:05:15.915 "dma_device_type": 1 00:05:15.915 }, 00:05:15.915 { 00:05:15.915 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:15.915 "dma_device_type": 2 00:05:15.915 } 00:05:15.915 ], 00:05:15.915 "driver_specific": {} 00:05:15.915 } 00:05:15.915 ]' 00:05:15.915 07:52:07 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:15.915 07:52:07 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:15.915 07:52:07 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:15.915 07:52:07 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:15.915 07:52:07 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:15.915 [2024-07-13 07:52:07.535273] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:15.915 [2024-07-13 07:52:07.535317] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:15.915 [2024-07-13 07:52:07.535348] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2006af0 00:05:15.915 [2024-07-13 07:52:07.535364] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:15.915 
[2024-07-13 07:52:07.536849] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:15.915 [2024-07-13 07:52:07.536884] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:15.915 Passthru0 00:05:15.915 07:52:07 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:15.915 07:52:07 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:15.915 07:52:07 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:15.915 07:52:07 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:15.915 07:52:07 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:15.915 07:52:07 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:15.915 { 00:05:15.915 "name": "Malloc0", 00:05:15.915 "aliases": [ 00:05:15.915 "8d5d7ab0-6aeb-4ee0-9d19-e5a57c312331" 00:05:15.915 ], 00:05:15.915 "product_name": "Malloc disk", 00:05:15.915 "block_size": 512, 00:05:15.915 "num_blocks": 16384, 00:05:15.915 "uuid": "8d5d7ab0-6aeb-4ee0-9d19-e5a57c312331", 00:05:15.915 "assigned_rate_limits": { 00:05:15.915 "rw_ios_per_sec": 0, 00:05:15.915 "rw_mbytes_per_sec": 0, 00:05:15.915 "r_mbytes_per_sec": 0, 00:05:15.915 "w_mbytes_per_sec": 0 00:05:15.915 }, 00:05:15.915 "claimed": true, 00:05:15.915 "claim_type": "exclusive_write", 00:05:15.915 "zoned": false, 00:05:15.915 "supported_io_types": { 00:05:15.915 "read": true, 00:05:15.915 "write": true, 00:05:15.915 "unmap": true, 00:05:15.915 "flush": true, 00:05:15.915 "reset": true, 00:05:15.915 "nvme_admin": false, 00:05:15.915 "nvme_io": false, 00:05:15.915 "nvme_io_md": false, 00:05:15.915 "write_zeroes": true, 00:05:15.915 "zcopy": true, 00:05:15.915 "get_zone_info": false, 00:05:15.915 "zone_management": false, 00:05:15.915 "zone_append": false, 00:05:15.915 "compare": false, 00:05:15.915 "compare_and_write": false, 00:05:15.915 "abort": true, 00:05:15.915 "seek_hole": false, 00:05:15.915 "seek_data": false, 00:05:15.915 "copy": true, 00:05:15.915 "nvme_iov_md": false 00:05:15.915 }, 00:05:15.915 "memory_domains": [ 00:05:15.915 { 00:05:15.915 "dma_device_id": "system", 00:05:15.915 "dma_device_type": 1 00:05:15.915 }, 00:05:15.915 { 00:05:15.915 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:15.915 "dma_device_type": 2 00:05:15.915 } 00:05:15.915 ], 00:05:15.915 "driver_specific": {} 00:05:15.915 }, 00:05:15.915 { 00:05:15.915 "name": "Passthru0", 00:05:15.915 "aliases": [ 00:05:15.915 "f6db9a9e-f762-5063-bb37-5ddce0278f11" 00:05:15.915 ], 00:05:15.915 "product_name": "passthru", 00:05:15.915 "block_size": 512, 00:05:15.915 "num_blocks": 16384, 00:05:15.915 "uuid": "f6db9a9e-f762-5063-bb37-5ddce0278f11", 00:05:15.915 "assigned_rate_limits": { 00:05:15.915 "rw_ios_per_sec": 0, 00:05:15.915 "rw_mbytes_per_sec": 0, 00:05:15.915 "r_mbytes_per_sec": 0, 00:05:15.915 "w_mbytes_per_sec": 0 00:05:15.915 }, 00:05:15.915 "claimed": false, 00:05:15.915 "zoned": false, 00:05:15.915 "supported_io_types": { 00:05:15.915 "read": true, 00:05:15.915 "write": true, 00:05:15.915 "unmap": true, 00:05:15.915 "flush": true, 00:05:15.915 "reset": true, 00:05:15.915 "nvme_admin": false, 00:05:15.915 "nvme_io": false, 00:05:15.915 "nvme_io_md": false, 00:05:15.915 "write_zeroes": true, 00:05:15.915 "zcopy": true, 00:05:15.915 "get_zone_info": false, 00:05:15.915 "zone_management": false, 00:05:15.915 "zone_append": false, 00:05:15.915 "compare": false, 00:05:15.915 "compare_and_write": false, 00:05:15.915 "abort": true, 00:05:15.915 "seek_hole": false, 
00:05:15.915 "seek_data": false, 00:05:15.915 "copy": true, 00:05:15.915 "nvme_iov_md": false 00:05:15.915 }, 00:05:15.915 "memory_domains": [ 00:05:15.915 { 00:05:15.915 "dma_device_id": "system", 00:05:15.915 "dma_device_type": 1 00:05:15.915 }, 00:05:15.915 { 00:05:15.915 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:15.915 "dma_device_type": 2 00:05:15.915 } 00:05:15.915 ], 00:05:15.915 "driver_specific": { 00:05:15.915 "passthru": { 00:05:15.915 "name": "Passthru0", 00:05:15.915 "base_bdev_name": "Malloc0" 00:05:15.915 } 00:05:15.915 } 00:05:15.915 } 00:05:15.915 ]' 00:05:15.915 07:52:07 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:15.915 07:52:07 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:15.915 07:52:07 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:15.915 07:52:07 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:15.915 07:52:07 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:15.915 07:52:07 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:15.915 07:52:07 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:15.915 07:52:07 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:15.915 07:52:07 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:15.915 07:52:07 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:15.915 07:52:07 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:15.915 07:52:07 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:15.915 07:52:07 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:15.915 07:52:07 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:15.915 07:52:07 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:15.915 07:52:07 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:16.173 07:52:07 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:16.173 00:05:16.173 real 0m0.228s 00:05:16.173 user 0m0.146s 00:05:16.173 sys 0m0.027s 00:05:16.173 07:52:07 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:16.173 07:52:07 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:16.173 ************************************ 00:05:16.173 END TEST rpc_integrity 00:05:16.173 ************************************ 00:05:16.173 07:52:07 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:16.173 07:52:07 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:16.173 07:52:07 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:16.173 07:52:07 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:16.173 07:52:07 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:16.173 ************************************ 00:05:16.173 START TEST rpc_plugins 00:05:16.173 ************************************ 00:05:16.173 07:52:07 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:05:16.173 07:52:07 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:16.173 07:52:07 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:16.173 07:52:07 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:16.173 07:52:07 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:16.173 07:52:07 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:16.173 07:52:07 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # 
rpc_cmd bdev_get_bdevs 00:05:16.173 07:52:07 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:16.173 07:52:07 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:16.173 07:52:07 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:16.173 07:52:07 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:16.173 { 00:05:16.174 "name": "Malloc1", 00:05:16.174 "aliases": [ 00:05:16.174 "5a60a9ca-6f74-490a-9ff2-4be62472c705" 00:05:16.174 ], 00:05:16.174 "product_name": "Malloc disk", 00:05:16.174 "block_size": 4096, 00:05:16.174 "num_blocks": 256, 00:05:16.174 "uuid": "5a60a9ca-6f74-490a-9ff2-4be62472c705", 00:05:16.174 "assigned_rate_limits": { 00:05:16.174 "rw_ios_per_sec": 0, 00:05:16.174 "rw_mbytes_per_sec": 0, 00:05:16.174 "r_mbytes_per_sec": 0, 00:05:16.174 "w_mbytes_per_sec": 0 00:05:16.174 }, 00:05:16.174 "claimed": false, 00:05:16.174 "zoned": false, 00:05:16.174 "supported_io_types": { 00:05:16.174 "read": true, 00:05:16.174 "write": true, 00:05:16.174 "unmap": true, 00:05:16.174 "flush": true, 00:05:16.174 "reset": true, 00:05:16.174 "nvme_admin": false, 00:05:16.174 "nvme_io": false, 00:05:16.174 "nvme_io_md": false, 00:05:16.174 "write_zeroes": true, 00:05:16.174 "zcopy": true, 00:05:16.174 "get_zone_info": false, 00:05:16.174 "zone_management": false, 00:05:16.174 "zone_append": false, 00:05:16.174 "compare": false, 00:05:16.174 "compare_and_write": false, 00:05:16.174 "abort": true, 00:05:16.174 "seek_hole": false, 00:05:16.174 "seek_data": false, 00:05:16.174 "copy": true, 00:05:16.174 "nvme_iov_md": false 00:05:16.174 }, 00:05:16.174 "memory_domains": [ 00:05:16.174 { 00:05:16.174 "dma_device_id": "system", 00:05:16.174 "dma_device_type": 1 00:05:16.174 }, 00:05:16.174 { 00:05:16.174 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:16.174 "dma_device_type": 2 00:05:16.174 } 00:05:16.174 ], 00:05:16.174 "driver_specific": {} 00:05:16.174 } 00:05:16.174 ]' 00:05:16.174 07:52:07 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:16.174 07:52:07 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:16.174 07:52:07 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:16.174 07:52:07 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:16.174 07:52:07 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:16.174 07:52:07 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:16.174 07:52:07 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:16.174 07:52:07 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:16.174 07:52:07 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:16.174 07:52:07 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:16.174 07:52:07 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:16.174 07:52:07 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:16.174 07:52:07 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:16.174 00:05:16.174 real 0m0.114s 00:05:16.174 user 0m0.074s 00:05:16.174 sys 0m0.010s 00:05:16.174 07:52:07 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:16.174 07:52:07 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:16.174 ************************************ 00:05:16.174 END TEST rpc_plugins 00:05:16.174 ************************************ 00:05:16.174 07:52:07 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:16.174 07:52:07 rpc -- rpc/rpc.sh@75 -- # 
run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:16.174 07:52:07 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:16.174 07:52:07 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:16.174 07:52:07 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:16.174 ************************************ 00:05:16.174 START TEST rpc_trace_cmd_test 00:05:16.174 ************************************ 00:05:16.174 07:52:07 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:05:16.174 07:52:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:16.174 07:52:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:16.174 07:52:07 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:16.174 07:52:07 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:16.174 07:52:07 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:16.174 07:52:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:16.174 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1823209", 00:05:16.174 "tpoint_group_mask": "0x8", 00:05:16.174 "iscsi_conn": { 00:05:16.174 "mask": "0x2", 00:05:16.174 "tpoint_mask": "0x0" 00:05:16.174 }, 00:05:16.174 "scsi": { 00:05:16.174 "mask": "0x4", 00:05:16.174 "tpoint_mask": "0x0" 00:05:16.174 }, 00:05:16.174 "bdev": { 00:05:16.174 "mask": "0x8", 00:05:16.174 "tpoint_mask": "0xffffffffffffffff" 00:05:16.174 }, 00:05:16.174 "nvmf_rdma": { 00:05:16.174 "mask": "0x10", 00:05:16.174 "tpoint_mask": "0x0" 00:05:16.174 }, 00:05:16.174 "nvmf_tcp": { 00:05:16.174 "mask": "0x20", 00:05:16.174 "tpoint_mask": "0x0" 00:05:16.174 }, 00:05:16.174 "ftl": { 00:05:16.174 "mask": "0x40", 00:05:16.174 "tpoint_mask": "0x0" 00:05:16.174 }, 00:05:16.174 "blobfs": { 00:05:16.174 "mask": "0x80", 00:05:16.174 "tpoint_mask": "0x0" 00:05:16.174 }, 00:05:16.174 "dsa": { 00:05:16.174 "mask": "0x200", 00:05:16.174 "tpoint_mask": "0x0" 00:05:16.174 }, 00:05:16.174 "thread": { 00:05:16.174 "mask": "0x400", 00:05:16.174 "tpoint_mask": "0x0" 00:05:16.174 }, 00:05:16.174 "nvme_pcie": { 00:05:16.174 "mask": "0x800", 00:05:16.174 "tpoint_mask": "0x0" 00:05:16.174 }, 00:05:16.174 "iaa": { 00:05:16.174 "mask": "0x1000", 00:05:16.174 "tpoint_mask": "0x0" 00:05:16.174 }, 00:05:16.174 "nvme_tcp": { 00:05:16.174 "mask": "0x2000", 00:05:16.174 "tpoint_mask": "0x0" 00:05:16.174 }, 00:05:16.174 "bdev_nvme": { 00:05:16.174 "mask": "0x4000", 00:05:16.174 "tpoint_mask": "0x0" 00:05:16.174 }, 00:05:16.174 "sock": { 00:05:16.174 "mask": "0x8000", 00:05:16.174 "tpoint_mask": "0x0" 00:05:16.174 } 00:05:16.174 }' 00:05:16.174 07:52:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:16.431 07:52:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:16.432 07:52:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:16.432 07:52:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:16.432 07:52:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:16.432 07:52:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:16.432 07:52:07 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:16.432 07:52:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:16.432 07:52:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:16.432 07:52:08 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 
00:05:16.432 00:05:16.432 real 0m0.194s 00:05:16.432 user 0m0.178s 00:05:16.432 sys 0m0.011s 00:05:16.432 07:52:08 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:16.432 07:52:08 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:16.432 ************************************ 00:05:16.432 END TEST rpc_trace_cmd_test 00:05:16.432 ************************************ 00:05:16.432 07:52:08 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:16.432 07:52:08 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:16.432 07:52:08 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:16.432 07:52:08 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:16.432 07:52:08 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:16.432 07:52:08 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:16.432 07:52:08 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:16.432 ************************************ 00:05:16.432 START TEST rpc_daemon_integrity 00:05:16.432 ************************************ 00:05:16.432 07:52:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:05:16.432 07:52:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:16.432 07:52:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:16.432 07:52:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:16.432 07:52:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:16.432 07:52:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:16.432 07:52:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:16.432 07:52:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:16.432 07:52:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:16.432 07:52:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:16.432 07:52:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:16.432 07:52:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:16.432 07:52:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:16.432 07:52:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:16.432 07:52:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:16.432 07:52:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:16.690 07:52:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:16.690 07:52:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:16.690 { 00:05:16.690 "name": "Malloc2", 00:05:16.690 "aliases": [ 00:05:16.690 "94519146-c493-4120-a2fe-064c84484980" 00:05:16.690 ], 00:05:16.690 "product_name": "Malloc disk", 00:05:16.690 "block_size": 512, 00:05:16.690 "num_blocks": 16384, 00:05:16.690 "uuid": "94519146-c493-4120-a2fe-064c84484980", 00:05:16.690 "assigned_rate_limits": { 00:05:16.690 "rw_ios_per_sec": 0, 00:05:16.690 "rw_mbytes_per_sec": 0, 00:05:16.690 "r_mbytes_per_sec": 0, 00:05:16.690 "w_mbytes_per_sec": 0 00:05:16.690 }, 00:05:16.690 "claimed": false, 00:05:16.690 "zoned": false, 00:05:16.690 "supported_io_types": { 00:05:16.690 "read": true, 00:05:16.690 "write": true, 00:05:16.690 "unmap": true, 00:05:16.690 "flush": true, 00:05:16.690 "reset": true, 00:05:16.690 "nvme_admin": false, 00:05:16.690 "nvme_io": false, 
00:05:16.690 "nvme_io_md": false, 00:05:16.690 "write_zeroes": true, 00:05:16.690 "zcopy": true, 00:05:16.690 "get_zone_info": false, 00:05:16.690 "zone_management": false, 00:05:16.690 "zone_append": false, 00:05:16.690 "compare": false, 00:05:16.690 "compare_and_write": false, 00:05:16.690 "abort": true, 00:05:16.690 "seek_hole": false, 00:05:16.690 "seek_data": false, 00:05:16.690 "copy": true, 00:05:16.690 "nvme_iov_md": false 00:05:16.690 }, 00:05:16.690 "memory_domains": [ 00:05:16.690 { 00:05:16.690 "dma_device_id": "system", 00:05:16.690 "dma_device_type": 1 00:05:16.690 }, 00:05:16.690 { 00:05:16.690 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:16.690 "dma_device_type": 2 00:05:16.690 } 00:05:16.690 ], 00:05:16.690 "driver_specific": {} 00:05:16.690 } 00:05:16.690 ]' 00:05:16.690 07:52:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:16.690 07:52:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:16.690 07:52:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:16.690 07:52:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:16.690 07:52:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:16.690 [2024-07-13 07:52:08.209557] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:16.690 [2024-07-13 07:52:08.209602] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:16.690 [2024-07-13 07:52:08.209627] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1e56290 00:05:16.690 [2024-07-13 07:52:08.209642] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:16.690 [2024-07-13 07:52:08.211006] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:16.690 [2024-07-13 07:52:08.211031] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:16.690 Passthru0 00:05:16.690 07:52:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:16.690 07:52:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:16.690 07:52:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:16.690 07:52:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:16.690 07:52:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:16.690 07:52:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:16.690 { 00:05:16.690 "name": "Malloc2", 00:05:16.690 "aliases": [ 00:05:16.690 "94519146-c493-4120-a2fe-064c84484980" 00:05:16.690 ], 00:05:16.690 "product_name": "Malloc disk", 00:05:16.690 "block_size": 512, 00:05:16.690 "num_blocks": 16384, 00:05:16.690 "uuid": "94519146-c493-4120-a2fe-064c84484980", 00:05:16.690 "assigned_rate_limits": { 00:05:16.690 "rw_ios_per_sec": 0, 00:05:16.690 "rw_mbytes_per_sec": 0, 00:05:16.690 "r_mbytes_per_sec": 0, 00:05:16.690 "w_mbytes_per_sec": 0 00:05:16.690 }, 00:05:16.690 "claimed": true, 00:05:16.690 "claim_type": "exclusive_write", 00:05:16.690 "zoned": false, 00:05:16.690 "supported_io_types": { 00:05:16.690 "read": true, 00:05:16.690 "write": true, 00:05:16.690 "unmap": true, 00:05:16.690 "flush": true, 00:05:16.690 "reset": true, 00:05:16.690 "nvme_admin": false, 00:05:16.690 "nvme_io": false, 00:05:16.690 "nvme_io_md": false, 00:05:16.690 "write_zeroes": true, 00:05:16.690 "zcopy": true, 00:05:16.690 "get_zone_info": 
false, 00:05:16.690 "zone_management": false, 00:05:16.690 "zone_append": false, 00:05:16.690 "compare": false, 00:05:16.690 "compare_and_write": false, 00:05:16.690 "abort": true, 00:05:16.691 "seek_hole": false, 00:05:16.691 "seek_data": false, 00:05:16.691 "copy": true, 00:05:16.691 "nvme_iov_md": false 00:05:16.691 }, 00:05:16.691 "memory_domains": [ 00:05:16.691 { 00:05:16.691 "dma_device_id": "system", 00:05:16.691 "dma_device_type": 1 00:05:16.691 }, 00:05:16.691 { 00:05:16.691 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:16.691 "dma_device_type": 2 00:05:16.691 } 00:05:16.691 ], 00:05:16.691 "driver_specific": {} 00:05:16.691 }, 00:05:16.691 { 00:05:16.691 "name": "Passthru0", 00:05:16.691 "aliases": [ 00:05:16.691 "7ddea92c-3a17-53fa-9f0a-78aded35d8e3" 00:05:16.691 ], 00:05:16.691 "product_name": "passthru", 00:05:16.691 "block_size": 512, 00:05:16.691 "num_blocks": 16384, 00:05:16.691 "uuid": "7ddea92c-3a17-53fa-9f0a-78aded35d8e3", 00:05:16.691 "assigned_rate_limits": { 00:05:16.691 "rw_ios_per_sec": 0, 00:05:16.691 "rw_mbytes_per_sec": 0, 00:05:16.691 "r_mbytes_per_sec": 0, 00:05:16.691 "w_mbytes_per_sec": 0 00:05:16.691 }, 00:05:16.691 "claimed": false, 00:05:16.691 "zoned": false, 00:05:16.691 "supported_io_types": { 00:05:16.691 "read": true, 00:05:16.691 "write": true, 00:05:16.691 "unmap": true, 00:05:16.691 "flush": true, 00:05:16.691 "reset": true, 00:05:16.691 "nvme_admin": false, 00:05:16.691 "nvme_io": false, 00:05:16.691 "nvme_io_md": false, 00:05:16.691 "write_zeroes": true, 00:05:16.691 "zcopy": true, 00:05:16.691 "get_zone_info": false, 00:05:16.691 "zone_management": false, 00:05:16.691 "zone_append": false, 00:05:16.691 "compare": false, 00:05:16.691 "compare_and_write": false, 00:05:16.691 "abort": true, 00:05:16.691 "seek_hole": false, 00:05:16.691 "seek_data": false, 00:05:16.691 "copy": true, 00:05:16.691 "nvme_iov_md": false 00:05:16.691 }, 00:05:16.691 "memory_domains": [ 00:05:16.691 { 00:05:16.691 "dma_device_id": "system", 00:05:16.691 "dma_device_type": 1 00:05:16.691 }, 00:05:16.691 { 00:05:16.691 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:16.691 "dma_device_type": 2 00:05:16.691 } 00:05:16.691 ], 00:05:16.691 "driver_specific": { 00:05:16.691 "passthru": { 00:05:16.691 "name": "Passthru0", 00:05:16.691 "base_bdev_name": "Malloc2" 00:05:16.691 } 00:05:16.691 } 00:05:16.691 } 00:05:16.691 ]' 00:05:16.691 07:52:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:16.691 07:52:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:16.691 07:52:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:16.691 07:52:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:16.691 07:52:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:16.691 07:52:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:16.691 07:52:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:16.691 07:52:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:16.691 07:52:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:16.691 07:52:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:16.691 07:52:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:16.691 07:52:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:16.691 07:52:08 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:16.691 07:52:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:16.691 07:52:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:16.691 07:52:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:16.691 07:52:08 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:16.691 00:05:16.691 real 0m0.225s 00:05:16.691 user 0m0.148s 00:05:16.691 sys 0m0.023s 00:05:16.691 07:52:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:16.691 07:52:08 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:16.691 ************************************ 00:05:16.691 END TEST rpc_daemon_integrity 00:05:16.691 ************************************ 00:05:16.691 07:52:08 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:16.691 07:52:08 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:16.691 07:52:08 rpc -- rpc/rpc.sh@84 -- # killprocess 1823209 00:05:16.691 07:52:08 rpc -- common/autotest_common.sh@948 -- # '[' -z 1823209 ']' 00:05:16.691 07:52:08 rpc -- common/autotest_common.sh@952 -- # kill -0 1823209 00:05:16.691 07:52:08 rpc -- common/autotest_common.sh@953 -- # uname 00:05:16.691 07:52:08 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:16.691 07:52:08 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1823209 00:05:16.691 07:52:08 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:16.691 07:52:08 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:16.691 07:52:08 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1823209' 00:05:16.691 killing process with pid 1823209 00:05:16.691 07:52:08 rpc -- common/autotest_common.sh@967 -- # kill 1823209 00:05:16.691 07:52:08 rpc -- common/autotest_common.sh@972 -- # wait 1823209 00:05:17.256 00:05:17.256 real 0m1.918s 00:05:17.256 user 0m2.385s 00:05:17.256 sys 0m0.610s 00:05:17.256 07:52:08 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:17.256 07:52:08 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.256 ************************************ 00:05:17.256 END TEST rpc 00:05:17.256 ************************************ 00:05:17.256 07:52:08 -- common/autotest_common.sh@1142 -- # return 0 00:05:17.256 07:52:08 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:17.256 07:52:08 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:17.256 07:52:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:17.256 07:52:08 -- common/autotest_common.sh@10 -- # set +x 00:05:17.256 ************************************ 00:05:17.256 START TEST skip_rpc 00:05:17.256 ************************************ 00:05:17.256 07:52:08 skip_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:17.256 * Looking for test storage... 
00:05:17.256 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:17.256 07:52:08 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:17.256 07:52:08 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:17.256 07:52:08 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:17.256 07:52:08 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:17.256 07:52:08 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:17.256 07:52:08 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.256 ************************************ 00:05:17.256 START TEST skip_rpc 00:05:17.256 ************************************ 00:05:17.256 07:52:08 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:05:17.256 07:52:08 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1823645 00:05:17.256 07:52:08 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:17.256 07:52:08 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:17.256 07:52:08 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:17.256 [2024-07-13 07:52:08.958224] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:05:17.256 [2024-07-13 07:52:08.958302] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1823645 ] 00:05:17.256 EAL: No free 2048 kB hugepages reported on node 1 00:05:17.514 [2024-07-13 07:52:09.016067] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.514 [2024-07-13 07:52:09.103966] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.771 07:52:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:22.772 07:52:13 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:22.772 07:52:13 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:22.772 07:52:13 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:22.772 07:52:13 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:22.772 07:52:13 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:22.772 07:52:13 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:22.772 07:52:13 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:05:22.772 07:52:13 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:22.772 07:52:13 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:22.772 07:52:13 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:22.772 07:52:13 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:22.772 07:52:13 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:22.772 07:52:13 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:22.772 07:52:13 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:22.772 07:52:13 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:22.772 07:52:13 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 1823645 00:05:22.772 07:52:13 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 1823645 ']' 00:05:22.772 07:52:13 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 1823645 00:05:22.772 07:52:13 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:05:22.772 07:52:13 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:22.772 07:52:13 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1823645 00:05:22.772 07:52:13 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:22.772 07:52:13 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:22.772 07:52:13 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1823645' 00:05:22.772 killing process with pid 1823645 00:05:22.772 07:52:13 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 1823645 00:05:22.772 07:52:13 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 1823645 00:05:22.772 00:05:22.772 real 0m5.446s 00:05:22.772 user 0m5.124s 00:05:22.772 sys 0m0.330s 00:05:22.772 07:52:14 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:22.772 07:52:14 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:22.772 ************************************ 00:05:22.772 END TEST skip_rpc 00:05:22.772 ************************************ 00:05:22.772 07:52:14 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:22.772 07:52:14 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:22.772 07:52:14 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:22.772 07:52:14 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:22.772 07:52:14 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:22.772 ************************************ 00:05:22.772 START TEST skip_rpc_with_json 00:05:22.772 ************************************ 00:05:22.772 07:52:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:05:22.772 07:52:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:22.772 07:52:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1824331 00:05:22.772 07:52:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:22.772 07:52:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:22.772 07:52:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1824331 00:05:22.772 07:52:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 1824331 ']' 00:05:22.772 07:52:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:22.772 07:52:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:22.772 07:52:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:22.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:22.772 07:52:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:22.772 07:52:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:22.772 [2024-07-13 07:52:14.455693] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:05:22.772 [2024-07-13 07:52:14.455785] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1824331 ] 00:05:22.772 EAL: No free 2048 kB hugepages reported on node 1 00:05:23.030 [2024-07-13 07:52:14.520227] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.030 [2024-07-13 07:52:14.610413] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.287 07:52:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:23.287 07:52:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:05:23.287 07:52:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:23.287 07:52:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:23.287 07:52:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:23.287 [2024-07-13 07:52:14.873668] nvmf_rpc.c:2562:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:23.287 request: 00:05:23.287 { 00:05:23.287 "trtype": "tcp", 00:05:23.287 "method": "nvmf_get_transports", 00:05:23.287 "req_id": 1 00:05:23.287 } 00:05:23.287 Got JSON-RPC error response 00:05:23.287 response: 00:05:23.287 { 00:05:23.287 "code": -19, 00:05:23.287 "message": "No such device" 00:05:23.287 } 00:05:23.287 07:52:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:23.287 07:52:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:23.287 07:52:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:23.287 07:52:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:23.287 [2024-07-13 07:52:14.881798] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:23.287 07:52:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:23.287 07:52:14 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:23.287 07:52:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:23.287 07:52:14 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:23.546 07:52:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:23.546 07:52:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:23.546 { 00:05:23.546 "subsystems": [ 00:05:23.546 { 00:05:23.546 "subsystem": "vfio_user_target", 00:05:23.546 "config": null 00:05:23.546 }, 00:05:23.546 { 00:05:23.546 "subsystem": "keyring", 00:05:23.546 "config": [] 00:05:23.546 }, 00:05:23.546 { 00:05:23.546 "subsystem": "iobuf", 00:05:23.546 "config": [ 00:05:23.546 { 00:05:23.546 "method": "iobuf_set_options", 00:05:23.546 "params": { 00:05:23.546 "small_pool_count": 8192, 00:05:23.546 "large_pool_count": 1024, 00:05:23.546 "small_bufsize": 8192, 00:05:23.546 "large_bufsize": 
135168 00:05:23.546 } 00:05:23.546 } 00:05:23.546 ] 00:05:23.546 }, 00:05:23.546 { 00:05:23.546 "subsystem": "sock", 00:05:23.546 "config": [ 00:05:23.546 { 00:05:23.546 "method": "sock_set_default_impl", 00:05:23.546 "params": { 00:05:23.546 "impl_name": "posix" 00:05:23.546 } 00:05:23.546 }, 00:05:23.546 { 00:05:23.546 "method": "sock_impl_set_options", 00:05:23.546 "params": { 00:05:23.546 "impl_name": "ssl", 00:05:23.546 "recv_buf_size": 4096, 00:05:23.546 "send_buf_size": 4096, 00:05:23.546 "enable_recv_pipe": true, 00:05:23.546 "enable_quickack": false, 00:05:23.546 "enable_placement_id": 0, 00:05:23.546 "enable_zerocopy_send_server": true, 00:05:23.546 "enable_zerocopy_send_client": false, 00:05:23.546 "zerocopy_threshold": 0, 00:05:23.546 "tls_version": 0, 00:05:23.546 "enable_ktls": false 00:05:23.546 } 00:05:23.546 }, 00:05:23.546 { 00:05:23.546 "method": "sock_impl_set_options", 00:05:23.546 "params": { 00:05:23.546 "impl_name": "posix", 00:05:23.546 "recv_buf_size": 2097152, 00:05:23.546 "send_buf_size": 2097152, 00:05:23.546 "enable_recv_pipe": true, 00:05:23.546 "enable_quickack": false, 00:05:23.546 "enable_placement_id": 0, 00:05:23.546 "enable_zerocopy_send_server": true, 00:05:23.546 "enable_zerocopy_send_client": false, 00:05:23.546 "zerocopy_threshold": 0, 00:05:23.546 "tls_version": 0, 00:05:23.546 "enable_ktls": false 00:05:23.546 } 00:05:23.546 } 00:05:23.546 ] 00:05:23.546 }, 00:05:23.546 { 00:05:23.546 "subsystem": "vmd", 00:05:23.546 "config": [] 00:05:23.546 }, 00:05:23.546 { 00:05:23.546 "subsystem": "accel", 00:05:23.546 "config": [ 00:05:23.546 { 00:05:23.546 "method": "accel_set_options", 00:05:23.546 "params": { 00:05:23.546 "small_cache_size": 128, 00:05:23.546 "large_cache_size": 16, 00:05:23.546 "task_count": 2048, 00:05:23.546 "sequence_count": 2048, 00:05:23.546 "buf_count": 2048 00:05:23.546 } 00:05:23.546 } 00:05:23.546 ] 00:05:23.546 }, 00:05:23.546 { 00:05:23.546 "subsystem": "bdev", 00:05:23.546 "config": [ 00:05:23.546 { 00:05:23.546 "method": "bdev_set_options", 00:05:23.546 "params": { 00:05:23.546 "bdev_io_pool_size": 65535, 00:05:23.546 "bdev_io_cache_size": 256, 00:05:23.546 "bdev_auto_examine": true, 00:05:23.546 "iobuf_small_cache_size": 128, 00:05:23.546 "iobuf_large_cache_size": 16 00:05:23.546 } 00:05:23.546 }, 00:05:23.546 { 00:05:23.546 "method": "bdev_raid_set_options", 00:05:23.546 "params": { 00:05:23.546 "process_window_size_kb": 1024 00:05:23.546 } 00:05:23.546 }, 00:05:23.546 { 00:05:23.546 "method": "bdev_iscsi_set_options", 00:05:23.546 "params": { 00:05:23.546 "timeout_sec": 30 00:05:23.546 } 00:05:23.546 }, 00:05:23.546 { 00:05:23.546 "method": "bdev_nvme_set_options", 00:05:23.546 "params": { 00:05:23.546 "action_on_timeout": "none", 00:05:23.546 "timeout_us": 0, 00:05:23.546 "timeout_admin_us": 0, 00:05:23.546 "keep_alive_timeout_ms": 10000, 00:05:23.546 "arbitration_burst": 0, 00:05:23.546 "low_priority_weight": 0, 00:05:23.546 "medium_priority_weight": 0, 00:05:23.546 "high_priority_weight": 0, 00:05:23.546 "nvme_adminq_poll_period_us": 10000, 00:05:23.546 "nvme_ioq_poll_period_us": 0, 00:05:23.546 "io_queue_requests": 0, 00:05:23.546 "delay_cmd_submit": true, 00:05:23.546 "transport_retry_count": 4, 00:05:23.546 "bdev_retry_count": 3, 00:05:23.546 "transport_ack_timeout": 0, 00:05:23.546 "ctrlr_loss_timeout_sec": 0, 00:05:23.546 "reconnect_delay_sec": 0, 00:05:23.546 "fast_io_fail_timeout_sec": 0, 00:05:23.546 "disable_auto_failback": false, 00:05:23.546 "generate_uuids": false, 00:05:23.546 "transport_tos": 0, 
00:05:23.546 "nvme_error_stat": false, 00:05:23.546 "rdma_srq_size": 0, 00:05:23.546 "io_path_stat": false, 00:05:23.546 "allow_accel_sequence": false, 00:05:23.546 "rdma_max_cq_size": 0, 00:05:23.546 "rdma_cm_event_timeout_ms": 0, 00:05:23.546 "dhchap_digests": [ 00:05:23.546 "sha256", 00:05:23.546 "sha384", 00:05:23.546 "sha512" 00:05:23.546 ], 00:05:23.546 "dhchap_dhgroups": [ 00:05:23.546 "null", 00:05:23.546 "ffdhe2048", 00:05:23.546 "ffdhe3072", 00:05:23.546 "ffdhe4096", 00:05:23.546 "ffdhe6144", 00:05:23.546 "ffdhe8192" 00:05:23.546 ] 00:05:23.546 } 00:05:23.546 }, 00:05:23.546 { 00:05:23.546 "method": "bdev_nvme_set_hotplug", 00:05:23.546 "params": { 00:05:23.546 "period_us": 100000, 00:05:23.546 "enable": false 00:05:23.546 } 00:05:23.546 }, 00:05:23.546 { 00:05:23.546 "method": "bdev_wait_for_examine" 00:05:23.546 } 00:05:23.546 ] 00:05:23.546 }, 00:05:23.546 { 00:05:23.546 "subsystem": "scsi", 00:05:23.546 "config": null 00:05:23.546 }, 00:05:23.546 { 00:05:23.546 "subsystem": "scheduler", 00:05:23.546 "config": [ 00:05:23.546 { 00:05:23.546 "method": "framework_set_scheduler", 00:05:23.546 "params": { 00:05:23.546 "name": "static" 00:05:23.546 } 00:05:23.546 } 00:05:23.546 ] 00:05:23.546 }, 00:05:23.546 { 00:05:23.546 "subsystem": "vhost_scsi", 00:05:23.546 "config": [] 00:05:23.546 }, 00:05:23.546 { 00:05:23.546 "subsystem": "vhost_blk", 00:05:23.546 "config": [] 00:05:23.546 }, 00:05:23.546 { 00:05:23.546 "subsystem": "ublk", 00:05:23.546 "config": [] 00:05:23.546 }, 00:05:23.546 { 00:05:23.546 "subsystem": "nbd", 00:05:23.546 "config": [] 00:05:23.546 }, 00:05:23.546 { 00:05:23.546 "subsystem": "nvmf", 00:05:23.546 "config": [ 00:05:23.546 { 00:05:23.546 "method": "nvmf_set_config", 00:05:23.546 "params": { 00:05:23.546 "discovery_filter": "match_any", 00:05:23.546 "admin_cmd_passthru": { 00:05:23.546 "identify_ctrlr": false 00:05:23.546 } 00:05:23.546 } 00:05:23.546 }, 00:05:23.546 { 00:05:23.546 "method": "nvmf_set_max_subsystems", 00:05:23.546 "params": { 00:05:23.546 "max_subsystems": 1024 00:05:23.546 } 00:05:23.546 }, 00:05:23.546 { 00:05:23.546 "method": "nvmf_set_crdt", 00:05:23.546 "params": { 00:05:23.546 "crdt1": 0, 00:05:23.546 "crdt2": 0, 00:05:23.546 "crdt3": 0 00:05:23.546 } 00:05:23.546 }, 00:05:23.546 { 00:05:23.546 "method": "nvmf_create_transport", 00:05:23.546 "params": { 00:05:23.546 "trtype": "TCP", 00:05:23.546 "max_queue_depth": 128, 00:05:23.546 "max_io_qpairs_per_ctrlr": 127, 00:05:23.546 "in_capsule_data_size": 4096, 00:05:23.546 "max_io_size": 131072, 00:05:23.546 "io_unit_size": 131072, 00:05:23.546 "max_aq_depth": 128, 00:05:23.546 "num_shared_buffers": 511, 00:05:23.546 "buf_cache_size": 4294967295, 00:05:23.546 "dif_insert_or_strip": false, 00:05:23.546 "zcopy": false, 00:05:23.546 "c2h_success": true, 00:05:23.546 "sock_priority": 0, 00:05:23.546 "abort_timeout_sec": 1, 00:05:23.546 "ack_timeout": 0, 00:05:23.546 "data_wr_pool_size": 0 00:05:23.546 } 00:05:23.546 } 00:05:23.547 ] 00:05:23.547 }, 00:05:23.547 { 00:05:23.547 "subsystem": "iscsi", 00:05:23.547 "config": [ 00:05:23.547 { 00:05:23.547 "method": "iscsi_set_options", 00:05:23.547 "params": { 00:05:23.547 "node_base": "iqn.2016-06.io.spdk", 00:05:23.547 "max_sessions": 128, 00:05:23.547 "max_connections_per_session": 2, 00:05:23.547 "max_queue_depth": 64, 00:05:23.547 "default_time2wait": 2, 00:05:23.547 "default_time2retain": 20, 00:05:23.547 "first_burst_length": 8192, 00:05:23.547 "immediate_data": true, 00:05:23.547 "allow_duplicated_isid": false, 00:05:23.547 
"error_recovery_level": 0, 00:05:23.547 "nop_timeout": 60, 00:05:23.547 "nop_in_interval": 30, 00:05:23.547 "disable_chap": false, 00:05:23.547 "require_chap": false, 00:05:23.547 "mutual_chap": false, 00:05:23.547 "chap_group": 0, 00:05:23.547 "max_large_datain_per_connection": 64, 00:05:23.547 "max_r2t_per_connection": 4, 00:05:23.547 "pdu_pool_size": 36864, 00:05:23.547 "immediate_data_pool_size": 16384, 00:05:23.547 "data_out_pool_size": 2048 00:05:23.547 } 00:05:23.547 } 00:05:23.547 ] 00:05:23.547 } 00:05:23.547 ] 00:05:23.547 } 00:05:23.547 07:52:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:23.547 07:52:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1824331 00:05:23.547 07:52:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 1824331 ']' 00:05:23.547 07:52:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 1824331 00:05:23.547 07:52:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:23.547 07:52:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:23.547 07:52:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1824331 00:05:23.547 07:52:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:23.547 07:52:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:23.547 07:52:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1824331' 00:05:23.547 killing process with pid 1824331 00:05:23.547 07:52:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 1824331 00:05:23.547 07:52:15 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 1824331 00:05:23.805 07:52:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1824471 00:05:23.805 07:52:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:23.805 07:52:15 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:29.090 07:52:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 1824471 00:05:29.090 07:52:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 1824471 ']' 00:05:29.090 07:52:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 1824471 00:05:29.090 07:52:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:29.090 07:52:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:29.090 07:52:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1824471 00:05:29.090 07:52:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:29.090 07:52:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:29.090 07:52:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1824471' 00:05:29.090 killing process with pid 1824471 00:05:29.090 07:52:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 1824471 00:05:29.090 07:52:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 1824471 
00:05:29.348 07:52:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:29.348 07:52:20 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:29.348 00:05:29.348 real 0m6.502s 00:05:29.348 user 0m6.071s 00:05:29.348 sys 0m0.705s 00:05:29.348 07:52:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:29.348 07:52:20 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:29.348 ************************************ 00:05:29.348 END TEST skip_rpc_with_json 00:05:29.348 ************************************ 00:05:29.348 07:52:20 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:29.348 07:52:20 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:29.348 07:52:20 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:29.348 07:52:20 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:29.348 07:52:20 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:29.348 ************************************ 00:05:29.348 START TEST skip_rpc_with_delay 00:05:29.348 ************************************ 00:05:29.348 07:52:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:05:29.348 07:52:20 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:29.348 07:52:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:05:29.348 07:52:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:29.348 07:52:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:29.348 07:52:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:29.348 07:52:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:29.348 07:52:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:29.348 07:52:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:29.348 07:52:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:29.348 07:52:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:29.348 07:52:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:29.348 07:52:20 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:29.348 [2024-07-13 07:52:21.005199] app.c: 831:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:05:29.348 [2024-07-13 07:52:21.005315] app.c: 710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:29.348 07:52:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:05:29.348 07:52:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:29.348 07:52:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:29.348 07:52:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:29.348 00:05:29.348 real 0m0.068s 00:05:29.348 user 0m0.042s 00:05:29.348 sys 0m0.025s 00:05:29.348 07:52:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:29.348 07:52:21 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:29.348 ************************************ 00:05:29.348 END TEST skip_rpc_with_delay 00:05:29.348 ************************************ 00:05:29.348 07:52:21 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:29.348 07:52:21 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:29.348 07:52:21 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:29.348 07:52:21 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:29.348 07:52:21 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:29.348 07:52:21 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:29.348 07:52:21 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:29.348 ************************************ 00:05:29.348 START TEST exit_on_failed_rpc_init 00:05:29.348 ************************************ 00:05:29.348 07:52:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:05:29.348 07:52:21 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1825190 00:05:29.348 07:52:21 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:29.348 07:52:21 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 1825190 00:05:29.348 07:52:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 1825190 ']' 00:05:29.348 07:52:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:29.348 07:52:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:29.348 07:52:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:29.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:29.348 07:52:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:29.348 07:52:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:29.606 [2024-07-13 07:52:21.120615] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:05:29.606 [2024-07-13 07:52:21.120715] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1825190 ] 00:05:29.606 EAL: No free 2048 kB hugepages reported on node 1 00:05:29.606 [2024-07-13 07:52:21.178363] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:29.606 [2024-07-13 07:52:21.265147] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.864 07:52:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:29.864 07:52:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:05:29.864 07:52:21 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:29.864 07:52:21 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:29.864 07:52:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:05:29.864 07:52:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:29.864 07:52:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:29.864 07:52:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:29.864 07:52:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:29.864 07:52:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:29.864 07:52:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:29.864 07:52:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:29.864 07:52:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:29.864 07:52:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:29.864 07:52:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:29.864 [2024-07-13 07:52:21.561079] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:05:29.864 [2024-07-13 07:52:21.561169] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1825205 ] 00:05:29.864 EAL: No free 2048 kB hugepages reported on node 1 00:05:30.122 [2024-07-13 07:52:21.621940] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.122 [2024-07-13 07:52:21.715480] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:30.122 [2024-07-13 07:52:21.715597] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:05:30.122 [2024-07-13 07:52:21.715619] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:30.122 [2024-07-13 07:52:21.715633] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:30.122 07:52:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:05:30.122 07:52:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:30.122 07:52:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:05:30.122 07:52:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:05:30.122 07:52:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:05:30.122 07:52:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:30.122 07:52:21 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:30.122 07:52:21 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1825190 00:05:30.122 07:52:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 1825190 ']' 00:05:30.122 07:52:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 1825190 00:05:30.122 07:52:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:05:30.122 07:52:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:30.122 07:52:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1825190 00:05:30.122 07:52:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:30.122 07:52:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:30.122 07:52:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1825190' 00:05:30.122 killing process with pid 1825190 00:05:30.122 07:52:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 1825190 00:05:30.122 07:52:21 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 1825190 00:05:30.688 00:05:30.688 real 0m1.174s 00:05:30.688 user 0m1.290s 00:05:30.688 sys 0m0.442s 00:05:30.688 07:52:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:30.688 07:52:22 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:30.688 ************************************ 00:05:30.688 END TEST exit_on_failed_rpc_init 00:05:30.688 ************************************ 00:05:30.688 07:52:22 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:30.688 07:52:22 skip_rpc -- 
rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:30.688 00:05:30.688 real 0m13.435s 00:05:30.688 user 0m12.622s 00:05:30.688 sys 0m1.666s 00:05:30.688 07:52:22 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:30.688 07:52:22 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.688 ************************************ 00:05:30.688 END TEST skip_rpc 00:05:30.688 ************************************ 00:05:30.688 07:52:22 -- common/autotest_common.sh@1142 -- # return 0 00:05:30.688 07:52:22 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:30.688 07:52:22 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:30.688 07:52:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:30.688 07:52:22 -- common/autotest_common.sh@10 -- # set +x 00:05:30.688 ************************************ 00:05:30.688 START TEST rpc_client 00:05:30.688 ************************************ 00:05:30.688 07:52:22 rpc_client -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:30.688 * Looking for test storage... 00:05:30.688 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:30.688 07:52:22 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:30.688 OK 00:05:30.688 07:52:22 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:30.688 00:05:30.688 real 0m0.065s 00:05:30.688 user 0m0.032s 00:05:30.688 sys 0m0.037s 00:05:30.688 07:52:22 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:30.688 07:52:22 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:30.688 ************************************ 00:05:30.688 END TEST rpc_client 00:05:30.688 ************************************ 00:05:30.688 07:52:22 -- common/autotest_common.sh@1142 -- # return 0 00:05:30.688 07:52:22 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:30.688 07:52:22 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:30.688 07:52:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:30.688 07:52:22 -- common/autotest_common.sh@10 -- # set +x 00:05:30.947 ************************************ 00:05:30.947 START TEST json_config 00:05:30.947 ************************************ 00:05:30.947 07:52:22 json_config -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:30.947 07:52:22 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:30.947 07:52:22 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:30.947 07:52:22 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:30.947 07:52:22 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:30.947 07:52:22 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:30.947 07:52:22 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:30.947 07:52:22 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:30.947 07:52:22 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:30.947 07:52:22 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:30.947 
07:52:22 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:30.947 07:52:22 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:30.947 07:52:22 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:30.947 07:52:22 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:30.947 07:52:22 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:30.947 07:52:22 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:30.947 07:52:22 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:30.947 07:52:22 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:30.947 07:52:22 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:30.947 07:52:22 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:30.947 07:52:22 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:30.947 07:52:22 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:30.947 07:52:22 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:30.947 07:52:22 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:30.947 07:52:22 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:30.947 07:52:22 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:30.947 07:52:22 json_config -- paths/export.sh@5 -- # export PATH 00:05:30.947 07:52:22 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:30.947 07:52:22 json_config -- nvmf/common.sh@47 -- # : 0 00:05:30.947 07:52:22 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:30.947 07:52:22 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:30.947 07:52:22 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:30.947 07:52:22 
json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:30.947 07:52:22 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:30.947 07:52:22 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:30.947 07:52:22 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:30.947 07:52:22 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:30.947 07:52:22 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:30.947 07:52:22 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:30.947 07:52:22 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:30.947 07:52:22 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:30.947 07:52:22 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:30.947 07:52:22 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:30.947 07:52:22 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:30.947 07:52:22 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:30.947 07:52:22 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:30.947 07:52:22 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:30.947 07:52:22 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:30.947 07:52:22 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:30.947 07:52:22 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:30.947 07:52:22 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:30.947 07:52:22 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:30.947 07:52:22 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:05:30.947 INFO: JSON configuration test init 00:05:30.947 07:52:22 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:05:30.947 07:52:22 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:05:30.947 07:52:22 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:30.947 07:52:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:30.947 07:52:22 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:05:30.948 07:52:22 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:30.948 07:52:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:30.948 07:52:22 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:05:30.948 07:52:22 json_config -- json_config/common.sh@9 -- # local app=target 00:05:30.948 07:52:22 json_config -- json_config/common.sh@10 -- # shift 00:05:30.948 07:52:22 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:30.948 07:52:22 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:30.948 07:52:22 
json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:30.948 07:52:22 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:30.948 07:52:22 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:30.948 07:52:22 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1825443 00:05:30.948 07:52:22 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:30.948 07:52:22 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:30.948 Waiting for target to run... 00:05:30.948 07:52:22 json_config -- json_config/common.sh@25 -- # waitforlisten 1825443 /var/tmp/spdk_tgt.sock 00:05:30.948 07:52:22 json_config -- common/autotest_common.sh@829 -- # '[' -z 1825443 ']' 00:05:30.948 07:52:22 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:30.948 07:52:22 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:30.948 07:52:22 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:30.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:30.948 07:52:22 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:30.948 07:52:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:30.948 [2024-07-13 07:52:22.522182] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:05:30.948 [2024-07-13 07:52:22.522280] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1825443 ] 00:05:30.948 EAL: No free 2048 kB hugepages reported on node 1 00:05:31.206 [2024-07-13 07:52:22.885181] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.463 [2024-07-13 07:52:22.951602] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.028 07:52:23 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:32.028 07:52:23 json_config -- common/autotest_common.sh@862 -- # return 0 00:05:32.028 07:52:23 json_config -- json_config/common.sh@26 -- # echo '' 00:05:32.028 00:05:32.028 07:52:23 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:05:32.028 07:52:23 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:05:32.028 07:52:23 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:32.028 07:52:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:32.028 07:52:23 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:05:32.028 07:52:23 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:05:32.028 07:52:23 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:32.028 07:52:23 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:32.028 07:52:23 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:32.028 07:52:23 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:05:32.028 07:52:23 json_config -- json_config/common.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:35.307 07:52:26 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:05:35.307 07:52:26 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:35.307 07:52:26 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:35.307 07:52:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:35.307 07:52:26 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:35.307 07:52:26 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:35.307 07:52:26 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:35.307 07:52:26 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:35.307 07:52:26 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:35.307 07:52:26 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:35.307 07:52:26 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:35.307 07:52:26 json_config -- json_config/json_config.sh@48 -- # local get_types 00:05:35.307 07:52:26 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:35.307 07:52:26 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:05:35.307 07:52:26 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:35.307 07:52:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:35.307 07:52:26 json_config -- json_config/json_config.sh@55 -- # return 0 00:05:35.307 07:52:26 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:05:35.307 07:52:26 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:05:35.307 07:52:26 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:05:35.307 07:52:26 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:05:35.307 07:52:26 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:05:35.307 07:52:26 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:05:35.307 07:52:26 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:35.307 07:52:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:35.307 07:52:26 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:35.307 07:52:26 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:05:35.307 07:52:26 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:05:35.307 07:52:26 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:35.307 07:52:26 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:35.565 MallocForNvmf0 00:05:35.565 07:52:27 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:35.565 07:52:27 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock 
bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:35.823 MallocForNvmf1 00:05:35.823 07:52:27 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:35.823 07:52:27 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:36.081 [2024-07-13 07:52:27.610970] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:36.081 07:52:27 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:36.081 07:52:27 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:36.339 07:52:27 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:36.339 07:52:27 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:36.597 07:52:28 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:36.597 07:52:28 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:36.855 07:52:28 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:36.855 07:52:28 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:37.112 [2024-07-13 07:52:28.590303] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:37.112 07:52:28 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:05:37.112 07:52:28 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:37.112 07:52:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:37.112 07:52:28 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:05:37.112 07:52:28 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:37.112 07:52:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:37.112 07:52:28 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:05:37.112 07:52:28 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:37.112 07:52:28 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:37.370 MallocBdevForConfigChangeCheck 00:05:37.370 07:52:28 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:05:37.370 07:52:28 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:37.370 07:52:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:37.370 07:52:28 json_config -- 
json_config/json_config.sh@359 -- # tgt_rpc save_config 00:05:37.370 07:52:28 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:37.628 07:52:29 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:05:37.628 INFO: shutting down applications... 00:05:37.628 07:52:29 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:05:37.628 07:52:29 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:05:37.628 07:52:29 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:05:37.628 07:52:29 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:39.522 Calling clear_iscsi_subsystem 00:05:39.522 Calling clear_nvmf_subsystem 00:05:39.522 Calling clear_nbd_subsystem 00:05:39.522 Calling clear_ublk_subsystem 00:05:39.522 Calling clear_vhost_blk_subsystem 00:05:39.522 Calling clear_vhost_scsi_subsystem 00:05:39.522 Calling clear_bdev_subsystem 00:05:39.522 07:52:30 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:39.522 07:52:30 json_config -- json_config/json_config.sh@343 -- # count=100 00:05:39.522 07:52:30 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:05:39.522 07:52:30 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:39.522 07:52:30 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:39.522 07:52:30 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:39.779 07:52:31 json_config -- json_config/json_config.sh@345 -- # break 00:05:39.779 07:52:31 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:05:39.779 07:52:31 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:05:39.779 07:52:31 json_config -- json_config/common.sh@31 -- # local app=target 00:05:39.779 07:52:31 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:39.779 07:52:31 json_config -- json_config/common.sh@35 -- # [[ -n 1825443 ]] 00:05:39.779 07:52:31 json_config -- json_config/common.sh@38 -- # kill -SIGINT 1825443 00:05:39.779 07:52:31 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:39.779 07:52:31 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:39.779 07:52:31 json_config -- json_config/common.sh@41 -- # kill -0 1825443 00:05:39.779 07:52:31 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:40.344 07:52:31 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:40.344 07:52:31 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:40.344 07:52:31 json_config -- json_config/common.sh@41 -- # kill -0 1825443 00:05:40.344 07:52:31 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:40.344 07:52:31 json_config -- json_config/common.sh@43 -- # break 00:05:40.344 07:52:31 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:40.344 07:52:31 json_config -- json_config/common.sh@53 -- # echo 'SPDK target 
shutdown done' 00:05:40.344 SPDK target shutdown done 00:05:40.344 07:52:31 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:05:40.344 INFO: relaunching applications... 00:05:40.344 07:52:31 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:40.344 07:52:31 json_config -- json_config/common.sh@9 -- # local app=target 00:05:40.344 07:52:31 json_config -- json_config/common.sh@10 -- # shift 00:05:40.344 07:52:31 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:40.344 07:52:31 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:40.344 07:52:31 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:40.344 07:52:31 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:40.344 07:52:31 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:40.344 07:52:31 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1826634 00:05:40.344 07:52:31 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:40.344 07:52:31 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:40.344 Waiting for target to run... 00:05:40.344 07:52:31 json_config -- json_config/common.sh@25 -- # waitforlisten 1826634 /var/tmp/spdk_tgt.sock 00:05:40.344 07:52:31 json_config -- common/autotest_common.sh@829 -- # '[' -z 1826634 ']' 00:05:40.344 07:52:31 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:40.344 07:52:31 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:40.344 07:52:31 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:40.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:40.344 07:52:31 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:40.344 07:52:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:40.344 [2024-07-13 07:52:31.872763] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
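The relaunch above boots spdk_tgt straight from the JSON it saved moments earlier, instead of rebuilding state over RPC. A minimal by-hand sketch using only paths that appear in this run (the backgrounding and the $old_pid variable are illustrative, not part of the harness):

    # snapshot the live configuration of the running target
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > spdk_tgt_config.json
    kill -SIGINT "$old_pid"    # shut the old instance down cleanly
    # start a fresh instance directly from the saved JSON
    build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json spdk_tgt_config.json &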
00:05:40.344 [2024-07-13 07:52:31.872857] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1826634 ] 00:05:40.344 EAL: No free 2048 kB hugepages reported on node 1 00:05:40.602 [2024-07-13 07:52:32.223246] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.602 [2024-07-13 07:52:32.286375] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.954 [2024-07-13 07:52:35.315483] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:43.954 [2024-07-13 07:52:35.348010] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:43.954 07:52:35 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:43.954 07:52:35 json_config -- common/autotest_common.sh@862 -- # return 0 00:05:43.954 07:52:35 json_config -- json_config/common.sh@26 -- # echo '' 00:05:43.954 00:05:43.954 07:52:35 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:05:43.954 07:52:35 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:43.954 INFO: Checking if target configuration is the same... 00:05:43.954 07:52:35 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:43.954 07:52:35 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:05:43.954 07:52:35 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:43.954 + '[' 2 -ne 2 ']' 00:05:43.954 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:43.954 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:43.954 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:43.954 +++ basename /dev/fd/62 00:05:43.954 ++ mktemp /tmp/62.XXX 00:05:43.954 + tmp_file_1=/tmp/62.TYI 00:05:43.954 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:43.954 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:43.954 + tmp_file_2=/tmp/spdk_tgt_config.json.SVJ 00:05:43.954 + ret=0 00:05:43.954 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:44.211 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:44.211 + diff -u /tmp/62.TYI /tmp/spdk_tgt_config.json.SVJ 00:05:44.211 + echo 'INFO: JSON config files are the same' 00:05:44.211 INFO: JSON config files are the same 00:05:44.211 + rm /tmp/62.TYI /tmp/spdk_tgt_config.json.SVJ 00:05:44.211 + exit 0 00:05:44.211 07:52:35 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:05:44.211 07:52:35 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:44.211 INFO: changing configuration and checking if this can be detected... 
00:05:44.211 07:52:35 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:44.211 07:52:35 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:44.469 07:52:36 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:44.469 07:52:36 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:05:44.469 07:52:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:44.469 + '[' 2 -ne 2 ']' 00:05:44.469 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:44.469 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:44.469 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:44.469 +++ basename /dev/fd/62 00:05:44.469 ++ mktemp /tmp/62.XXX 00:05:44.469 + tmp_file_1=/tmp/62.DW2 00:05:44.469 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:44.469 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:44.469 + tmp_file_2=/tmp/spdk_tgt_config.json.n1i 00:05:44.469 + ret=0 00:05:44.469 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:44.727 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:44.985 + diff -u /tmp/62.DW2 /tmp/spdk_tgt_config.json.n1i 00:05:44.985 + ret=1 00:05:44.985 + echo '=== Start of file: /tmp/62.DW2 ===' 00:05:44.985 + cat /tmp/62.DW2 00:05:44.985 + echo '=== End of file: /tmp/62.DW2 ===' 00:05:44.985 + echo '' 00:05:44.985 + echo '=== Start of file: /tmp/spdk_tgt_config.json.n1i ===' 00:05:44.985 + cat /tmp/spdk_tgt_config.json.n1i 00:05:44.985 + echo '=== End of file: /tmp/spdk_tgt_config.json.n1i ===' 00:05:44.985 + echo '' 00:05:44.985 + rm /tmp/62.DW2 /tmp/spdk_tgt_config.json.n1i 00:05:44.985 + exit 1 00:05:44.985 07:52:36 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:05:44.985 INFO: configuration change detected. 
00:05:44.985 07:52:36 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:05:44.985 07:52:36 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:05:44.985 07:52:36 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:44.985 07:52:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:44.985 07:52:36 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:05:44.985 07:52:36 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:05:44.985 07:52:36 json_config -- json_config/json_config.sh@317 -- # [[ -n 1826634 ]] 00:05:44.985 07:52:36 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:05:44.985 07:52:36 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:05:44.985 07:52:36 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:44.985 07:52:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:44.985 07:52:36 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:05:44.985 07:52:36 json_config -- json_config/json_config.sh@193 -- # uname -s 00:05:44.985 07:52:36 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:05:44.985 07:52:36 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:05:44.985 07:52:36 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:05:44.985 07:52:36 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:05:44.985 07:52:36 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:44.985 07:52:36 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:44.985 07:52:36 json_config -- json_config/json_config.sh@323 -- # killprocess 1826634 00:05:44.985 07:52:36 json_config -- common/autotest_common.sh@948 -- # '[' -z 1826634 ']' 00:05:44.985 07:52:36 json_config -- common/autotest_common.sh@952 -- # kill -0 1826634 00:05:44.985 07:52:36 json_config -- common/autotest_common.sh@953 -- # uname 00:05:44.985 07:52:36 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:44.985 07:52:36 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1826634 00:05:44.985 07:52:36 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:44.985 07:52:36 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:44.985 07:52:36 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1826634' 00:05:44.985 killing process with pid 1826634 00:05:44.985 07:52:36 json_config -- common/autotest_common.sh@967 -- # kill 1826634 00:05:44.985 07:52:36 json_config -- common/autotest_common.sh@972 -- # wait 1826634 00:05:46.884 07:52:38 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:46.884 07:52:38 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:05:46.884 07:52:38 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:46.884 07:52:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:46.884 07:52:38 json_config -- json_config/json_config.sh@328 -- # return 0 00:05:46.884 07:52:38 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:05:46.884 INFO: Success 00:05:46.884 00:05:46.884 real 0m15.779s 
00:05:46.884 user 0m17.687s 00:05:46.884 sys 0m1.797s 00:05:46.884 07:52:38 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:46.884 07:52:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:46.884 ************************************ 00:05:46.884 END TEST json_config 00:05:46.884 ************************************ 00:05:46.884 07:52:38 -- common/autotest_common.sh@1142 -- # return 0 00:05:46.884 07:52:38 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:46.884 07:52:38 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:46.884 07:52:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:46.884 07:52:38 -- common/autotest_common.sh@10 -- # set +x 00:05:46.884 ************************************ 00:05:46.884 START TEST json_config_extra_key 00:05:46.884 ************************************ 00:05:46.884 07:52:38 json_config_extra_key -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:46.884 07:52:38 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:46.884 07:52:38 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:46.884 07:52:38 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:46.884 07:52:38 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:46.884 07:52:38 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:46.884 07:52:38 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:46.884 07:52:38 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:46.884 07:52:38 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:46.884 07:52:38 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:46.884 07:52:38 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:46.884 07:52:38 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:46.884 07:52:38 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:46.884 07:52:38 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:46.884 07:52:38 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:46.884 07:52:38 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:46.884 07:52:38 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:46.884 07:52:38 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:46.884 07:52:38 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:46.884 07:52:38 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:46.884 07:52:38 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:46.884 07:52:38 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:46.884 07:52:38 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:46.884 07:52:38 json_config_extra_key -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:46.884 07:52:38 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:46.884 07:52:38 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:46.884 07:52:38 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:46.884 07:52:38 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:46.884 07:52:38 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:05:46.884 07:52:38 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:46.884 07:52:38 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:46.884 07:52:38 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:46.884 07:52:38 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:46.884 07:52:38 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:46.884 07:52:38 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:46.884 07:52:38 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:46.884 07:52:38 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:46.884 07:52:38 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:46.884 07:52:38 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:46.884 07:52:38 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:46.884 07:52:38 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:46.884 07:52:38 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:46.884 07:52:38 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:46.884 07:52:38 json_config_extra_key -- 
json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:46.884 07:52:38 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:46.884 07:52:38 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:46.884 07:52:38 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:46.884 07:52:38 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:46.884 INFO: launching applications... 00:05:46.884 07:52:38 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:46.884 07:52:38 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:46.884 07:52:38 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:46.884 07:52:38 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:46.884 07:52:38 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:46.884 07:52:38 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:46.884 07:52:38 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:46.884 07:52:38 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:46.884 07:52:38 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1827543 00:05:46.884 07:52:38 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:46.884 07:52:38 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:46.884 Waiting for target to run... 00:05:46.884 07:52:38 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1827543 /var/tmp/spdk_tgt.sock 00:05:46.884 07:52:38 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 1827543 ']' 00:05:46.884 07:52:38 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:46.884 07:52:38 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:46.884 07:52:38 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:46.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:46.884 07:52:38 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:46.884 07:52:38 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:46.884 [2024-07-13 07:52:38.348086] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
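Unlike the previous test, json_config_extra_key starts the target from a pre-written configuration file, test/json_config/extra_key.json, and only then waits for the RPC socket to come up. A sketch with the arguments used here (backgrounding and "$!" are illustrative; waitforlisten is the harness helper from autotest_common.sh that polls until the socket accepts connections):

    build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
        --json test/json_config/extra_key.json &
    waitforlisten "$!" /var/tmp/spdk_tgt.sock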
00:05:46.884 [2024-07-13 07:52:38.348183] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1827543 ] 00:05:46.884 EAL: No free 2048 kB hugepages reported on node 1 00:05:47.142 [2024-07-13 07:52:38.844667] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.400 [2024-07-13 07:52:38.923292] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.659 07:52:39 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:47.659 07:52:39 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:05:47.659 07:52:39 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:47.659 00:05:47.659 07:52:39 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:47.659 INFO: shutting down applications... 00:05:47.659 07:52:39 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:47.659 07:52:39 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:47.659 07:52:39 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:47.659 07:52:39 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1827543 ]] 00:05:47.659 07:52:39 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1827543 00:05:47.659 07:52:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:47.659 07:52:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:47.659 07:52:39 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1827543 00:05:47.659 07:52:39 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:48.226 07:52:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:48.226 07:52:39 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:48.226 07:52:39 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1827543 00:05:48.226 07:52:39 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:48.226 07:52:39 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:48.226 07:52:39 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:48.226 07:52:39 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:48.226 SPDK target shutdown done 00:05:48.226 07:52:39 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:48.226 Success 00:05:48.226 00:05:48.226 real 0m1.531s 00:05:48.226 user 0m1.339s 00:05:48.226 sys 0m0.584s 00:05:48.226 07:52:39 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:48.226 07:52:39 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:48.226 ************************************ 00:05:48.226 END TEST json_config_extra_key 00:05:48.226 ************************************ 00:05:48.226 07:52:39 -- common/autotest_common.sh@1142 -- # return 0 00:05:48.226 07:52:39 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:48.226 07:52:39 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:48.226 07:52:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:48.226 07:52:39 -- 
common/autotest_common.sh@10 -- # set +x 00:05:48.226 ************************************ 00:05:48.226 START TEST alias_rpc 00:05:48.226 ************************************ 00:05:48.226 07:52:39 alias_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:48.226 * Looking for test storage... 00:05:48.226 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:48.226 07:52:39 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:48.226 07:52:39 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1827819 00:05:48.226 07:52:39 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:48.226 07:52:39 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1827819 00:05:48.226 07:52:39 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 1827819 ']' 00:05:48.226 07:52:39 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:48.226 07:52:39 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:48.226 07:52:39 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:48.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:48.226 07:52:39 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:48.227 07:52:39 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.227 [2024-07-13 07:52:39.932915] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:05:48.227 [2024-07-13 07:52:39.933004] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1827819 ] 00:05:48.485 EAL: No free 2048 kB hugepages reported on node 1 00:05:48.485 [2024-07-13 07:52:39.996342] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.485 [2024-07-13 07:52:40.089201] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.742 07:52:40 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:48.742 07:52:40 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:48.742 07:52:40 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:49.000 07:52:40 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1827819 00:05:49.000 07:52:40 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 1827819 ']' 00:05:49.000 07:52:40 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 1827819 00:05:49.000 07:52:40 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:05:49.000 07:52:40 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:49.000 07:52:40 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1827819 00:05:49.000 07:52:40 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:49.000 07:52:40 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:49.000 07:52:40 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1827819' 00:05:49.000 killing process with pid 1827819 00:05:49.000 07:52:40 alias_rpc -- 
common/autotest_common.sh@967 -- # kill 1827819 00:05:49.000 07:52:40 alias_rpc -- common/autotest_common.sh@972 -- # wait 1827819 00:05:49.566 00:05:49.566 real 0m1.260s 00:05:49.566 user 0m1.365s 00:05:49.566 sys 0m0.434s 00:05:49.566 07:52:41 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:49.566 07:52:41 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.566 ************************************ 00:05:49.566 END TEST alias_rpc 00:05:49.566 ************************************ 00:05:49.566 07:52:41 -- common/autotest_common.sh@1142 -- # return 0 00:05:49.566 07:52:41 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:05:49.566 07:52:41 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:49.566 07:52:41 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:49.566 07:52:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:49.566 07:52:41 -- common/autotest_common.sh@10 -- # set +x 00:05:49.566 ************************************ 00:05:49.566 START TEST spdkcli_tcp 00:05:49.566 ************************************ 00:05:49.566 07:52:41 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:49.566 * Looking for test storage... 00:05:49.566 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:49.566 07:52:41 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:49.566 07:52:41 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:49.566 07:52:41 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:49.566 07:52:41 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:49.566 07:52:41 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:49.566 07:52:41 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:49.566 07:52:41 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:49.566 07:52:41 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:49.566 07:52:41 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:49.566 07:52:41 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1828043 00:05:49.566 07:52:41 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:49.566 07:52:41 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 1828043 00:05:49.566 07:52:41 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 1828043 ']' 00:05:49.566 07:52:41 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:49.566 07:52:41 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:49.566 07:52:41 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:49.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:49.566 07:52:41 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:49.566 07:52:41 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:49.566 [2024-07-13 07:52:41.238876] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
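spdkcli_tcp is the one test in this stretch that drives RPC over TCP rather than the UNIX socket: the target runs on two cores (-m 0x3) with the main core pinned (-p 0), socat bridges TCP port 9998 to /var/tmp/spdk.sock, and rpc.py connects through that bridge with retry and timeout flags. A sketch assembled from the commands visible in this run:

    build/bin/spdk_tgt -m 0x3 -p 0 &                           # reactors on cores 0 and 1
    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &    # expose the RPC socket on 127.0.0.1:9998
    scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods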
00:05:49.566 [2024-07-13 07:52:41.238963] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1828043 ] 00:05:49.566 EAL: No free 2048 kB hugepages reported on node 1 00:05:49.825 [2024-07-13 07:52:41.305593] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:49.825 [2024-07-13 07:52:41.400890] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:49.825 [2024-07-13 07:52:41.400901] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.084 07:52:41 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:50.084 07:52:41 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:05:50.084 07:52:41 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1828049 00:05:50.084 07:52:41 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:50.084 07:52:41 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:50.343 [ 00:05:50.343 "bdev_malloc_delete", 00:05:50.343 "bdev_malloc_create", 00:05:50.343 "bdev_null_resize", 00:05:50.343 "bdev_null_delete", 00:05:50.343 "bdev_null_create", 00:05:50.343 "bdev_nvme_cuse_unregister", 00:05:50.343 "bdev_nvme_cuse_register", 00:05:50.343 "bdev_opal_new_user", 00:05:50.343 "bdev_opal_set_lock_state", 00:05:50.343 "bdev_opal_delete", 00:05:50.343 "bdev_opal_get_info", 00:05:50.343 "bdev_opal_create", 00:05:50.343 "bdev_nvme_opal_revert", 00:05:50.343 "bdev_nvme_opal_init", 00:05:50.343 "bdev_nvme_send_cmd", 00:05:50.343 "bdev_nvme_get_path_iostat", 00:05:50.343 "bdev_nvme_get_mdns_discovery_info", 00:05:50.343 "bdev_nvme_stop_mdns_discovery", 00:05:50.343 "bdev_nvme_start_mdns_discovery", 00:05:50.343 "bdev_nvme_set_multipath_policy", 00:05:50.343 "bdev_nvme_set_preferred_path", 00:05:50.343 "bdev_nvme_get_io_paths", 00:05:50.343 "bdev_nvme_remove_error_injection", 00:05:50.343 "bdev_nvme_add_error_injection", 00:05:50.343 "bdev_nvme_get_discovery_info", 00:05:50.343 "bdev_nvme_stop_discovery", 00:05:50.343 "bdev_nvme_start_discovery", 00:05:50.343 "bdev_nvme_get_controller_health_info", 00:05:50.343 "bdev_nvme_disable_controller", 00:05:50.343 "bdev_nvme_enable_controller", 00:05:50.343 "bdev_nvme_reset_controller", 00:05:50.343 "bdev_nvme_get_transport_statistics", 00:05:50.343 "bdev_nvme_apply_firmware", 00:05:50.343 "bdev_nvme_detach_controller", 00:05:50.343 "bdev_nvme_get_controllers", 00:05:50.343 "bdev_nvme_attach_controller", 00:05:50.343 "bdev_nvme_set_hotplug", 00:05:50.343 "bdev_nvme_set_options", 00:05:50.343 "bdev_passthru_delete", 00:05:50.343 "bdev_passthru_create", 00:05:50.343 "bdev_lvol_set_parent_bdev", 00:05:50.343 "bdev_lvol_set_parent", 00:05:50.343 "bdev_lvol_check_shallow_copy", 00:05:50.343 "bdev_lvol_start_shallow_copy", 00:05:50.343 "bdev_lvol_grow_lvstore", 00:05:50.343 "bdev_lvol_get_lvols", 00:05:50.343 "bdev_lvol_get_lvstores", 00:05:50.343 "bdev_lvol_delete", 00:05:50.343 "bdev_lvol_set_read_only", 00:05:50.343 "bdev_lvol_resize", 00:05:50.343 "bdev_lvol_decouple_parent", 00:05:50.343 "bdev_lvol_inflate", 00:05:50.343 "bdev_lvol_rename", 00:05:50.343 "bdev_lvol_clone_bdev", 00:05:50.343 "bdev_lvol_clone", 00:05:50.343 "bdev_lvol_snapshot", 00:05:50.343 "bdev_lvol_create", 00:05:50.343 "bdev_lvol_delete_lvstore", 00:05:50.343 
"bdev_lvol_rename_lvstore", 00:05:50.343 "bdev_lvol_create_lvstore", 00:05:50.343 "bdev_raid_set_options", 00:05:50.343 "bdev_raid_remove_base_bdev", 00:05:50.343 "bdev_raid_add_base_bdev", 00:05:50.343 "bdev_raid_delete", 00:05:50.343 "bdev_raid_create", 00:05:50.343 "bdev_raid_get_bdevs", 00:05:50.343 "bdev_error_inject_error", 00:05:50.343 "bdev_error_delete", 00:05:50.343 "bdev_error_create", 00:05:50.343 "bdev_split_delete", 00:05:50.343 "bdev_split_create", 00:05:50.343 "bdev_delay_delete", 00:05:50.343 "bdev_delay_create", 00:05:50.343 "bdev_delay_update_latency", 00:05:50.343 "bdev_zone_block_delete", 00:05:50.343 "bdev_zone_block_create", 00:05:50.343 "blobfs_create", 00:05:50.343 "blobfs_detect", 00:05:50.343 "blobfs_set_cache_size", 00:05:50.343 "bdev_aio_delete", 00:05:50.343 "bdev_aio_rescan", 00:05:50.343 "bdev_aio_create", 00:05:50.343 "bdev_ftl_set_property", 00:05:50.343 "bdev_ftl_get_properties", 00:05:50.343 "bdev_ftl_get_stats", 00:05:50.343 "bdev_ftl_unmap", 00:05:50.343 "bdev_ftl_unload", 00:05:50.343 "bdev_ftl_delete", 00:05:50.343 "bdev_ftl_load", 00:05:50.343 "bdev_ftl_create", 00:05:50.343 "bdev_virtio_attach_controller", 00:05:50.343 "bdev_virtio_scsi_get_devices", 00:05:50.343 "bdev_virtio_detach_controller", 00:05:50.343 "bdev_virtio_blk_set_hotplug", 00:05:50.343 "bdev_iscsi_delete", 00:05:50.343 "bdev_iscsi_create", 00:05:50.343 "bdev_iscsi_set_options", 00:05:50.343 "accel_error_inject_error", 00:05:50.343 "ioat_scan_accel_module", 00:05:50.343 "dsa_scan_accel_module", 00:05:50.343 "iaa_scan_accel_module", 00:05:50.343 "vfu_virtio_create_scsi_endpoint", 00:05:50.343 "vfu_virtio_scsi_remove_target", 00:05:50.343 "vfu_virtio_scsi_add_target", 00:05:50.343 "vfu_virtio_create_blk_endpoint", 00:05:50.343 "vfu_virtio_delete_endpoint", 00:05:50.343 "keyring_file_remove_key", 00:05:50.343 "keyring_file_add_key", 00:05:50.343 "keyring_linux_set_options", 00:05:50.343 "iscsi_get_histogram", 00:05:50.343 "iscsi_enable_histogram", 00:05:50.343 "iscsi_set_options", 00:05:50.343 "iscsi_get_auth_groups", 00:05:50.343 "iscsi_auth_group_remove_secret", 00:05:50.343 "iscsi_auth_group_add_secret", 00:05:50.343 "iscsi_delete_auth_group", 00:05:50.343 "iscsi_create_auth_group", 00:05:50.343 "iscsi_set_discovery_auth", 00:05:50.343 "iscsi_get_options", 00:05:50.343 "iscsi_target_node_request_logout", 00:05:50.343 "iscsi_target_node_set_redirect", 00:05:50.343 "iscsi_target_node_set_auth", 00:05:50.343 "iscsi_target_node_add_lun", 00:05:50.343 "iscsi_get_stats", 00:05:50.343 "iscsi_get_connections", 00:05:50.343 "iscsi_portal_group_set_auth", 00:05:50.343 "iscsi_start_portal_group", 00:05:50.343 "iscsi_delete_portal_group", 00:05:50.343 "iscsi_create_portal_group", 00:05:50.343 "iscsi_get_portal_groups", 00:05:50.343 "iscsi_delete_target_node", 00:05:50.344 "iscsi_target_node_remove_pg_ig_maps", 00:05:50.344 "iscsi_target_node_add_pg_ig_maps", 00:05:50.344 "iscsi_create_target_node", 00:05:50.344 "iscsi_get_target_nodes", 00:05:50.344 "iscsi_delete_initiator_group", 00:05:50.344 "iscsi_initiator_group_remove_initiators", 00:05:50.344 "iscsi_initiator_group_add_initiators", 00:05:50.344 "iscsi_create_initiator_group", 00:05:50.344 "iscsi_get_initiator_groups", 00:05:50.344 "nvmf_set_crdt", 00:05:50.344 "nvmf_set_config", 00:05:50.344 "nvmf_set_max_subsystems", 00:05:50.344 "nvmf_stop_mdns_prr", 00:05:50.344 "nvmf_publish_mdns_prr", 00:05:50.344 "nvmf_subsystem_get_listeners", 00:05:50.344 "nvmf_subsystem_get_qpairs", 00:05:50.344 "nvmf_subsystem_get_controllers", 00:05:50.344 
"nvmf_get_stats", 00:05:50.344 "nvmf_get_transports", 00:05:50.344 "nvmf_create_transport", 00:05:50.344 "nvmf_get_targets", 00:05:50.344 "nvmf_delete_target", 00:05:50.344 "nvmf_create_target", 00:05:50.344 "nvmf_subsystem_allow_any_host", 00:05:50.344 "nvmf_subsystem_remove_host", 00:05:50.344 "nvmf_subsystem_add_host", 00:05:50.344 "nvmf_ns_remove_host", 00:05:50.344 "nvmf_ns_add_host", 00:05:50.344 "nvmf_subsystem_remove_ns", 00:05:50.344 "nvmf_subsystem_add_ns", 00:05:50.344 "nvmf_subsystem_listener_set_ana_state", 00:05:50.344 "nvmf_discovery_get_referrals", 00:05:50.344 "nvmf_discovery_remove_referral", 00:05:50.344 "nvmf_discovery_add_referral", 00:05:50.344 "nvmf_subsystem_remove_listener", 00:05:50.344 "nvmf_subsystem_add_listener", 00:05:50.344 "nvmf_delete_subsystem", 00:05:50.344 "nvmf_create_subsystem", 00:05:50.344 "nvmf_get_subsystems", 00:05:50.344 "env_dpdk_get_mem_stats", 00:05:50.344 "nbd_get_disks", 00:05:50.344 "nbd_stop_disk", 00:05:50.344 "nbd_start_disk", 00:05:50.344 "ublk_recover_disk", 00:05:50.344 "ublk_get_disks", 00:05:50.344 "ublk_stop_disk", 00:05:50.344 "ublk_start_disk", 00:05:50.344 "ublk_destroy_target", 00:05:50.344 "ublk_create_target", 00:05:50.344 "virtio_blk_create_transport", 00:05:50.344 "virtio_blk_get_transports", 00:05:50.344 "vhost_controller_set_coalescing", 00:05:50.344 "vhost_get_controllers", 00:05:50.344 "vhost_delete_controller", 00:05:50.344 "vhost_create_blk_controller", 00:05:50.344 "vhost_scsi_controller_remove_target", 00:05:50.344 "vhost_scsi_controller_add_target", 00:05:50.344 "vhost_start_scsi_controller", 00:05:50.344 "vhost_create_scsi_controller", 00:05:50.344 "thread_set_cpumask", 00:05:50.344 "framework_get_governor", 00:05:50.344 "framework_get_scheduler", 00:05:50.344 "framework_set_scheduler", 00:05:50.344 "framework_get_reactors", 00:05:50.344 "thread_get_io_channels", 00:05:50.344 "thread_get_pollers", 00:05:50.344 "thread_get_stats", 00:05:50.344 "framework_monitor_context_switch", 00:05:50.344 "spdk_kill_instance", 00:05:50.344 "log_enable_timestamps", 00:05:50.344 "log_get_flags", 00:05:50.344 "log_clear_flag", 00:05:50.344 "log_set_flag", 00:05:50.344 "log_get_level", 00:05:50.344 "log_set_level", 00:05:50.344 "log_get_print_level", 00:05:50.344 "log_set_print_level", 00:05:50.344 "framework_enable_cpumask_locks", 00:05:50.344 "framework_disable_cpumask_locks", 00:05:50.344 "framework_wait_init", 00:05:50.344 "framework_start_init", 00:05:50.344 "scsi_get_devices", 00:05:50.344 "bdev_get_histogram", 00:05:50.344 "bdev_enable_histogram", 00:05:50.344 "bdev_set_qos_limit", 00:05:50.344 "bdev_set_qd_sampling_period", 00:05:50.344 "bdev_get_bdevs", 00:05:50.344 "bdev_reset_iostat", 00:05:50.344 "bdev_get_iostat", 00:05:50.344 "bdev_examine", 00:05:50.344 "bdev_wait_for_examine", 00:05:50.344 "bdev_set_options", 00:05:50.344 "notify_get_notifications", 00:05:50.344 "notify_get_types", 00:05:50.344 "accel_get_stats", 00:05:50.344 "accel_set_options", 00:05:50.344 "accel_set_driver", 00:05:50.344 "accel_crypto_key_destroy", 00:05:50.344 "accel_crypto_keys_get", 00:05:50.344 "accel_crypto_key_create", 00:05:50.344 "accel_assign_opc", 00:05:50.344 "accel_get_module_info", 00:05:50.344 "accel_get_opc_assignments", 00:05:50.344 "vmd_rescan", 00:05:50.344 "vmd_remove_device", 00:05:50.344 "vmd_enable", 00:05:50.344 "sock_get_default_impl", 00:05:50.344 "sock_set_default_impl", 00:05:50.344 "sock_impl_set_options", 00:05:50.344 "sock_impl_get_options", 00:05:50.344 "iobuf_get_stats", 00:05:50.344 "iobuf_set_options", 
00:05:50.344 "keyring_get_keys", 00:05:50.344 "framework_get_pci_devices", 00:05:50.344 "framework_get_config", 00:05:50.344 "framework_get_subsystems", 00:05:50.344 "vfu_tgt_set_base_path", 00:05:50.344 "trace_get_info", 00:05:50.344 "trace_get_tpoint_group_mask", 00:05:50.344 "trace_disable_tpoint_group", 00:05:50.344 "trace_enable_tpoint_group", 00:05:50.344 "trace_clear_tpoint_mask", 00:05:50.344 "trace_set_tpoint_mask", 00:05:50.344 "spdk_get_version", 00:05:50.344 "rpc_get_methods" 00:05:50.344 ] 00:05:50.344 07:52:41 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:50.344 07:52:41 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:50.344 07:52:41 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:50.344 07:52:41 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:50.344 07:52:41 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1828043 00:05:50.344 07:52:41 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 1828043 ']' 00:05:50.344 07:52:41 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 1828043 00:05:50.344 07:52:41 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:05:50.344 07:52:41 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:50.344 07:52:41 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1828043 00:05:50.344 07:52:41 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:50.344 07:52:41 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:50.344 07:52:41 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1828043' 00:05:50.344 killing process with pid 1828043 00:05:50.344 07:52:41 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 1828043 00:05:50.344 07:52:41 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 1828043 00:05:50.911 00:05:50.911 real 0m1.221s 00:05:50.911 user 0m2.153s 00:05:50.911 sys 0m0.435s 00:05:50.911 07:52:42 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:50.911 07:52:42 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:50.911 ************************************ 00:05:50.911 END TEST spdkcli_tcp 00:05:50.911 ************************************ 00:05:50.911 07:52:42 -- common/autotest_common.sh@1142 -- # return 0 00:05:50.911 07:52:42 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:50.911 07:52:42 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:50.911 07:52:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:50.911 07:52:42 -- common/autotest_common.sh@10 -- # set +x 00:05:50.911 ************************************ 00:05:50.911 START TEST dpdk_mem_utility 00:05:50.911 ************************************ 00:05:50.911 07:52:42 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:50.911 * Looking for test storage... 
00:05:50.911 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:50.911 07:52:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:50.911 07:52:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1828245 00:05:50.911 07:52:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:50.911 07:52:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1828245 00:05:50.911 07:52:42 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 1828245 ']' 00:05:50.911 07:52:42 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:50.911 07:52:42 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:50.911 07:52:42 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:50.911 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:50.911 07:52:42 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:50.911 07:52:42 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:50.911 [2024-07-13 07:52:42.506698] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:05:50.911 [2024-07-13 07:52:42.506778] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1828245 ] 00:05:50.911 EAL: No free 2048 kB hugepages reported on node 1 00:05:50.911 [2024-07-13 07:52:42.568288] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.170 [2024-07-13 07:52:42.660011] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.428 07:52:42 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:51.428 07:52:42 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:05:51.428 07:52:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:51.428 07:52:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:51.428 07:52:42 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:51.428 07:52:42 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:51.428 { 00:05:51.428 "filename": "/tmp/spdk_mem_dump.txt" 00:05:51.428 } 00:05:51.428 07:52:42 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:51.428 07:52:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:51.428 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:51.428 1 heaps totaling size 814.000000 MiB 00:05:51.428 size: 814.000000 MiB heap id: 0 00:05:51.428 end heaps---------- 00:05:51.428 8 mempools totaling size 598.116089 MiB 00:05:51.428 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:51.428 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:51.428 size: 84.521057 MiB name: bdev_io_1828245 00:05:51.428 size: 51.011292 MiB name: evtpool_1828245 00:05:51.428 
size: 50.003479 MiB name: msgpool_1828245 00:05:51.428 size: 21.763794 MiB name: PDU_Pool 00:05:51.428 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:51.428 size: 0.026123 MiB name: Session_Pool 00:05:51.428 end mempools------- 00:05:51.428 6 memzones totaling size 4.142822 MiB 00:05:51.428 size: 1.000366 MiB name: RG_ring_0_1828245 00:05:51.428 size: 1.000366 MiB name: RG_ring_1_1828245 00:05:51.428 size: 1.000366 MiB name: RG_ring_4_1828245 00:05:51.428 size: 1.000366 MiB name: RG_ring_5_1828245 00:05:51.428 size: 0.125366 MiB name: RG_ring_2_1828245 00:05:51.428 size: 0.015991 MiB name: RG_ring_3_1828245 00:05:51.428 end memzones------- 00:05:51.428 07:52:42 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:51.428 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:05:51.428 list of free elements. size: 12.519348 MiB 00:05:51.428 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:51.428 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:51.428 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:51.428 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:51.428 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:51.428 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:51.428 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:51.428 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:51.428 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:51.428 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:05:51.428 element at address: 0x20000b200000 with size: 0.490723 MiB 00:05:51.428 element at address: 0x200000800000 with size: 0.487793 MiB 00:05:51.428 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:51.428 element at address: 0x200027e00000 with size: 0.410034 MiB 00:05:51.428 element at address: 0x200003a00000 with size: 0.355530 MiB 00:05:51.428 list of standard malloc elements. 
size: 199.218079 MiB 00:05:51.428 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:51.428 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:51.428 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:51.428 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:51.428 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:51.428 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:51.428 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:51.428 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:51.428 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:51.428 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:51.428 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:51.428 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:51.428 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:51.428 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:51.428 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:51.428 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:51.428 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:51.428 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:51.428 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:51.428 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:51.429 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:51.429 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:51.429 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:51.429 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:51.429 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:51.429 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:51.429 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:51.429 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:51.429 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:51.429 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:51.429 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:51.429 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:51.429 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:51.429 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:51.429 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:51.429 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:51.429 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:05:51.429 element at address: 0x200027e69040 with size: 0.000183 MiB 00:05:51.429 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:05:51.429 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:51.429 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:51.429 list of memzone associated elements. 
size: 602.262573 MiB 00:05:51.429 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:51.429 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:51.429 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:51.429 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:51.429 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:51.429 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_1828245_0 00:05:51.429 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:51.429 associated memzone info: size: 48.002930 MiB name: MP_evtpool_1828245_0 00:05:51.429 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:51.429 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1828245_0 00:05:51.429 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:51.429 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:51.429 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:51.429 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:51.429 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:51.429 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_1828245 00:05:51.429 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:51.429 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1828245 00:05:51.429 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:51.429 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1828245 00:05:51.429 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:51.429 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:51.429 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:51.429 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:51.429 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:51.429 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:51.429 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:51.429 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:51.429 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:51.429 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1828245 00:05:51.429 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:51.429 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1828245 00:05:51.429 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:51.429 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1828245 00:05:51.429 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:51.429 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1828245 00:05:51.429 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:51.429 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1828245 00:05:51.429 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:51.429 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:51.429 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:51.429 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:51.429 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:51.429 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:51.429 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:51.429 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_1828245 00:05:51.429 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:51.429 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:51.429 element at address: 0x200027e69100 with size: 0.023743 MiB 00:05:51.429 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:51.429 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:51.429 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1828245 00:05:51.429 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:05:51.429 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:51.429 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:51.429 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1828245 00:05:51.429 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:51.429 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1828245 00:05:51.429 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:05:51.429 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:51.429 07:52:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:51.429 07:52:43 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1828245 00:05:51.429 07:52:43 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 1828245 ']' 00:05:51.429 07:52:43 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 1828245 00:05:51.429 07:52:43 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:05:51.429 07:52:43 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:51.429 07:52:43 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1828245 00:05:51.429 07:52:43 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:51.429 07:52:43 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:51.429 07:52:43 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1828245' 00:05:51.429 killing process with pid 1828245 00:05:51.429 07:52:43 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 1828245 00:05:51.429 07:52:43 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 1828245 00:05:51.995 00:05:51.995 real 0m1.053s 00:05:51.995 user 0m1.016s 00:05:51.995 sys 0m0.403s 00:05:51.995 07:52:43 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:51.995 07:52:43 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:51.995 ************************************ 00:05:51.995 END TEST dpdk_mem_utility 00:05:51.995 ************************************ 00:05:51.995 07:52:43 -- common/autotest_common.sh@1142 -- # return 0 00:05:51.995 07:52:43 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:51.995 07:52:43 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:51.995 07:52:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:51.995 07:52:43 -- common/autotest_common.sh@10 -- # set +x 00:05:51.995 ************************************ 00:05:51.995 START TEST event 00:05:51.995 ************************************ 00:05:51.995 07:52:43 event -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:51.995 * Looking for test storage... 
00:05:51.995 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:51.995 07:52:43 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:51.995 07:52:43 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:51.995 07:52:43 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:51.995 07:52:43 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:51.995 07:52:43 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:51.995 07:52:43 event -- common/autotest_common.sh@10 -- # set +x 00:05:51.995 ************************************ 00:05:51.995 START TEST event_perf 00:05:51.995 ************************************ 00:05:51.995 07:52:43 event.event_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:51.995 Running I/O for 1 seconds...[2024-07-13 07:52:43.596092] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:05:51.995 [2024-07-13 07:52:43.596165] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1828433 ] 00:05:51.995 EAL: No free 2048 kB hugepages reported on node 1 00:05:51.995 [2024-07-13 07:52:43.657550] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:52.252 [2024-07-13 07:52:43.751038] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:52.252 [2024-07-13 07:52:43.751091] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:52.252 [2024-07-13 07:52:43.751212] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:52.252 [2024-07-13 07:52:43.751215] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.184 Running I/O for 1 seconds... 00:05:53.184 lcore 0: 230737 00:05:53.184 lcore 1: 230735 00:05:53.184 lcore 2: 230737 00:05:53.184 lcore 3: 230737 00:05:53.184 done. 00:05:53.184 00:05:53.184 real 0m1.251s 00:05:53.184 user 0m4.162s 00:05:53.184 sys 0m0.083s 00:05:53.184 07:52:44 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:53.184 07:52:44 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:53.184 ************************************ 00:05:53.184 END TEST event_perf 00:05:53.184 ************************************ 00:05:53.184 07:52:44 event -- common/autotest_common.sh@1142 -- # return 0 00:05:53.184 07:52:44 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:53.184 07:52:44 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:53.184 07:52:44 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:53.184 07:52:44 event -- common/autotest_common.sh@10 -- # set +x 00:05:53.184 ************************************ 00:05:53.184 START TEST event_reactor 00:05:53.184 ************************************ 00:05:53.184 07:52:44 event.event_reactor -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:53.184 [2024-07-13 07:52:44.892969] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:05:53.184 [2024-07-13 07:52:44.893030] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1828590 ] 00:05:53.442 EAL: No free 2048 kB hugepages reported on node 1 00:05:53.442 [2024-07-13 07:52:44.955068] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.442 [2024-07-13 07:52:45.049228] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.815 test_start 00:05:54.815 oneshot 00:05:54.815 tick 100 00:05:54.815 tick 100 00:05:54.815 tick 250 00:05:54.815 tick 100 00:05:54.815 tick 100 00:05:54.815 tick 100 00:05:54.815 tick 250 00:05:54.815 tick 500 00:05:54.815 tick 100 00:05:54.815 tick 100 00:05:54.815 tick 250 00:05:54.815 tick 100 00:05:54.815 tick 100 00:05:54.815 test_end 00:05:54.815 00:05:54.815 real 0m1.251s 00:05:54.815 user 0m1.166s 00:05:54.815 sys 0m0.080s 00:05:54.815 07:52:46 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:54.815 07:52:46 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:54.815 ************************************ 00:05:54.815 END TEST event_reactor 00:05:54.815 ************************************ 00:05:54.815 07:52:46 event -- common/autotest_common.sh@1142 -- # return 0 00:05:54.815 07:52:46 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:54.815 07:52:46 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:54.815 07:52:46 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:54.815 07:52:46 event -- common/autotest_common.sh@10 -- # set +x 00:05:54.815 ************************************ 00:05:54.815 START TEST event_reactor_perf 00:05:54.815 ************************************ 00:05:54.815 07:52:46 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:54.815 [2024-07-13 07:52:46.193563] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:05:54.815 [2024-07-13 07:52:46.193625] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1828748 ] 00:05:54.815 EAL: No free 2048 kB hugepages reported on node 1 00:05:54.815 [2024-07-13 07:52:46.256283] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.815 [2024-07-13 07:52:46.349062] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.773 test_start 00:05:55.773 test_end 00:05:55.773 Performance: 358860 events per second 00:05:55.773 00:05:55.773 real 0m1.249s 00:05:55.773 user 0m1.164s 00:05:55.773 sys 0m0.081s 00:05:55.773 07:52:47 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:55.773 07:52:47 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:55.773 ************************************ 00:05:55.773 END TEST event_reactor_perf 00:05:55.773 ************************************ 00:05:55.773 07:52:47 event -- common/autotest_common.sh@1142 -- # return 0 00:05:55.773 07:52:47 event -- event/event.sh@49 -- # uname -s 00:05:55.773 07:52:47 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:55.773 07:52:47 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:55.773 07:52:47 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:55.773 07:52:47 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:55.773 07:52:47 event -- common/autotest_common.sh@10 -- # set +x 00:05:55.773 ************************************ 00:05:55.773 START TEST event_scheduler 00:05:55.773 ************************************ 00:05:55.773 07:52:47 event.event_scheduler -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:56.032 * Looking for test storage... 00:05:56.032 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:56.032 07:52:47 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:56.032 07:52:47 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1828928 00:05:56.032 07:52:47 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:56.032 07:52:47 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:56.032 07:52:47 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 1828928 00:05:56.032 07:52:47 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 1828928 ']' 00:05:56.032 07:52:47 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:56.032 07:52:47 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:56.032 07:52:47 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:56.032 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:56.032 07:52:47 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:56.032 07:52:47 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:56.032 [2024-07-13 07:52:47.575303] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:05:56.032 [2024-07-13 07:52:47.575373] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1828928 ] 00:05:56.032 EAL: No free 2048 kB hugepages reported on node 1 00:05:56.032 [2024-07-13 07:52:47.632221] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:56.032 [2024-07-13 07:52:47.721605] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.032 [2024-07-13 07:52:47.721661] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:56.032 [2024-07-13 07:52:47.721728] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:56.032 [2024-07-13 07:52:47.721730] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:56.291 07:52:47 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:56.291 07:52:47 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:05:56.291 07:52:47 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:56.291 07:52:47 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:56.291 07:52:47 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:56.291 [2024-07-13 07:52:47.794601] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:56.291 [2024-07-13 07:52:47.794627] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:05:56.291 [2024-07-13 07:52:47.794659] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:56.291 [2024-07-13 07:52:47.794669] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:56.291 [2024-07-13 07:52:47.794679] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:56.291 07:52:47 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:56.291 07:52:47 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:56.291 07:52:47 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:56.291 07:52:47 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:56.291 [2024-07-13 07:52:47.885516] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
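The scheduler app above is started with --wait-for-rpc, switched to the dynamic scheduler, and only then allowed to finish framework init. The test drives this through its rpc_cmd wrapper; a sketch of the equivalent direct calls, assuming the target's RPC socket is the default /var/tmp/spdk.sock:

  # Pick the scheduler while the framework is still paused (--wait-for-rpc),
  # then let initialization proceed; the order matters.
  scripts/rpc.py framework_set_scheduler dynamic
  scripts/rpc.py framework_start_init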
00:05:56.291 07:52:47 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:56.291 07:52:47 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:56.291 07:52:47 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:56.291 07:52:47 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:56.291 07:52:47 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:56.291 ************************************ 00:05:56.291 START TEST scheduler_create_thread 00:05:56.291 ************************************ 00:05:56.291 07:52:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:05:56.291 07:52:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:56.291 07:52:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:56.291 07:52:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:56.291 2 00:05:56.291 07:52:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:56.291 07:52:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:56.292 07:52:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:56.292 07:52:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:56.292 3 00:05:56.292 07:52:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:56.292 07:52:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:56.292 07:52:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:56.292 07:52:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:56.292 4 00:05:56.292 07:52:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:56.292 07:52:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:56.292 07:52:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:56.292 07:52:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:56.292 5 00:05:56.292 07:52:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:56.292 07:52:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:56.292 07:52:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:56.292 07:52:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:56.292 6 00:05:56.292 07:52:47 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:56.292 07:52:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:56.292 07:52:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:56.292 07:52:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:56.292 7 00:05:56.292 07:52:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:56.292 07:52:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:56.292 07:52:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:56.292 07:52:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:56.292 8 00:05:56.292 07:52:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:56.292 07:52:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:56.292 07:52:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:56.292 07:52:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:56.292 9 00:05:56.292 07:52:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:56.292 07:52:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:56.292 07:52:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:56.292 07:52:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:56.292 10 00:05:56.292 07:52:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:56.292 07:52:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:56.292 07:52:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:56.292 07:52:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:56.292 07:52:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:56.292 07:52:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:56.292 07:52:47 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:56.292 07:52:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:56.292 07:52:47 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:56.292 07:52:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:56.292 07:52:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:56.292 07:52:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:56.292 07:52:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:56.292 07:52:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:56.292 07:52:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:56.292 07:52:48 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:56.292 07:52:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:56.292 07:52:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:56.858 07:52:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:56.858 00:05:56.858 real 0m0.591s 00:05:56.858 user 0m0.010s 00:05:56.858 sys 0m0.003s 00:05:56.858 07:52:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:56.858 07:52:48 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:56.858 ************************************ 00:05:56.858 END TEST scheduler_create_thread 00:05:56.858 ************************************ 00:05:56.858 07:52:48 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:05:56.858 07:52:48 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:56.858 07:52:48 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1828928 00:05:56.858 07:52:48 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 1828928 ']' 00:05:56.858 07:52:48 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 1828928 00:05:56.858 07:52:48 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:05:56.858 07:52:48 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:56.858 07:52:48 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1828928 00:05:56.858 07:52:48 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:05:56.858 07:52:48 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:05:56.858 07:52:48 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1828928' 00:05:56.858 killing process with pid 1828928 00:05:56.858 07:52:48 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 1828928 00:05:56.858 07:52:48 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 1828928 00:05:57.425 [2024-07-13 07:52:48.981612] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
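The scheduler_create_thread test above walks a full thread lifecycle through the app's RPC plugin: pinned active and idle threads are created with per-core cpumasks, one thread's busy percentage is changed, and a throwaway thread is deleted. A condensed sketch of that sequence, with the plugin calls copied from the trace and assuming scheduler_plugin is importable (the harness arranges PYTHONPATH before calling rpc.py); the thread ids 11 and 12 are the ones this run happened to get back, real ids come from each create call's output:

  # -n thread name, -m cpumask, -a percentage of time the thread stays busy.
  scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
  scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
  # Retune a live thread to 50% busy, then delete another by id.
  scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50
  scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12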
00:05:57.684 00:05:57.684 real 0m1.712s 00:05:57.684 user 0m2.227s 00:05:57.684 sys 0m0.329s 00:05:57.684 07:52:49 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:57.684 07:52:49 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:57.684 ************************************ 00:05:57.684 END TEST event_scheduler 00:05:57.684 ************************************ 00:05:57.684 07:52:49 event -- common/autotest_common.sh@1142 -- # return 0 00:05:57.684 07:52:49 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:57.684 07:52:49 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:57.684 07:52:49 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:57.684 07:52:49 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:57.684 07:52:49 event -- common/autotest_common.sh@10 -- # set +x 00:05:57.684 ************************************ 00:05:57.684 START TEST app_repeat 00:05:57.684 ************************************ 00:05:57.684 07:52:49 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:05:57.684 07:52:49 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:57.684 07:52:49 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:57.684 07:52:49 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:57.684 07:52:49 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:57.684 07:52:49 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:57.684 07:52:49 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:57.684 07:52:49 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:57.684 07:52:49 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1829242 00:05:57.684 07:52:49 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:57.684 07:52:49 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:57.684 07:52:49 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1829242' 00:05:57.684 Process app_repeat pid: 1829242 00:05:57.684 07:52:49 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:57.684 07:52:49 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:57.684 spdk_app_start Round 0 00:05:57.684 07:52:49 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1829242 /var/tmp/spdk-nbd.sock 00:05:57.684 07:52:49 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 1829242 ']' 00:05:57.684 07:52:49 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:57.684 07:52:49 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:57.684 07:52:49 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:57.684 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:57.684 07:52:49 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:57.684 07:52:49 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:57.684 [2024-07-13 07:52:49.272078] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:05:57.684 [2024-07-13 07:52:49.272146] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1829242 ] 00:05:57.684 EAL: No free 2048 kB hugepages reported on node 1 00:05:57.684 [2024-07-13 07:52:49.334760] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:57.943 [2024-07-13 07:52:49.425437] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:57.943 [2024-07-13 07:52:49.425442] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.943 07:52:49 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:57.943 07:52:49 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:57.943 07:52:49 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:58.202 Malloc0 00:05:58.202 07:52:49 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:58.460 Malloc1 00:05:58.460 07:52:50 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:58.460 07:52:50 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:58.460 07:52:50 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:58.460 07:52:50 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:58.460 07:52:50 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:58.460 07:52:50 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:58.460 07:52:50 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:58.460 07:52:50 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:58.460 07:52:50 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:58.460 07:52:50 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:58.460 07:52:50 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:58.460 07:52:50 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:58.460 07:52:50 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:58.460 07:52:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:58.460 07:52:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:58.460 07:52:50 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:58.716 /dev/nbd0 00:05:58.716 07:52:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:58.716 07:52:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:58.716 07:52:50 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:58.716 07:52:50 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:58.716 07:52:50 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:58.716 07:52:50 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:58.716 07:52:50 event.app_repeat 
-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:58.716 07:52:50 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:58.716 07:52:50 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:58.716 07:52:50 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:58.716 07:52:50 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:58.716 1+0 records in 00:05:58.716 1+0 records out 00:05:58.716 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000174535 s, 23.5 MB/s 00:05:58.716 07:52:50 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:58.716 07:52:50 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:58.716 07:52:50 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:58.716 07:52:50 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:58.716 07:52:50 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:58.716 07:52:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:58.716 07:52:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:58.716 07:52:50 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:58.972 /dev/nbd1 00:05:58.972 07:52:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:58.972 07:52:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:58.972 07:52:50 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:58.972 07:52:50 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:58.972 07:52:50 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:58.972 07:52:50 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:58.972 07:52:50 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:58.972 07:52:50 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:58.972 07:52:50 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:58.972 07:52:50 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:58.972 07:52:50 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:58.972 1+0 records in 00:05:58.972 1+0 records out 00:05:58.972 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000192803 s, 21.2 MB/s 00:05:58.972 07:52:50 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:58.972 07:52:50 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:58.972 07:52:50 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:58.972 07:52:50 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:58.972 07:52:50 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:58.972 07:52:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:58.972 07:52:50 event.app_repeat -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:58.972 07:52:50 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:58.972 07:52:50 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:58.972 07:52:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:59.229 07:52:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:59.229 { 00:05:59.229 "nbd_device": "/dev/nbd0", 00:05:59.229 "bdev_name": "Malloc0" 00:05:59.229 }, 00:05:59.229 { 00:05:59.229 "nbd_device": "/dev/nbd1", 00:05:59.229 "bdev_name": "Malloc1" 00:05:59.229 } 00:05:59.229 ]' 00:05:59.229 07:52:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:59.229 { 00:05:59.229 "nbd_device": "/dev/nbd0", 00:05:59.229 "bdev_name": "Malloc0" 00:05:59.229 }, 00:05:59.229 { 00:05:59.229 "nbd_device": "/dev/nbd1", 00:05:59.229 "bdev_name": "Malloc1" 00:05:59.229 } 00:05:59.229 ]' 00:05:59.229 07:52:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:59.229 07:52:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:59.229 /dev/nbd1' 00:05:59.229 07:52:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:59.229 /dev/nbd1' 00:05:59.230 07:52:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:59.230 07:52:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:59.230 07:52:50 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:59.230 07:52:50 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:59.230 07:52:50 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:59.230 07:52:50 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:59.230 07:52:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:59.230 07:52:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:59.230 07:52:50 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:59.230 07:52:50 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:59.230 07:52:50 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:59.230 07:52:50 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:59.230 256+0 records in 00:05:59.230 256+0 records out 00:05:59.230 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00490847 s, 214 MB/s 00:05:59.230 07:52:50 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:59.230 07:52:50 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:59.230 256+0 records in 00:05:59.230 256+0 records out 00:05:59.230 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0197343 s, 53.1 MB/s 00:05:59.230 07:52:50 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:59.230 07:52:50 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:59.488 256+0 records in 00:05:59.488 256+0 records out 00:05:59.488 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.0255983 s, 41.0 MB/s 00:05:59.488 07:52:50 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:59.488 07:52:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:59.488 07:52:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:59.488 07:52:50 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:59.488 07:52:50 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:59.488 07:52:50 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:59.488 07:52:50 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:59.488 07:52:50 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:59.488 07:52:50 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:59.488 07:52:50 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:59.488 07:52:50 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:59.488 07:52:50 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:59.488 07:52:50 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:59.488 07:52:50 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:59.488 07:52:50 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:59.488 07:52:50 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:59.488 07:52:50 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:59.488 07:52:50 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:59.488 07:52:50 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:59.745 07:52:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:59.745 07:52:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:59.745 07:52:51 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:59.745 07:52:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:59.745 07:52:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:59.745 07:52:51 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:59.745 07:52:51 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:59.745 07:52:51 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:59.745 07:52:51 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:59.745 07:52:51 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:00.003 07:52:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:00.003 07:52:51 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:00.003 07:52:51 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:00.003 07:52:51 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:00.003 07:52:51 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:00.003 07:52:51 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:00.003 07:52:51 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:00.003 07:52:51 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:00.003 07:52:51 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:00.003 07:52:51 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.003 07:52:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:00.261 07:52:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:00.261 07:52:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:00.261 07:52:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:00.261 07:52:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:00.261 07:52:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:00.261 07:52:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:00.261 07:52:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:00.261 07:52:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:00.261 07:52:51 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:00.261 07:52:51 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:00.261 07:52:51 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:00.261 07:52:51 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:00.261 07:52:51 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:00.519 07:52:52 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:00.777 [2024-07-13 07:52:52.353557] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:00.777 [2024-07-13 07:52:52.444115] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.777 [2024-07-13 07:52:52.444115] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:00.777 [2024-07-13 07:52:52.504382] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:00.777 [2024-07-13 07:52:52.504455] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:04.051 07:52:55 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:04.051 07:52:55 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:04.051 spdk_app_start Round 1 00:06:04.051 07:52:55 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1829242 /var/tmp/spdk-nbd.sock 00:06:04.051 07:52:55 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 1829242 ']' 00:06:04.051 07:52:55 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:04.051 07:52:55 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:04.051 07:52:55 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:04.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
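For reference, the NBD round-trip that each app_repeat round above drives through rpc.py can be reproduced by hand against a running target. A minimal sketch, assuming a target is already listening on /var/tmp/spdk-nbd.sock, SPDK_DIR points at the SPDK checkout, /dev/nbd0 and /dev/nbd1 are free, and the /tmp scratch file name is illustrative:

    #!/usr/bin/env bash
    # Sketch of one app_repeat data-verify round (socket path and sizes taken from the log above).
    set -e
    RPC="$SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $RPC bdev_malloc_create 64 4096            # 64 MiB malloc bdev, 4 KiB blocks -> Malloc0
    $RPC bdev_malloc_create 64 4096            # second bdev -> Malloc1
    $RPC nbd_start_disk Malloc0 /dev/nbd0      # export each bdev as an NBD block device
    $RPC nbd_start_disk Malloc1 /dev/nbd1
    dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256
    for nbd in /dev/nbd0 /dev/nbd1; do
      dd if=/tmp/nbdrandtest of="$nbd" bs=4096 count=256 oflag=direct
      cmp -b -n 1M /tmp/nbdrandtest "$nbd"     # data must round-trip through the bdev
    done
    $RPC nbd_stop_disk /dev/nbd0
    $RPC nbd_stop_disk /dev/nbd1
    rm /tmp/nbdrandtest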
00:06:04.051 07:52:55 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:04.051 07:52:55 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:04.051 07:52:55 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:04.051 07:52:55 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:04.051 07:52:55 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:04.051 Malloc0 00:06:04.051 07:52:55 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:04.308 Malloc1 00:06:04.308 07:52:55 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:04.308 07:52:55 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:04.308 07:52:55 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:04.308 07:52:55 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:04.308 07:52:55 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:04.308 07:52:55 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:04.308 07:52:55 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:04.308 07:52:55 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:04.308 07:52:55 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:04.308 07:52:55 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:04.308 07:52:55 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:04.308 07:52:55 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:04.308 07:52:55 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:04.308 07:52:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:04.308 07:52:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:04.308 07:52:55 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:04.565 /dev/nbd0 00:06:04.565 07:52:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:04.565 07:52:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:04.565 07:52:56 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:04.565 07:52:56 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:04.565 07:52:56 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:04.565 07:52:56 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:04.565 07:52:56 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:04.565 07:52:56 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:04.565 07:52:56 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:04.565 07:52:56 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:04.565 07:52:56 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:04.565 1+0 records in 00:06:04.565 1+0 records out 00:06:04.565 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000202838 s, 20.2 MB/s 00:06:04.565 07:52:56 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:04.565 07:52:56 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:04.565 07:52:56 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:04.565 07:52:56 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:04.565 07:52:56 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:04.565 07:52:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:04.565 07:52:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:04.565 07:52:56 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:04.822 /dev/nbd1 00:06:04.822 07:52:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:04.822 07:52:56 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:04.822 07:52:56 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:04.822 07:52:56 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:04.822 07:52:56 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:04.822 07:52:56 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:04.822 07:52:56 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:04.822 07:52:56 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:04.822 07:52:56 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:04.822 07:52:56 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:04.822 07:52:56 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:04.822 1+0 records in 00:06:04.822 1+0 records out 00:06:04.822 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000201082 s, 20.4 MB/s 00:06:04.822 07:52:56 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:04.822 07:52:56 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:04.822 07:52:56 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:04.822 07:52:56 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:04.822 07:52:56 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:04.822 07:52:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:04.822 07:52:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:04.822 07:52:56 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:04.822 07:52:56 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:04.822 07:52:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:05.080 07:52:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:06:05.080 { 00:06:05.080 "nbd_device": "/dev/nbd0", 00:06:05.080 "bdev_name": "Malloc0" 00:06:05.080 }, 00:06:05.080 { 00:06:05.080 "nbd_device": "/dev/nbd1", 00:06:05.080 "bdev_name": "Malloc1" 00:06:05.080 } 00:06:05.080 ]' 00:06:05.080 07:52:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:05.080 { 00:06:05.080 "nbd_device": "/dev/nbd0", 00:06:05.080 "bdev_name": "Malloc0" 00:06:05.080 }, 00:06:05.080 { 00:06:05.080 "nbd_device": "/dev/nbd1", 00:06:05.080 "bdev_name": "Malloc1" 00:06:05.080 } 00:06:05.080 ]' 00:06:05.080 07:52:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:05.080 07:52:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:05.080 /dev/nbd1' 00:06:05.080 07:52:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:05.080 /dev/nbd1' 00:06:05.080 07:52:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:05.080 07:52:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:05.080 07:52:56 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:05.080 07:52:56 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:05.080 07:52:56 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:05.080 07:52:56 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:05.080 07:52:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:05.080 07:52:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:05.080 07:52:56 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:05.080 07:52:56 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:05.080 07:52:56 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:05.080 07:52:56 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:05.080 256+0 records in 00:06:05.080 256+0 records out 00:06:05.080 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00467203 s, 224 MB/s 00:06:05.080 07:52:56 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:05.080 07:52:56 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:05.080 256+0 records in 00:06:05.080 256+0 records out 00:06:05.080 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.023513 s, 44.6 MB/s 00:06:05.080 07:52:56 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:05.080 07:52:56 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:05.080 256+0 records in 00:06:05.080 256+0 records out 00:06:05.080 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0223362 s, 46.9 MB/s 00:06:05.080 07:52:56 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:05.080 07:52:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:05.080 07:52:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:05.080 07:52:56 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:05.080 07:52:56 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:05.080 07:52:56 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:05.080 07:52:56 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:05.080 07:52:56 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:05.080 07:52:56 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:05.080 07:52:56 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:05.080 07:52:56 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:05.080 07:52:56 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:05.080 07:52:56 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:05.080 07:52:56 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:05.080 07:52:56 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:05.080 07:52:56 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:05.080 07:52:56 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:05.080 07:52:56 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:05.080 07:52:56 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:05.338 07:52:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:05.338 07:52:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:05.338 07:52:57 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:05.338 07:52:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:05.338 07:52:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:05.338 07:52:57 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:05.338 07:52:57 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:05.338 07:52:57 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:05.338 07:52:57 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:05.338 07:52:57 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:05.595 07:52:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:05.853 07:52:57 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:05.853 07:52:57 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:05.853 07:52:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:05.853 07:52:57 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:05.853 07:52:57 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:05.853 07:52:57 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:05.853 07:52:57 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:05.853 07:52:57 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:05.853 07:52:57 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:05.853 07:52:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:05.853 07:52:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:05.853 07:52:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:05.853 07:52:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:06.110 07:52:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:06.110 07:52:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:06.110 07:52:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:06.110 07:52:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:06.110 07:52:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:06.110 07:52:57 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:06.110 07:52:57 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:06.110 07:52:57 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:06.110 07:52:57 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:06.110 07:52:57 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:06.368 07:52:57 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:06.625 [2024-07-13 07:52:58.120129] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:06.625 [2024-07-13 07:52:58.211809] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:06.625 [2024-07-13 07:52:58.211812] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.625 [2024-07-13 07:52:58.271076] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:06.625 [2024-07-13 07:52:58.271142] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:09.905 07:53:00 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:09.905 07:53:00 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:09.905 spdk_app_start Round 2 00:06:09.905 07:53:00 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1829242 /var/tmp/spdk-nbd.sock 00:06:09.905 07:53:00 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 1829242 ']' 00:06:09.905 07:53:00 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:09.905 07:53:00 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:09.905 07:53:00 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:09.905 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
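The nbd_get_count check that gates each teardown above simply counts /dev/nbd entries in the nbd_get_disks JSON; a rough jq equivalent, assuming the same socket path and SPDK_DIR as before:

    # Count NBD devices the target currently exports; 0 is expected after nbd_stop_disk.
    $SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks |
      jq -r '.[] | .nbd_device' |
      grep -c /dev/nbd || true   # grep -c exits non-zero when it counts 0 matches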
00:06:09.905 07:53:00 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:09.905 07:53:00 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:09.905 07:53:01 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:09.905 07:53:01 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:09.905 07:53:01 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:09.905 Malloc0 00:06:09.905 07:53:01 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:10.167 Malloc1 00:06:10.167 07:53:01 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:10.167 07:53:01 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:10.167 07:53:01 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:10.167 07:53:01 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:10.167 07:53:01 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:10.167 07:53:01 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:10.167 07:53:01 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:10.167 07:53:01 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:10.167 07:53:01 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:10.167 07:53:01 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:10.167 07:53:01 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:10.167 07:53:01 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:10.167 07:53:01 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:10.167 07:53:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:10.167 07:53:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:10.167 07:53:01 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:10.425 /dev/nbd0 00:06:10.425 07:53:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:10.425 07:53:01 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:10.425 07:53:01 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:10.425 07:53:01 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:10.425 07:53:01 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:10.425 07:53:01 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:10.425 07:53:01 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:10.425 07:53:01 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:10.425 07:53:01 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:10.425 07:53:01 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:10.425 07:53:01 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:10.425 1+0 records in 00:06:10.425 1+0 records out 00:06:10.425 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000199408 s, 20.5 MB/s 00:06:10.425 07:53:01 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:10.425 07:53:01 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:10.425 07:53:01 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:10.425 07:53:01 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:10.425 07:53:01 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:10.425 07:53:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:10.425 07:53:01 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:10.425 07:53:01 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:10.683 /dev/nbd1 00:06:10.683 07:53:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:10.683 07:53:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:10.683 07:53:02 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:10.683 07:53:02 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:10.683 07:53:02 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:10.683 07:53:02 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:10.683 07:53:02 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:10.683 07:53:02 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:10.683 07:53:02 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:10.683 07:53:02 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:10.683 07:53:02 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:10.683 1+0 records in 00:06:10.683 1+0 records out 00:06:10.683 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000202745 s, 20.2 MB/s 00:06:10.683 07:53:02 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:10.683 07:53:02 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:10.683 07:53:02 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:10.683 07:53:02 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:10.683 07:53:02 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:10.683 07:53:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:10.683 07:53:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:10.683 07:53:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:10.683 07:53:02 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:10.683 07:53:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:10.941 07:53:02 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:06:10.941 { 00:06:10.941 "nbd_device": "/dev/nbd0", 00:06:10.941 "bdev_name": "Malloc0" 00:06:10.941 }, 00:06:10.941 { 00:06:10.941 "nbd_device": "/dev/nbd1", 00:06:10.941 "bdev_name": "Malloc1" 00:06:10.941 } 00:06:10.941 ]' 00:06:10.941 07:53:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:10.941 { 00:06:10.941 "nbd_device": "/dev/nbd0", 00:06:10.941 "bdev_name": "Malloc0" 00:06:10.941 }, 00:06:10.941 { 00:06:10.941 "nbd_device": "/dev/nbd1", 00:06:10.941 "bdev_name": "Malloc1" 00:06:10.941 } 00:06:10.941 ]' 00:06:10.941 07:53:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:10.941 07:53:02 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:10.941 /dev/nbd1' 00:06:10.941 07:53:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:10.941 /dev/nbd1' 00:06:10.941 07:53:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:10.941 07:53:02 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:10.941 07:53:02 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:10.941 07:53:02 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:10.941 07:53:02 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:10.941 07:53:02 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:10.941 07:53:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:10.941 07:53:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:10.941 07:53:02 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:10.941 07:53:02 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:10.941 07:53:02 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:10.941 07:53:02 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:10.941 256+0 records in 00:06:10.941 256+0 records out 00:06:10.941 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00497914 s, 211 MB/s 00:06:10.941 07:53:02 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:10.941 07:53:02 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:10.941 256+0 records in 00:06:10.941 256+0 records out 00:06:10.941 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0198026 s, 53.0 MB/s 00:06:10.941 07:53:02 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:10.941 07:53:02 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:10.941 256+0 records in 00:06:10.941 256+0 records out 00:06:10.941 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0243015 s, 43.1 MB/s 00:06:10.941 07:53:02 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:10.941 07:53:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:10.941 07:53:02 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:10.941 07:53:02 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:10.941 07:53:02 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:10.941 07:53:02 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:10.941 07:53:02 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:10.941 07:53:02 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:10.941 07:53:02 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:10.941 07:53:02 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:10.941 07:53:02 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:10.941 07:53:02 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:10.941 07:53:02 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:10.941 07:53:02 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:10.941 07:53:02 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:10.941 07:53:02 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:10.941 07:53:02 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:10.941 07:53:02 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:10.941 07:53:02 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:11.199 07:53:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:11.199 07:53:02 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:11.199 07:53:02 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:11.199 07:53:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:11.199 07:53:02 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:11.199 07:53:02 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:11.199 07:53:02 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:11.199 07:53:02 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:11.199 07:53:02 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:11.199 07:53:02 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:11.455 07:53:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:11.455 07:53:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:11.455 07:53:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:11.455 07:53:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:11.455 07:53:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:11.455 07:53:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:11.455 07:53:03 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:11.455 07:53:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:11.455 07:53:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:11.455 07:53:03 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:11.455 07:53:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:11.712 07:53:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:11.712 07:53:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:11.712 07:53:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:11.712 07:53:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:11.712 07:53:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:11.712 07:53:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:11.712 07:53:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:11.712 07:53:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:11.712 07:53:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:11.712 07:53:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:11.712 07:53:03 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:11.712 07:53:03 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:11.712 07:53:03 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:11.970 07:53:03 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:12.226 [2024-07-13 07:53:03.932561] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:12.483 [2024-07-13 07:53:04.028657] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.483 [2024-07-13 07:53:04.028658] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:12.483 [2024-07-13 07:53:04.091721] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:12.483 [2024-07-13 07:53:04.091803] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:15.004 07:53:06 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1829242 /var/tmp/spdk-nbd.sock 00:06:15.004 07:53:06 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 1829242 ']' 00:06:15.004 07:53:06 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:15.004 07:53:06 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:15.004 07:53:06 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:15.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
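Each round then ends by asking the target to terminate itself over RPC rather than signalling the pid directly; a minimal sketch, again assuming the same socket path:

    # Request a clean in-band shutdown; the app delivers SIGTERM to itself via its RPC handler.
    $SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
    sleep 3   # matches the harness: give the reactors time to exit before the next round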
00:06:15.004 07:53:06 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:15.004 07:53:06 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:15.260 07:53:06 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:15.260 07:53:06 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:15.260 07:53:06 event.app_repeat -- event/event.sh@39 -- # killprocess 1829242 00:06:15.260 07:53:06 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 1829242 ']' 00:06:15.260 07:53:06 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 1829242 00:06:15.260 07:53:06 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:06:15.260 07:53:06 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:15.260 07:53:06 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1829242 00:06:15.518 07:53:06 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:15.518 07:53:06 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:15.518 07:53:06 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1829242' 00:06:15.518 killing process with pid 1829242 00:06:15.518 07:53:06 event.app_repeat -- common/autotest_common.sh@967 -- # kill 1829242 00:06:15.518 07:53:06 event.app_repeat -- common/autotest_common.sh@972 -- # wait 1829242 00:06:15.518 spdk_app_start is called in Round 0. 00:06:15.518 Shutdown signal received, stop current app iteration 00:06:15.518 Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 reinitialization... 00:06:15.518 spdk_app_start is called in Round 1. 00:06:15.518 Shutdown signal received, stop current app iteration 00:06:15.518 Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 reinitialization... 00:06:15.518 spdk_app_start is called in Round 2. 00:06:15.518 Shutdown signal received, stop current app iteration 00:06:15.518 Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 reinitialization... 00:06:15.518 spdk_app_start is called in Round 3. 
00:06:15.518 Shutdown signal received, stop current app iteration 00:06:15.518 07:53:07 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:15.518 07:53:07 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:15.518 00:06:15.518 real 0m17.960s 00:06:15.518 user 0m39.175s 00:06:15.518 sys 0m3.202s 00:06:15.518 07:53:07 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:15.518 07:53:07 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:15.518 ************************************ 00:06:15.518 END TEST app_repeat 00:06:15.518 ************************************ 00:06:15.518 07:53:07 event -- common/autotest_common.sh@1142 -- # return 0 00:06:15.518 07:53:07 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:15.518 07:53:07 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:15.518 07:53:07 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:15.518 07:53:07 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:15.518 07:53:07 event -- common/autotest_common.sh@10 -- # set +x 00:06:15.775 ************************************ 00:06:15.775 START TEST cpu_locks 00:06:15.775 ************************************ 00:06:15.775 07:53:07 event.cpu_locks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:15.775 * Looking for test storage... 00:06:15.775 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:15.775 07:53:07 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:15.776 07:53:07 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:15.776 07:53:07 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:15.776 07:53:07 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:15.776 07:53:07 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:15.776 07:53:07 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:15.776 07:53:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:15.776 ************************************ 00:06:15.776 START TEST default_locks 00:06:15.776 ************************************ 00:06:15.776 07:53:07 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:06:15.776 07:53:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1831588 00:06:15.776 07:53:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:15.776 07:53:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 1831588 00:06:15.776 07:53:07 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 1831588 ']' 00:06:15.776 07:53:07 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:15.776 07:53:07 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:15.776 07:53:07 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:15.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
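As the test names suggest, the locks_exist check exercised below keys off the CPU-core lock file that spdk_tgt takes for the cores in its -m 0x1 mask; it boils down to a single lslocks pipeline. A sketch, where the pid is the one from this particular run and purely illustrative:

    # True if the target process still holds an SPDK CPU-core lock file.
    lslocks -p 1831588 | grep -q spdk_cpu_lock && echo 'core lock held'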
00:06:15.776 07:53:07 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:15.776 07:53:07 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:15.776 [2024-07-13 07:53:07.386041] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:06:15.776 [2024-07-13 07:53:07.386135] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1831588 ] 00:06:15.776 EAL: No free 2048 kB hugepages reported on node 1 00:06:15.776 [2024-07-13 07:53:07.444684] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.033 [2024-07-13 07:53:07.531748] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.290 07:53:07 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:16.290 07:53:07 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:06:16.290 07:53:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 1831588 00:06:16.290 07:53:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 1831588 00:06:16.290 07:53:07 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:16.549 lslocks: write error 00:06:16.549 07:53:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 1831588 00:06:16.549 07:53:08 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 1831588 ']' 00:06:16.549 07:53:08 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 1831588 00:06:16.549 07:53:08 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:06:16.549 07:53:08 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:16.549 07:53:08 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1831588 00:06:16.549 07:53:08 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:16.549 07:53:08 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:16.549 07:53:08 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1831588' 00:06:16.549 killing process with pid 1831588 00:06:16.549 07:53:08 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 1831588 00:06:16.549 07:53:08 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 1831588 00:06:16.807 07:53:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1831588 00:06:16.807 07:53:08 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:06:16.807 07:53:08 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 1831588 00:06:16.807 07:53:08 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:16.807 07:53:08 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:16.807 07:53:08 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:16.807 07:53:08 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:16.807 07:53:08 event.cpu_locks.default_locks -- 
common/autotest_common.sh@651 -- # waitforlisten 1831588 00:06:16.807 07:53:08 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 1831588 ']' 00:06:16.807 07:53:08 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:16.807 07:53:08 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:16.807 07:53:08 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:16.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:16.807 07:53:08 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:16.807 07:53:08 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:16.807 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (1831588) - No such process 00:06:16.807 ERROR: process (pid: 1831588) is no longer running 00:06:16.807 07:53:08 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:16.807 07:53:08 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:06:16.807 07:53:08 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:06:16.807 07:53:08 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:16.807 07:53:08 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:16.807 07:53:08 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:16.807 07:53:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:16.807 07:53:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:16.807 07:53:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:16.807 07:53:08 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:16.807 00:06:16.807 real 0m1.150s 00:06:16.807 user 0m1.073s 00:06:16.807 sys 0m0.521s 00:06:16.807 07:53:08 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:16.807 07:53:08 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:16.807 ************************************ 00:06:16.808 END TEST default_locks 00:06:16.808 ************************************ 00:06:16.808 07:53:08 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:16.808 07:53:08 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:16.808 07:53:08 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:16.808 07:53:08 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:16.808 07:53:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:16.808 ************************************ 00:06:16.808 START TEST default_locks_via_rpc 00:06:16.808 ************************************ 00:06:16.808 07:53:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:06:16.808 07:53:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1831754 00:06:16.808 07:53:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:16.808 07:53:08 
event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 1831754 00:06:16.808 07:53:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1831754 ']' 00:06:16.808 07:53:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:16.808 07:53:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:16.808 07:53:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:16.808 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:16.808 07:53:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:16.808 07:53:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:17.066 [2024-07-13 07:53:08.581026] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:06:17.066 [2024-07-13 07:53:08.581122] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1831754 ] 00:06:17.066 EAL: No free 2048 kB hugepages reported on node 1 00:06:17.066 [2024-07-13 07:53:08.643000] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.066 [2024-07-13 07:53:08.737564] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.324 07:53:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:17.324 07:53:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:17.324 07:53:08 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:17.324 07:53:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:17.324 07:53:08 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:17.324 07:53:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:17.324 07:53:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:17.324 07:53:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:17.324 07:53:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:17.324 07:53:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:17.324 07:53:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:17.324 07:53:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:17.324 07:53:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:17.324 07:53:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:17.324 07:53:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 1831754 00:06:17.324 07:53:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 1831754 00:06:17.324 07:53:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:17.582 
07:53:09 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 1831754 00:06:17.582 07:53:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 1831754 ']' 00:06:17.582 07:53:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 1831754 00:06:17.582 07:53:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:06:17.582 07:53:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:17.582 07:53:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1831754 00:06:17.582 07:53:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:17.582 07:53:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:17.582 07:53:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1831754' 00:06:17.582 killing process with pid 1831754 00:06:17.582 07:53:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 1831754 00:06:17.582 07:53:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 1831754 00:06:18.147 00:06:18.147 real 0m1.162s 00:06:18.147 user 0m1.097s 00:06:18.147 sys 0m0.548s 00:06:18.147 07:53:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:18.147 07:53:09 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:18.147 ************************************ 00:06:18.147 END TEST default_locks_via_rpc 00:06:18.147 ************************************ 00:06:18.147 07:53:09 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:18.147 07:53:09 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:18.147 07:53:09 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:18.147 07:53:09 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:18.147 07:53:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:18.147 ************************************ 00:06:18.147 START TEST non_locking_app_on_locked_coremask 00:06:18.147 ************************************ 00:06:18.147 07:53:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:06:18.147 07:53:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1831916 00:06:18.147 07:53:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:18.147 07:53:09 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 1831916 /var/tmp/spdk.sock 00:06:18.147 07:53:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1831916 ']' 00:06:18.147 07:53:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:18.147 07:53:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:18.147 07:53:09 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:18.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:18.147 07:53:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:18.147 07:53:09 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:18.147 [2024-07-13 07:53:09.793522] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:06:18.147 [2024-07-13 07:53:09.793622] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1831916 ] 00:06:18.147 EAL: No free 2048 kB hugepages reported on node 1 00:06:18.147 [2024-07-13 07:53:09.858501] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.405 [2024-07-13 07:53:09.954392] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.663 07:53:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:18.663 07:53:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:18.663 07:53:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1831992 00:06:18.663 07:53:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:18.663 07:53:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 1831992 /var/tmp/spdk2.sock 00:06:18.663 07:53:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1831992 ']' 00:06:18.663 07:53:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:18.663 07:53:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:18.663 07:53:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:18.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:18.663 07:53:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:18.663 07:53:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:18.663 [2024-07-13 07:53:10.267101] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:06:18.663 [2024-07-13 07:53:10.267197] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1831992 ] 00:06:18.663 EAL: No free 2048 kB hugepages reported on node 1 00:06:18.663 [2024-07-13 07:53:10.361334] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
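The "lslocks: write error" lines in this log are harmless: the locks_exist helper pipes lslocks into grep -q, which exits as soon as it finds a match, so lslocks takes a SIGPIPE on its next write. Reconstructed from the event/cpu_locks.sh@22 trace above, the helper is essentially:

    # locks_exist, as traced at event/cpu_locks.sh@22: a target that claimed
    # its cores holds advisory locks on /var/tmp/spdk_cpu_lock_* files, which
    # lslocks (util-linux) reports per PID.
    locks_exist() {
        local pid=$1
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }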
00:06:18.663 [2024-07-13 07:53:10.361367] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.922 [2024-07-13 07:53:10.545751] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.488 07:53:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:19.488 07:53:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:19.488 07:53:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 1831916 00:06:19.488 07:53:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1831916 00:06:19.488 07:53:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:20.052 lslocks: write error 00:06:20.052 07:53:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 1831916 00:06:20.052 07:53:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1831916 ']' 00:06:20.052 07:53:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 1831916 00:06:20.052 07:53:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:20.052 07:53:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:20.052 07:53:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1831916 00:06:20.052 07:53:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:20.052 07:53:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:20.052 07:53:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1831916' 00:06:20.052 killing process with pid 1831916 00:06:20.052 07:53:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 1831916 00:06:20.052 07:53:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 1831916 00:06:20.984 07:53:12 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 1831992 00:06:20.984 07:53:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1831992 ']' 00:06:20.984 07:53:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 1831992 00:06:20.984 07:53:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:20.984 07:53:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:20.984 07:53:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1831992 00:06:20.984 07:53:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:20.984 07:53:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:20.984 07:53:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1831992' 00:06:20.984 
killing process with pid 1831992 00:06:20.984 07:53:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 1831992 00:06:20.984 07:53:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 1831992 00:06:21.243 00:06:21.243 real 0m3.155s 00:06:21.243 user 0m3.299s 00:06:21.243 sys 0m1.059s 00:06:21.243 07:53:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:21.243 07:53:12 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:21.243 ************************************ 00:06:21.243 END TEST non_locking_app_on_locked_coremask 00:06:21.243 ************************************ 00:06:21.243 07:53:12 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:21.243 07:53:12 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:21.243 07:53:12 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:21.243 07:53:12 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:21.243 07:53:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:21.243 ************************************ 00:06:21.243 START TEST locking_app_on_unlocked_coremask 00:06:21.243 ************************************ 00:06:21.243 07:53:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:06:21.243 07:53:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1832346 00:06:21.243 07:53:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:21.243 07:53:12 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 1832346 /var/tmp/spdk.sock 00:06:21.243 07:53:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1832346 ']' 00:06:21.243 07:53:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:21.243 07:53:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:21.243 07:53:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:21.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:21.243 07:53:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:21.243 07:53:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:21.502 [2024-07-13 07:53:12.998063] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
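Each case starts a fresh spdk_tgt and blocks in waitforlisten until the target's RPC socket accepts commands, with max_retries=100 per the trace. A simplified sketch of that loop, assuming SPDK's scripts/rpc.py is on PATH and using rpc_get_methods as the liveness probe (the real autotest_common.sh helper is more elaborate):

    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        while (( max_retries-- > 0 )); do
            kill -0 "$pid" 2>/dev/null || return 1   # target died during startup
            [[ -S $rpc_addr ]] && rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
            sleep 0.1
        done
        return 1   # target never started listening
    }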
00:06:21.502 [2024-07-13 07:53:12.998163] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1832346 ] 00:06:21.502 EAL: No free 2048 kB hugepages reported on node 1 00:06:21.502 [2024-07-13 07:53:13.060186] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:21.502 [2024-07-13 07:53:13.060229] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.502 [2024-07-13 07:53:13.154403] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.760 07:53:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:21.760 07:53:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:21.760 07:53:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1832360 00:06:21.760 07:53:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:21.760 07:53:13 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 1832360 /var/tmp/spdk2.sock 00:06:21.760 07:53:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1832360 ']' 00:06:21.760 07:53:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:21.760 07:53:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:21.760 07:53:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:21.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:21.760 07:53:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:21.760 07:53:13 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:21.760 [2024-07-13 07:53:13.459298] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
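The claim mechanism behind these notices: for every core in its mask, spdk_app_start takes an exclusive advisory lock on a /var/tmp/spdk_cpu_lock_NNN file, and --disable-cpumask-locks (the app.c:905 notice above) skips the claim entirely, which is why a second, locking target can still come up on the same -m 0x1 in this case. Roughly the same effect from a shell, using flock(1) as a stand-in rather than SPDK's C code:

    # Hypothetical stand-in for SPDK's per-core claim: hold core 0's lock file
    # exclusively and without blocking, the way a second claimant would see it.
    exec {fd}>/var/tmp/spdk_cpu_lock_000
    if ! flock -xn "$fd"; then
        echo "core 0 already claimed by another process" >&2
    fi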
00:06:21.760 [2024-07-13 07:53:13.459387] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1832360 ] 00:06:21.760 EAL: No free 2048 kB hugepages reported on node 1 00:06:22.017 [2024-07-13 07:53:13.554907] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.017 [2024-07-13 07:53:13.739590] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.955 07:53:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:22.955 07:53:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:22.955 07:53:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 1832360 00:06:22.955 07:53:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1832360 00:06:22.955 07:53:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:23.214 lslocks: write error 00:06:23.214 07:53:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 1832346 00:06:23.214 07:53:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1832346 ']' 00:06:23.214 07:53:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 1832346 00:06:23.214 07:53:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:23.214 07:53:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:23.214 07:53:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1832346 00:06:23.214 07:53:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:23.214 07:53:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:23.214 07:53:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1832346' 00:06:23.214 killing process with pid 1832346 00:06:23.214 07:53:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 1832346 00:06:23.214 07:53:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 1832346 00:06:24.147 07:53:15 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 1832360 00:06:24.147 07:53:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1832360 ']' 00:06:24.147 07:53:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 1832360 00:06:24.147 07:53:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:24.147 07:53:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:24.147 07:53:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1832360 00:06:24.147 07:53:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:06:24.147 07:53:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:24.147 07:53:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1832360' 00:06:24.147 killing process with pid 1832360 00:06:24.147 07:53:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 1832360 00:06:24.147 07:53:15 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 1832360 00:06:24.405 00:06:24.405 real 0m3.121s 00:06:24.405 user 0m3.284s 00:06:24.405 sys 0m1.048s 00:06:24.405 07:53:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:24.405 07:53:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:24.405 ************************************ 00:06:24.405 END TEST locking_app_on_unlocked_coremask 00:06:24.405 ************************************ 00:06:24.405 07:53:16 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:24.405 07:53:16 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:24.405 07:53:16 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:24.405 07:53:16 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:24.405 07:53:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:24.405 ************************************ 00:06:24.405 START TEST locking_app_on_locked_coremask 00:06:24.405 ************************************ 00:06:24.405 07:53:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:06:24.405 07:53:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1832782 00:06:24.405 07:53:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:24.405 07:53:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 1832782 /var/tmp/spdk.sock 00:06:24.405 07:53:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1832782 ']' 00:06:24.405 07:53:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:24.405 07:53:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:24.405 07:53:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:24.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:24.405 07:53:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:24.405 07:53:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:24.663 [2024-07-13 07:53:16.174461] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
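The real/user/sys triplets and the START/END banners come from the run_test wrapper that drives every case in this log. In outline, as a simplified sketch (the full autotest_common.sh helper also validates its arguments and tracks return codes through xtrace toggling):

    run_test() {
        local name=$1; shift
        echo "START TEST $name"
        time "$@"            # produces the real/user/sys lines seen above
        local rc=$?
        echo "END TEST $name"
        return $rc
    }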
00:06:24.663 [2024-07-13 07:53:16.174570] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1832782 ] 00:06:24.663 EAL: No free 2048 kB hugepages reported on node 1 00:06:24.663 [2024-07-13 07:53:16.238004] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.663 [2024-07-13 07:53:16.326902] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.921 07:53:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:24.921 07:53:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:24.921 07:53:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1832794 00:06:24.921 07:53:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:24.921 07:53:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1832794 /var/tmp/spdk2.sock 00:06:24.921 07:53:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:24.921 07:53:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 1832794 /var/tmp/spdk2.sock 00:06:24.921 07:53:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:24.921 07:53:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:24.921 07:53:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:24.921 07:53:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:24.921 07:53:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 1832794 /var/tmp/spdk2.sock 00:06:24.921 07:53:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1832794 ']' 00:06:24.921 07:53:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:24.921 07:53:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:24.921 07:53:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:24.921 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:24.921 07:53:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:24.921 07:53:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:24.921 [2024-07-13 07:53:16.630620] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
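locking_app_on_locked_coremask expects the second target to fail, so its waitforlisten is wrapped in NOT, which inverts the exit status; the es > 128 comparison in the trace treats death-by-signal as an ordinary failure. A simplified shape of it:

    # Simplified NOT helper (the traced original at autotest_common.sh@648-675
    # also checks that the wrapped name is a callable function first).
    NOT() {
        local es=0
        "$@" || es=$?
        (( es > 128 )) && es=1   # normalize signal deaths (128+N)
        (( es != 0 ))            # succeed only if the command failed
    }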
00:06:24.921 [2024-07-13 07:53:16.630702] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1832794 ] 00:06:25.178 EAL: No free 2048 kB hugepages reported on node 1 00:06:25.178 [2024-07-13 07:53:16.725493] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1832782 has claimed it. 00:06:25.178 [2024-07-13 07:53:16.725550] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:25.742 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (1832794) - No such process 00:06:25.742 ERROR: process (pid: 1832794) is no longer running 00:06:25.742 07:53:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:25.742 07:53:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:06:25.742 07:53:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:25.742 07:53:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:25.742 07:53:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:25.743 07:53:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:25.743 07:53:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 1832782 00:06:25.743 07:53:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1832782 00:06:25.743 07:53:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:26.000 lslocks: write error 00:06:26.000 07:53:17 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 1832782 00:06:26.000 07:53:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1832782 ']' 00:06:26.000 07:53:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 1832782 00:06:26.000 07:53:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:26.000 07:53:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:26.000 07:53:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1832782 00:06:26.000 07:53:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:26.000 07:53:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:26.000 07:53:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1832782' 00:06:26.000 killing process with pid 1832782 00:06:26.000 07:53:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 1832782 00:06:26.000 07:53:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 1832782 00:06:26.258 00:06:26.258 real 0m1.869s 00:06:26.258 user 0m1.996s 00:06:26.258 sys 0m0.623s 00:06:26.258 07:53:17 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:26.258 07:53:17 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:26.258 ************************************ 00:06:26.258 END TEST locking_app_on_locked_coremask 00:06:26.258 ************************************ 00:06:26.517 07:53:18 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:26.517 07:53:18 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:26.517 07:53:18 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:26.517 07:53:18 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:26.517 07:53:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:26.517 ************************************ 00:06:26.517 START TEST locking_overlapped_coremask 00:06:26.517 ************************************ 00:06:26.517 07:53:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:06:26.517 07:53:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1832993 00:06:26.517 07:53:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:26.517 07:53:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 1832993 /var/tmp/spdk.sock 00:06:26.517 07:53:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 1832993 ']' 00:06:26.517 07:53:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:26.517 07:53:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:26.517 07:53:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:26.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:26.517 07:53:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:26.517 07:53:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:26.517 [2024-07-13 07:53:18.090918] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:06:26.517 [2024-07-13 07:53:18.091016] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1832993 ] 00:06:26.517 EAL: No free 2048 kB hugepages reported on node 1 00:06:26.517 [2024-07-13 07:53:18.150439] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:26.517 [2024-07-13 07:53:18.237149] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:26.517 [2024-07-13 07:53:18.237207] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:26.517 [2024-07-13 07:53:18.237210] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.792 07:53:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:26.792 07:53:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:26.792 07:53:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1833092 00:06:26.792 07:53:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1833092 /var/tmp/spdk2.sock 00:06:26.792 07:53:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:26.792 07:53:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 1833092 /var/tmp/spdk2.sock 00:06:26.792 07:53:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:26.792 07:53:18 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:26.792 07:53:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:26.792 07:53:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:26.792 07:53:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:26.792 07:53:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 1833092 /var/tmp/spdk2.sock 00:06:26.792 07:53:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 1833092 ']' 00:06:26.792 07:53:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:26.792 07:53:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:26.792 07:53:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:26.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:26.792 07:53:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:26.792 07:53:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:27.048 [2024-07-13 07:53:18.537533] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
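Core masks are plain bitmaps: -m 0x7 is cores 0-2 (the three reactors above) and the second target's -m 0x1c is cores 2-4, so the two masks intersect exactly on core 2, the core named in the claim error that follows. The overlap can be checked directly:

    # 0x7 = 0b00111 (cores 0-2), 0x1c = 0b11100 (cores 2-4)
    printf 'overlap mask: 0x%x\n' $(( 0x7 & 0x1c ))   # prints 0x4, i.e. core 2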
00:06:27.048 [2024-07-13 07:53:18.537617] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1833092 ] 00:06:27.048 EAL: No free 2048 kB hugepages reported on node 1 00:06:27.048 [2024-07-13 07:53:18.624385] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1832993 has claimed it. 00:06:27.048 [2024-07-13 07:53:18.624445] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:27.613 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (1833092) - No such process 00:06:27.613 ERROR: process (pid: 1833092) is no longer running 00:06:27.613 07:53:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:27.613 07:53:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:06:27.613 07:53:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:27.613 07:53:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:27.613 07:53:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:27.613 07:53:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:27.613 07:53:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:27.613 07:53:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:27.613 07:53:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:27.613 07:53:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:27.613 07:53:19 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 1832993 00:06:27.613 07:53:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 1832993 ']' 00:06:27.613 07:53:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 1832993 00:06:27.613 07:53:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:06:27.613 07:53:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:27.613 07:53:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1832993 00:06:27.613 07:53:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:27.613 07:53:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:27.613 07:53:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1832993' 00:06:27.613 killing process with pid 1832993 00:06:27.613 07:53:19 event.cpu_locks.locking_overlapped_coremask -- 
common/autotest_common.sh@967 -- # kill 1832993 00:06:27.613 07:53:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 1832993 00:06:28.178 00:06:28.178 real 0m1.606s 00:06:28.178 user 0m4.342s 00:06:28.178 sys 0m0.444s 00:06:28.178 07:53:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:28.178 07:53:19 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:28.178 ************************************ 00:06:28.178 END TEST locking_overlapped_coremask 00:06:28.178 ************************************ 00:06:28.178 07:53:19 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:28.178 07:53:19 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:28.178 07:53:19 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:28.178 07:53:19 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:28.178 07:53:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:28.178 ************************************ 00:06:28.178 START TEST locking_overlapped_coremask_via_rpc 00:06:28.178 ************************************ 00:06:28.178 07:53:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:06:28.178 07:53:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1833254 00:06:28.178 07:53:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:28.178 07:53:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 1833254 /var/tmp/spdk.sock 00:06:28.178 07:53:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1833254 ']' 00:06:28.178 07:53:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:28.178 07:53:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:28.178 07:53:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:28.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:28.178 07:53:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:28.178 07:53:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:28.178 [2024-07-13 07:53:19.749500] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:06:28.178 [2024-07-13 07:53:19.749598] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1833254 ] 00:06:28.178 EAL: No free 2048 kB hugepages reported on node 1 00:06:28.178 [2024-07-13 07:53:19.807295] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
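After the second target is refused, check_remaining_locks asserts that the lock files on disk are exactly the three the surviving 0x7 target claimed. The traced helper (event/cpu_locks.sh@36-38) reduces to a glob-against-expansion comparison:

    check_remaining_locks() {
        locks=(/var/tmp/spdk_cpu_lock_*)                    # what actually exists
        locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})  # cores 0-2 only
        [[ ${locks[*]} == "${locks_expected[*]}" ]]
    }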
00:06:28.178 [2024-07-13 07:53:19.807333] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:28.178 [2024-07-13 07:53:19.897379] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:28.178 [2024-07-13 07:53:19.897445] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:28.178 [2024-07-13 07:53:19.897448] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.436 07:53:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:28.436 07:53:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:28.436 07:53:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1833263 00:06:28.436 07:53:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 1833263 /var/tmp/spdk2.sock 00:06:28.436 07:53:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:28.436 07:53:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1833263 ']' 00:06:28.436 07:53:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:28.436 07:53:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:28.436 07:53:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:28.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:28.436 07:53:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:28.436 07:53:20 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:28.693 [2024-07-13 07:53:20.197930] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:06:28.693 [2024-07-13 07:53:20.198013] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1833263 ] 00:06:28.693 EAL: No free 2048 kB hugepages reported on node 1 00:06:28.693 [2024-07-13 07:53:20.285175] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:28.693 [2024-07-13 07:53:20.285211] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:28.952 [2024-07-13 07:53:20.462464] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:28.952 [2024-07-13 07:53:20.465909] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:06:28.952 [2024-07-13 07:53:20.465911] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:29.516 07:53:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:29.516 07:53:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:29.516 07:53:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:29.516 07:53:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:29.516 07:53:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:29.516 07:53:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:29.516 07:53:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:29.516 07:53:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:29.516 07:53:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:29.516 07:53:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:06:29.516 07:53:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:29.516 07:53:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:06:29.516 07:53:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:29.516 07:53:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:29.516 07:53:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:29.516 07:53:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:29.516 [2024-07-13 07:53:21.163961] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1833254 has claimed it. 
00:06:29.516 request: 00:06:29.516 { 00:06:29.516 "method": "framework_enable_cpumask_locks", 00:06:29.516 "req_id": 1 00:06:29.516 } 00:06:29.516 Got JSON-RPC error response 00:06:29.516 response: 00:06:29.516 { 00:06:29.516 "code": -32603, 00:06:29.516 "message": "Failed to claim CPU core: 2" 00:06:29.516 } 00:06:29.516 07:53:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:29.516 07:53:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:06:29.516 07:53:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:29.516 07:53:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:29.516 07:53:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:29.516 07:53:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 1833254 /var/tmp/spdk.sock 00:06:29.516 07:53:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1833254 ']' 00:06:29.516 07:53:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:29.516 07:53:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:29.516 07:53:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:29.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:29.516 07:53:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:29.516 07:53:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:29.774 07:53:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:29.774 07:53:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:29.774 07:53:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 1833263 /var/tmp/spdk2.sock 00:06:29.774 07:53:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1833263 ']' 00:06:29.774 07:53:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:29.774 07:53:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:29.774 07:53:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:29.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
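This is the runtime path to the same claim logic: both targets start with --disable-cpumask-locks, the first then takes its locks via RPC, and the second's attempt is refused with JSON-RPC error -32603 because core 2 is already held. With SPDK's scripts/rpc.py the exchange looks like this (socket paths as in the trace, assuming both targets are still up):

    rpc.py -s /var/tmp/spdk.sock  framework_enable_cpumask_locks   # first target claims cores 0-2
    rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks   # refused: "Failed to claim CPU core: 2"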
00:06:29.774 07:53:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:29.774 07:53:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:30.032 07:53:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:30.032 07:53:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:30.032 07:53:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:30.032 07:53:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:30.032 07:53:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:30.032 07:53:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:30.032 00:06:30.032 real 0m1.982s 00:06:30.032 user 0m1.025s 00:06:30.032 sys 0m0.192s 00:06:30.032 07:53:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:30.032 07:53:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:30.032 ************************************ 00:06:30.032 END TEST locking_overlapped_coremask_via_rpc 00:06:30.032 ************************************ 00:06:30.032 07:53:21 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:30.032 07:53:21 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:30.032 07:53:21 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1833254 ]] 00:06:30.032 07:53:21 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1833254 00:06:30.032 07:53:21 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 1833254 ']' 00:06:30.032 07:53:21 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 1833254 00:06:30.032 07:53:21 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:06:30.032 07:53:21 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:30.032 07:53:21 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1833254 00:06:30.032 07:53:21 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:30.032 07:53:21 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:30.032 07:53:21 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1833254' 00:06:30.032 killing process with pid 1833254 00:06:30.032 07:53:21 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 1833254 00:06:30.032 07:53:21 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 1833254 00:06:30.598 07:53:22 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1833263 ]] 00:06:30.598 07:53:22 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1833263 00:06:30.598 07:53:22 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 1833263 ']' 00:06:30.598 07:53:22 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 1833263 00:06:30.598 07:53:22 event.cpu_locks -- common/autotest_common.sh@953 -- # 
uname 00:06:30.598 07:53:22 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:30.598 07:53:22 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1833263 00:06:30.598 07:53:22 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:06:30.598 07:53:22 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:06:30.598 07:53:22 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1833263' 00:06:30.598 killing process with pid 1833263 00:06:30.598 07:53:22 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 1833263 00:06:30.598 07:53:22 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 1833263 00:06:30.855 07:53:22 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:30.855 07:53:22 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:30.855 07:53:22 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1833254 ]] 00:06:30.855 07:53:22 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1833254 00:06:30.856 07:53:22 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 1833254 ']' 00:06:30.856 07:53:22 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 1833254 00:06:30.856 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1833254) - No such process 00:06:30.856 07:53:22 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 1833254 is not found' 00:06:30.856 Process with pid 1833254 is not found 00:06:30.856 07:53:22 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1833263 ]] 00:06:30.856 07:53:22 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1833263 00:06:30.856 07:53:22 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 1833263 ']' 00:06:30.856 07:53:22 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 1833263 00:06:30.856 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1833263) - No such process 00:06:30.856 07:53:22 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 1833263 is not found' 00:06:30.856 Process with pid 1833263 is not found 00:06:30.856 07:53:22 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:30.856 00:06:30.856 real 0m15.318s 00:06:30.856 user 0m27.010s 00:06:30.856 sys 0m5.347s 00:06:30.856 07:53:22 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:30.856 07:53:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:30.856 ************************************ 00:06:30.856 END TEST cpu_locks 00:06:30.856 ************************************ 00:06:31.113 07:53:22 event -- common/autotest_common.sh@1142 -- # return 0 00:06:31.113 00:06:31.113 real 0m39.092s 00:06:31.113 user 1m15.032s 00:06:31.113 sys 0m9.366s 00:06:31.113 07:53:22 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:31.113 07:53:22 event -- common/autotest_common.sh@10 -- # set +x 00:06:31.113 ************************************ 00:06:31.113 END TEST event 00:06:31.113 ************************************ 00:06:31.113 07:53:22 -- common/autotest_common.sh@1142 -- # return 0 00:06:31.113 07:53:22 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:31.113 07:53:22 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:31.113 07:53:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:31.113 
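With both targets killed and END TEST event printed, a recap of the mechanism this whole file exercised: core claims are plain lock files under /var/tmp, one per claimed core, which is exactly what check_remaining_locks verified with its glob against the expected list. On a live system the same check is just:

ls /var/tmp/spdk_cpu_lock_*    # expect one file per held core, e.g. _000 _001 _002 for cores 0-2

The kill: ... No such process errors during cleanup are harmless; cleanup re-kills pids that the individual test cases already terminated, and the harness prints the Process with pid ... is not found notice instead of failing.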
07:53:22 -- common/autotest_common.sh@10 -- # set +x 00:06:31.113 ************************************ 00:06:31.113 START TEST thread 00:06:31.113 ************************************ 00:06:31.113 07:53:22 thread -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:31.113 * Looking for test storage... 00:06:31.113 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:31.113 07:53:22 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:31.113 07:53:22 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:31.113 07:53:22 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:31.113 07:53:22 thread -- common/autotest_common.sh@10 -- # set +x 00:06:31.113 ************************************ 00:06:31.114 START TEST thread_poller_perf 00:06:31.114 ************************************ 00:06:31.114 07:53:22 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:31.114 [2024-07-13 07:53:22.722119] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:06:31.114 [2024-07-13 07:53:22.722196] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1833752 ] 00:06:31.114 EAL: No free 2048 kB hugepages reported on node 1 00:06:31.114 [2024-07-13 07:53:22.786341] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.371 [2024-07-13 07:53:22.876510] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.371 Running 1000 pollers for 1 seconds with 1 microseconds period. 
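The Running 1000 pollers banner is a direct echo of the flags visible in the xtrace line above it; a hedged mapping of that command line:

./test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1
#   -b 1000   number of pollers to register (the "1000 pollers" in the banner)
#   -l 1      poller period in microseconds (the "1 microseconds period")
#   -t 1      run time in seconds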
00:06:32.301 ====================================== 00:06:32.301 busy:2713366655 (cyc) 00:06:32.301 total_run_count: 292000 00:06:32.301 tsc_hz: 2700000000 (cyc) 00:06:32.301 ====================================== 00:06:32.301 poller_cost: 9292 (cyc), 3441 (nsec) 00:06:32.301 00:06:32.301 real 0m1.258s 00:06:32.301 user 0m1.167s 00:06:32.301 sys 0m0.085s 00:06:32.301 07:53:23 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:32.301 07:53:23 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:32.301 ************************************ 00:06:32.301 END TEST thread_poller_perf 00:06:32.301 ************************************ 00:06:32.301 07:53:23 thread -- common/autotest_common.sh@1142 -- # return 0 00:06:32.301 07:53:23 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:32.302 07:53:23 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:32.302 07:53:23 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:32.302 07:53:23 thread -- common/autotest_common.sh@10 -- # set +x 00:06:32.302 ************************************ 00:06:32.302 START TEST thread_poller_perf 00:06:32.302 ************************************ 00:06:32.302 07:53:24 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:32.302 [2024-07-13 07:53:24.028146] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:06:32.302 [2024-07-13 07:53:24.028210] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1833907 ] 00:06:32.560 EAL: No free 2048 kB hugepages reported on node 1 00:06:32.560 [2024-07-13 07:53:24.089442] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.560 [2024-07-13 07:53:24.182509] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.560 Running 1000 pollers for 1 seconds with 0 microseconds period. 
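The poller_cost figure in the stats block above is derived from the other three counters: busy cycles divided by total_run_count, then converted to nanoseconds via the reported tsc_hz. Re-deriving it with shell integer arithmetic reproduces the printed values:

echo $(( 2713366655 / 292000 ))    # 9292 cycles per poll
echo $(( 9292 * 1000 / 2700 ))     # 3441 nsec at tsc_hz = 2700000000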
00:06:33.934 ====================================== 00:06:33.934 busy:2702889626 (cyc) 00:06:33.934 total_run_count: 3919000 00:06:33.934 tsc_hz: 2700000000 (cyc) 00:06:33.934 ====================================== 00:06:33.934 poller_cost: 689 (cyc), 255 (nsec) 00:06:33.934 00:06:33.934 real 0m1.251s 00:06:33.934 user 0m1.159s 00:06:33.934 sys 0m0.087s 00:06:33.934 07:53:25 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:33.934 07:53:25 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:33.934 ************************************ 00:06:33.934 END TEST thread_poller_perf 00:06:33.934 ************************************ 00:06:33.934 07:53:25 thread -- common/autotest_common.sh@1142 -- # return 0 00:06:33.934 07:53:25 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:33.934 00:06:33.934 real 0m2.647s 00:06:33.934 user 0m2.382s 00:06:33.934 sys 0m0.264s 00:06:33.934 07:53:25 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:33.934 07:53:25 thread -- common/autotest_common.sh@10 -- # set +x 00:06:33.934 ************************************ 00:06:33.934 END TEST thread 00:06:33.934 ************************************ 00:06:33.934 07:53:25 -- common/autotest_common.sh@1142 -- # return 0 00:06:33.934 07:53:25 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:33.934 07:53:25 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:33.934 07:53:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:33.935 07:53:25 -- common/autotest_common.sh@10 -- # set +x 00:06:33.935 ************************************ 00:06:33.935 START TEST accel 00:06:33.935 ************************************ 00:06:33.935 07:53:25 accel -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:33.935 * Looking for test storage... 00:06:33.935 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:33.935 07:53:25 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:06:33.935 07:53:25 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:06:33.935 07:53:25 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:33.935 07:53:25 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=1834098 00:06:33.935 07:53:25 accel -- accel/accel.sh@63 -- # waitforlisten 1834098 00:06:33.935 07:53:25 accel -- common/autotest_common.sh@829 -- # '[' -z 1834098 ']' 00:06:33.935 07:53:25 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:33.935 07:53:25 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:33.935 07:53:25 accel -- accel/accel.sh@61 -- # build_accel_config 00:06:33.935 07:53:25 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:33.935 07:53:25 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:33.935 07:53:25 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:33.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
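The second run is the instructive contrast: with -l 0 the pollers are untimed, so the reactor calls them continuously and total_run_count jumps from 292000 to 3919000 while the per-poll cost drops from 9292 to 689 cycles, presumably because timed pollers also pay timer-expiry bookkeeping on every invocation. The same arithmetic holds:

echo $(( 2702889626 / 3919000 ))    # 689 cycles per poll
echo $(( 689 * 1000 / 2700 ))       # 255 nsec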
00:06:33.935 07:53:25 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:33.935 07:53:25 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:33.935 07:53:25 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:33.935 07:53:25 accel -- common/autotest_common.sh@10 -- # set +x 00:06:33.935 07:53:25 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:33.935 07:53:25 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:33.935 07:53:25 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:33.935 07:53:25 accel -- accel/accel.sh@41 -- # jq -r . 00:06:33.935 [2024-07-13 07:53:25.436785] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:06:33.935 [2024-07-13 07:53:25.436903] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1834098 ] 00:06:33.935 EAL: No free 2048 kB hugepages reported on node 1 00:06:33.935 [2024-07-13 07:53:25.496381] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.935 [2024-07-13 07:53:25.586198] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.193 07:53:25 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:34.193 07:53:25 accel -- common/autotest_common.sh@862 -- # return 0 00:06:34.193 07:53:25 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:06:34.193 07:53:25 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:06:34.193 07:53:25 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:06:34.193 07:53:25 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:06:34.193 07:53:25 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:34.193 07:53:25 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:06:34.193 07:53:25 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:34.193 07:53:25 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:34.193 07:53:25 accel -- common/autotest_common.sh@10 -- # set +x 00:06:34.193 07:53:25 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:34.193 07:53:25 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:34.193 07:53:25 accel -- accel/accel.sh@72 -- # IFS== 00:06:34.193 07:53:25 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:34.193 07:53:25 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:34.193 07:53:25 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:34.193 07:53:25 accel -- accel/accel.sh@72 -- # IFS== 00:06:34.193 07:53:25 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:34.193 07:53:25 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:34.193 07:53:25 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:34.193 07:53:25 accel -- accel/accel.sh@72 -- # IFS== 00:06:34.193 07:53:25 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:34.193 07:53:25 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:34.193 07:53:25 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:34.193 07:53:25 accel -- accel/accel.sh@72 -- # IFS== 00:06:34.193 07:53:25 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:34.193 07:53:25 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:34.193 07:53:25 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:34.193 07:53:25 accel -- accel/accel.sh@72 -- # IFS== 00:06:34.193 07:53:25 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:34.193 07:53:25 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:34.193 07:53:25 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:34.193 07:53:25 accel -- accel/accel.sh@72 -- # IFS== 00:06:34.193 07:53:25 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:34.193 07:53:25 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:34.193 07:53:25 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:34.193 07:53:25 accel -- accel/accel.sh@72 -- # IFS== 00:06:34.193 07:53:25 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:34.193 07:53:25 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:34.193 07:53:25 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:34.193 07:53:25 accel -- accel/accel.sh@72 -- # IFS== 00:06:34.193 07:53:25 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:34.193 07:53:25 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:34.193 07:53:25 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:34.193 07:53:25 accel -- accel/accel.sh@72 -- # IFS== 00:06:34.193 07:53:25 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:34.193 07:53:25 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:34.193 07:53:25 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:34.193 07:53:25 accel -- accel/accel.sh@72 -- # IFS== 00:06:34.193 07:53:25 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:34.193 07:53:25 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:34.193 07:53:25 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:34.193 07:53:25 accel -- accel/accel.sh@72 -- # IFS== 00:06:34.193 07:53:25 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:34.193 07:53:25 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:34.193 
07:53:25 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:34.193 07:53:25 accel -- accel/accel.sh@72 -- # IFS== 00:06:34.193 07:53:25 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:34.193 07:53:25 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:34.193 07:53:25 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:34.193 07:53:25 accel -- accel/accel.sh@72 -- # IFS== 00:06:34.193 07:53:25 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:34.193 07:53:25 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:34.193 07:53:25 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:34.193 07:53:25 accel -- accel/accel.sh@72 -- # IFS== 00:06:34.193 07:53:25 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:34.193 07:53:25 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:34.193 07:53:25 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:34.193 07:53:25 accel -- accel/accel.sh@72 -- # IFS== 00:06:34.193 07:53:25 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:34.193 07:53:25 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:34.193 07:53:25 accel -- accel/accel.sh@75 -- # killprocess 1834098 00:06:34.193 07:53:25 accel -- common/autotest_common.sh@948 -- # '[' -z 1834098 ']' 00:06:34.193 07:53:25 accel -- common/autotest_common.sh@952 -- # kill -0 1834098 00:06:34.193 07:53:25 accel -- common/autotest_common.sh@953 -- # uname 00:06:34.193 07:53:25 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:34.193 07:53:25 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1834098 00:06:34.193 07:53:25 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:34.194 07:53:25 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:34.194 07:53:25 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1834098' 00:06:34.194 killing process with pid 1834098 00:06:34.194 07:53:25 accel -- common/autotest_common.sh@967 -- # kill 1834098 00:06:34.194 07:53:25 accel -- common/autotest_common.sh@972 -- # wait 1834098 00:06:34.759 07:53:26 accel -- accel/accel.sh@76 -- # trap - ERR 00:06:34.759 07:53:26 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:06:34.759 07:53:26 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:34.759 07:53:26 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:34.759 07:53:26 accel -- common/autotest_common.sh@10 -- # set +x 00:06:34.759 07:53:26 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:06:34.759 07:53:26 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:34.759 07:53:26 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:06:34.759 07:53:26 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:34.759 07:53:26 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:34.759 07:53:26 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:34.759 07:53:26 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:34.759 07:53:26 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:34.759 07:53:26 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:06:34.759 07:53:26 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:06:34.759 07:53:26 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:34.759 07:53:26 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:06:34.759 07:53:26 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:34.759 07:53:26 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:34.759 07:53:26 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:34.759 07:53:26 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:34.759 07:53:26 accel -- common/autotest_common.sh@10 -- # set +x 00:06:34.759 ************************************ 00:06:34.759 START TEST accel_missing_filename 00:06:34.759 ************************************ 00:06:34.759 07:53:26 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:06:34.759 07:53:26 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:06:34.759 07:53:26 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:34.759 07:53:26 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:34.759 07:53:26 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:34.759 07:53:26 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:34.759 07:53:26 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:34.759 07:53:26 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:06:34.759 07:53:26 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:34.759 07:53:26 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:06:34.759 07:53:26 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:34.759 07:53:26 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:34.759 07:53:26 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:34.759 07:53:26 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:34.759 07:53:26 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:34.759 07:53:26 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:06:34.759 07:53:26 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:06:34.759 [2024-07-13 07:53:26.408792] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:06:34.759 [2024-07-13 07:53:26.408862] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1834266 ] 00:06:34.759 EAL: No free 2048 kB hugepages reported on node 1 00:06:34.759 [2024-07-13 07:53:26.474479] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.017 [2024-07-13 07:53:26.568970] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.017 [2024-07-13 07:53:26.631007] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:35.018 [2024-07-13 07:53:26.717730] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:06:35.310 A filename is required. 
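A filename is required. is the assertion target of accel_missing_filename: per the accel_perf option list printed later in this log, the compress workload takes its input from the file named by -l, and the test deliberately omits that flag. A hedged sketch of the well-formed call that the next case (accel_compress_verify) starts from, using the test file shipped in the tree:

./build/examples/accel_perf -t 1 -w compress -l ./test/accel/bib    # path relative to the spdk checkout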
00:06:35.310 07:53:26 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:06:35.310 07:53:26 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:35.310 07:53:26 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:06:35.310 07:53:26 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:06:35.310 07:53:26 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:06:35.310 07:53:26 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:35.310 00:06:35.310 real 0m0.412s 00:06:35.310 user 0m0.295s 00:06:35.310 sys 0m0.153s 00:06:35.310 07:53:26 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:35.310 07:53:26 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:06:35.310 ************************************ 00:06:35.310 END TEST accel_missing_filename 00:06:35.310 ************************************ 00:06:35.310 07:53:26 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:35.310 07:53:26 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:35.310 07:53:26 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:06:35.310 07:53:26 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:35.310 07:53:26 accel -- common/autotest_common.sh@10 -- # set +x 00:06:35.310 ************************************ 00:06:35.310 START TEST accel_compress_verify 00:06:35.310 ************************************ 00:06:35.310 07:53:26 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:35.310 07:53:26 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:06:35.310 07:53:26 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:35.310 07:53:26 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:35.310 07:53:26 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:35.310 07:53:26 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:35.310 07:53:26 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:35.310 07:53:26 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:35.310 07:53:26 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:35.310 07:53:26 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:35.310 07:53:26 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:35.310 07:53:26 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:35.310 07:53:26 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:35.310 07:53:26 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:35.310 07:53:26 
accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:35.310 07:53:26 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:35.310 07:53:26 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:06:35.310 [2024-07-13 07:53:26.872452] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:06:35.310 [2024-07-13 07:53:26.872520] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1834295 ] 00:06:35.310 EAL: No free 2048 kB hugepages reported on node 1 00:06:35.310 [2024-07-13 07:53:26.938136] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.569 [2024-07-13 07:53:27.029982] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.569 [2024-07-13 07:53:27.089283] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:35.569 [2024-07-13 07:53:27.167201] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:06:35.569 00:06:35.569 Compression does not support the verify option, aborting. 00:06:35.569 07:53:27 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:06:35.569 07:53:27 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:35.569 07:53:27 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:06:35.569 07:53:27 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:06:35.569 07:53:27 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:06:35.569 07:53:27 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:35.569 00:06:35.569 real 0m0.391s 00:06:35.569 user 0m0.275s 00:06:35.569 sys 0m0.150s 00:06:35.569 07:53:27 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:35.569 07:53:27 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:06:35.569 ************************************ 00:06:35.569 END TEST accel_compress_verify 00:06:35.569 ************************************ 00:06:35.569 07:53:27 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:35.569 07:53:27 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:35.569 07:53:27 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:35.569 07:53:27 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:35.569 07:53:27 accel -- common/autotest_common.sh@10 -- # set +x 00:06:35.569 ************************************ 00:06:35.569 START TEST accel_wrong_workload 00:06:35.569 ************************************ 00:06:35.569 07:53:27 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:06:35.569 07:53:27 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:06:35.569 07:53:27 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:35.569 07:53:27 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:35.569 07:53:27 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:35.569 07:53:27 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:35.569 07:53:27 accel.accel_wrong_workload -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:35.569 07:53:27 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:06:35.569 07:53:27 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:35.569 07:53:27 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:06:35.569 07:53:27 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:35.569 07:53:27 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:35.569 07:53:27 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:35.569 07:53:27 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:35.569 07:53:27 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:35.569 07:53:27 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:06:35.569 07:53:27 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:06:35.569 Unsupported workload type: foobar 00:06:35.569 [2024-07-13 07:53:27.301606] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:35.828 accel_perf options: 00:06:35.828 [-h help message] 00:06:35.828 [-q queue depth per core] 00:06:35.828 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:35.828 [-T number of threads per core 00:06:35.828 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:35.828 [-t time in seconds] 00:06:35.828 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:35.828 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:35.828 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:35.828 [-l for compress/decompress workloads, name of uncompressed input file 00:06:35.828 [-S for crc32c workload, use this seed value (default 0) 00:06:35.828 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:35.828 [-f for fill workload, use this BYTE value (default 255) 00:06:35.828 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:35.828 [-y verify result if this switch is on] 00:06:35.828 [-a tasks to allocate per core (default: same value as -q)] 00:06:35.828 Can be used to spread operations across a wider range of memory. 
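The usage dump above is accel_perf rejecting -w foobar, and the option list doubles as the reference for every invocation in this file. A well-formed example matching the accel_crc32c case further down:

./build/examples/accel_perf -t 1 -w crc32c -S 32 -y
#   -w crc32c   one of the workload types from the list above
#   -S 32       seed value for the crc32c computation
#   -y          verify the result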
00:06:35.828 07:53:27 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:06:35.828 07:53:27 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:35.828 07:53:27 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:35.828 07:53:27 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:35.828 00:06:35.828 real 0m0.020s 00:06:35.828 user 0m0.015s 00:06:35.828 sys 0m0.005s 00:06:35.828 07:53:27 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:35.828 07:53:27 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:06:35.828 ************************************ 00:06:35.828 END TEST accel_wrong_workload 00:06:35.828 ************************************ 00:06:35.828 Error: writing output failed: Broken pipe 00:06:35.828 07:53:27 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:35.828 07:53:27 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:35.828 07:53:27 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:06:35.828 07:53:27 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:35.828 07:53:27 accel -- common/autotest_common.sh@10 -- # set +x 00:06:35.828 ************************************ 00:06:35.828 START TEST accel_negative_buffers 00:06:35.828 ************************************ 00:06:35.828 07:53:27 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:35.828 07:53:27 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:06:35.828 07:53:27 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:35.828 07:53:27 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:35.828 07:53:27 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:35.828 07:53:27 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:35.828 07:53:27 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:35.828 07:53:27 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:06:35.828 07:53:27 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:35.828 07:53:27 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:06:35.828 07:53:27 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:35.828 07:53:27 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:35.828 07:53:27 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:35.828 07:53:27 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:35.828 07:53:27 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:35.828 07:53:27 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:06:35.828 07:53:27 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:06:35.828 -x option must be non-negative. 
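accel_negative_buffers trips the same argument validation: -x -1 fails the non-negative check before any work is submitted, and per the help text the xor workload needs at least two source buffers anyway. The smallest legal form would be:

./build/examples/accel_perf -t 1 -w xor -y -x 2    # hedged: minimum source-buffer count per the help text

The stray Error: writing output failed: Broken pipe lines around these negative cases most likely come from accel_perf finishing its usage printout after the harness has already closed the pipe; they are noise, not an additional failure.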
00:06:35.828 [2024-07-13 07:53:27.374707] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:35.828 accel_perf options: 00:06:35.828 [-h help message] 00:06:35.828 [-q queue depth per core] 00:06:35.828 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:35.828 [-T number of threads per core 00:06:35.828 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:35.828 [-t time in seconds] 00:06:35.828 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:35.828 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:35.828 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:35.828 [-l for compress/decompress workloads, name of uncompressed input file 00:06:35.828 [-S for crc32c workload, use this seed value (default 0) 00:06:35.828 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:35.828 [-f for fill workload, use this BYTE value (default 255) 00:06:35.828 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:35.828 [-y verify result if this switch is on] 00:06:35.828 [-a tasks to allocate per core (default: same value as -q)] 00:06:35.828 Can be used to spread operations across a wider range of memory. 00:06:35.828 07:53:27 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:06:35.828 07:53:27 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:35.828 07:53:27 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:35.828 07:53:27 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:35.828 00:06:35.828 real 0m0.024s 00:06:35.828 user 0m0.016s 00:06:35.828 sys 0m0.008s 00:06:35.828 07:53:27 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:35.828 07:53:27 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:06:35.828 ************************************ 00:06:35.828 END TEST accel_negative_buffers 00:06:35.828 ************************************ 00:06:35.828 Error: writing output failed: Broken pipe 00:06:35.828 07:53:27 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:35.828 07:53:27 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:35.828 07:53:27 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:35.828 07:53:27 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:35.828 07:53:27 accel -- common/autotest_common.sh@10 -- # set +x 00:06:35.828 ************************************ 00:06:35.828 START TEST accel_crc32c 00:06:35.828 ************************************ 00:06:35.828 07:53:27 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:35.828 07:53:27 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:35.828 07:53:27 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:35.828 07:53:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:35.828 07:53:27 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:35.828 07:53:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:35.828 07:53:27 accel.accel_crc32c -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:35.828 07:53:27 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:35.828 07:53:27 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:35.828 07:53:27 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:35.828 07:53:27 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:35.828 07:53:27 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:35.828 07:53:27 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:35.828 07:53:27 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:35.828 07:53:27 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:35.828 [2024-07-13 07:53:27.440672] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:06:35.828 [2024-07-13 07:53:27.440737] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1834480 ] 00:06:35.828 EAL: No free 2048 kB hugepages reported on node 1 00:06:35.828 [2024-07-13 07:53:27.497182] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.087 [2024-07-13 07:53:27.584009] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.087 07:53:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:36.087 07:53:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:36.087 07:53:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:36.087 07:53:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:36.087 07:53:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:36.087 07:53:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:36.087 07:53:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:36.087 07:53:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:36.087 07:53:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:36.087 07:53:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:36.087 07:53:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:36.087 07:53:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:36.087 07:53:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:36.087 07:53:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:36.087 07:53:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:36.087 07:53:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:36.087 07:53:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:36.087 07:53:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:36.087 07:53:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:36.087 07:53:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:36.087 07:53:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:06:36.087 07:53:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:36.087 07:53:27 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:36.087 07:53:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:36.087 07:53:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:36.087 07:53:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:36.087 07:53:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case 
"$var" in 00:06:36.087 07:53:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:36.087 07:53:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:36.087 07:53:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:36.087 07:53:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:36.087 07:53:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:36.087 07:53:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:36.087 07:53:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:36.087 07:53:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:36.087 07:53:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:36.087 07:53:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:36.087 07:53:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:06:36.087 07:53:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:36.087 07:53:27 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:36.087 07:53:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:36.087 07:53:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:36.087 07:53:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:36.087 07:53:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:36.087 07:53:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:36.087 07:53:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:36.087 07:53:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:36.087 07:53:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:36.087 07:53:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:36.087 07:53:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:36.087 07:53:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:06:36.087 07:53:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:36.087 07:53:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:36.087 07:53:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:36.087 07:53:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:36.087 07:53:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:36.087 07:53:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:36.087 07:53:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:36.087 07:53:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:36.087 07:53:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:36.087 07:53:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:36.087 07:53:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:36.087 07:53:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:36.087 07:53:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:36.087 07:53:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:36.087 07:53:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:36.087 07:53:27 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:36.087 07:53:27 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:36.087 07:53:27 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:36.087 07:53:27 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:37.461 07:53:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:37.461 07:53:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 
00:06:37.461 07:53:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:37.461 07:53:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:37.461 07:53:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:37.461 07:53:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:37.461 07:53:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:37.461 07:53:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:37.461 07:53:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:37.461 07:53:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:37.461 07:53:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:37.461 07:53:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:37.461 07:53:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:37.461 07:53:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:37.461 07:53:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:37.461 07:53:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:37.461 07:53:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:37.461 07:53:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:37.461 07:53:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:37.461 07:53:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:37.461 07:53:28 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:37.461 07:53:28 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:37.461 07:53:28 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:37.461 07:53:28 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:37.461 07:53:28 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:37.461 07:53:28 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:37.461 07:53:28 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:37.461 00:06:37.461 real 0m1.391s 00:06:37.461 user 0m1.260s 00:06:37.461 sys 0m0.134s 00:06:37.461 07:53:28 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:37.461 07:53:28 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:37.461 ************************************ 00:06:37.461 END TEST accel_crc32c 00:06:37.461 ************************************ 00:06:37.461 07:53:28 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:37.461 07:53:28 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:37.461 07:53:28 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:37.461 07:53:28 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:37.461 07:53:28 accel -- common/autotest_common.sh@10 -- # set +x 00:06:37.461 ************************************ 00:06:37.461 START TEST accel_crc32c_C2 00:06:37.461 ************************************ 00:06:37.461 07:53:28 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:37.461 07:53:28 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:37.461 07:53:28 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:37.461 07:53:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:37.461 07:53:28 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:37.461 07:53:28 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:37.461 07:53:28 accel.accel_crc32c_C2 
-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:37.461 07:53:28 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:37.461 07:53:28 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:37.461 07:53:28 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:37.461 07:53:28 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:37.461 07:53:28 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:37.461 07:53:28 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:37.461 07:53:28 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:37.461 07:53:28 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:37.461 [2024-07-13 07:53:28.875134] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:06:37.461 [2024-07-13 07:53:28.875207] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1834637 ] 00:06:37.462 EAL: No free 2048 kB hugepages reported on node 1 00:06:37.462 [2024-07-13 07:53:28.935479] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.462 [2024-07-13 07:53:29.028529] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.462 07:53:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:37.462 07:53:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.462 07:53:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:37.462 07:53:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:37.462 07:53:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:37.462 07:53:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.462 07:53:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:37.462 07:53:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:37.462 07:53:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:37.462 07:53:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.462 07:53:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:37.462 07:53:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:37.462 07:53:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:37.462 07:53:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.462 07:53:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:37.462 07:53:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:37.462 07:53:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:37.462 07:53:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.462 07:53:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:37.462 07:53:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:37.462 07:53:29 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:06:37.462 07:53:29 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:37.462 07:53:29 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:37.462 07:53:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:37.462 07:53:29 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:37.462 07:53:29 
accel.accel_crc32c_C2 [xtrace condensed: accel.sh@19-22 config-parse loop reads the remaining accel_perf settings: crc32c seed 0, '4096 bytes', module software, 32, 32, 1, '1 seconds', verify Yes]
00:06:38.836 07:53:30 accel.accel_crc32c_C2 [xtrace condensed: parse loop drains trailing empty values while the 1-second run completes]
00:06:38.836 07:53:30 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:38.836 07:53:30 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]]
00:06:38.836 07:53:30 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:38.836 real 0m1.404s
00:06:38.836 user 0m1.266s
00:06:38.836 sys 0m0.141s
00:06:38.836 07:53:30 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:38.836 07:53:30 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x
00:06:38.836 ************************************
00:06:38.836 END TEST accel_crc32c_C2
00:06:38.836 ************************************
00:06:38.836 07:53:30 accel -- common/autotest_common.sh@1142 -- # return 0
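Note on the condensed trace: the repeated IFS=: / read -r var val / case "$var" records are accel.sh scraping accel_perf's "Key: value" output for the fields it asserts on at accel.sh@27. A minimal bash sketch of that pattern, reconstructed from the xtrace rather than copied from the SPDK source (the helper name and the exact case patterns are assumptions):

    # Reconstructed from the xtrace above; not the verbatim accel.sh source.
    # accel_perf prints one "Key: value" line per setting; split each line on
    # ':' and keep the two fields the harness later asserts on.
    parse_accel_perf_output() {    # hypothetical helper name
        local var val accel_module='' accel_opc=''
        while IFS=: read -r var val; do
            case "$var" in
                *Module*) accel_module=${val# } ;;            # e.g. "software"
                *'Workload Type'*) accel_opc=${val# } ;;      # e.g. "crc32c"
            esac
        done
        # the accel.sh@27 assertions seen after every run:
        [[ -n $accel_module && -n $accel_opc && $accel_module == software ]]
    }
    # usage: build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y | parse_accel_perf_output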
00:06:38.836 07:53:30 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y
00:06:38.836 ************************************
00:06:38.836 START TEST accel_copy
00:06:38.836 ************************************
00:06:38.836 07:53:30 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y
00:06:38.836 07:53:30 accel.accel_copy [xtrace condensed: accel.sh@12-41 build_accel_config initializes an empty accel JSON config and pipes it through jq -r .]
00:06:38.836 [2024-07-13 07:53:30.326752] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization...
00:06:38.836 [2024-07-13 07:53:30.326817] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1834790 ]
00:06:38.836 EAL: No free 2048 kB hugepages reported on node 1
00:06:38.836 [2024-07-13 07:53:30.389383] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:38.836 [2024-07-13 07:53:30.482957] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:38.837 07:53:30 accel.accel_copy [xtrace condensed: accel.sh@19-23 config-parse loop reads core mask 0x1, workload copy, '4096 bytes', module software, 32, 32, 1, '1 seconds', verify Yes]
00:06:40.211 07:53:31 accel.accel_copy [xtrace condensed: parse loop drains trailing empty values while the 1-second run completes]
00:06:40.211 07:53:31 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:40.211 07:53:31 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]]
00:06:40.211 07:53:31 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:40.211 real 0m1.399s
00:06:40.211 user 0m1.246s
00:06:40.211 sys 0m0.155s
00:06:40.211 07:53:31 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:40.211 07:53:31 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x
00:06:40.211 ************************************
00:06:40.211 END TEST accel_copy
00:06:40.211 ************************************
00:06:40.211 07:53:31 accel -- common/autotest_common.sh@1142 -- # return 0
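Note on the banners and timing triplets: each sub-test is launched through the harness's run_test wrapper, which prints the START/END banners and times the test function; the real/user/sys lines above come from that timing. A simplified sketch of the behavior visible in this log (the actual run_test in common/autotest_common.sh also handles xtrace toggling and return-code bookkeeping):

    # Simplified from the banner/timing pattern in this log; not the full
    # common/autotest_common.sh implementation.
    run_test() {
        local test_name=$1; shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"    # source of the real/user/sys lines in this log
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
    }
    run_test accel_copy accel_test -t 1 -w copy -y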
00:06:40.211 07:53:31 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y
00:06:40.211 ************************************
00:06:40.211 START TEST accel_fill
00:06:40.211 ************************************
00:06:40.211 07:53:31 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y
00:06:40.211 07:53:31 accel.accel_fill [xtrace condensed: accel.sh@12-41 build_accel_config initializes an empty accel JSON config and pipes it through jq -r .]
00:06:40.211 [2024-07-13 07:53:31.783328] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization...
00:06:40.211 [2024-07-13 07:53:31.783388] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1835076 ]
00:06:40.211 EAL: No free 2048 kB hugepages reported on node 1
00:06:40.211 [2024-07-13 07:53:31.846142] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:40.211 [2024-07-13 07:53:31.939378] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:40.470 07:53:32 accel.accel_fill [xtrace condensed: accel.sh@19-23 config-parse loop reads core mask 0x1, workload fill, fill byte 0x80, '4096 bytes', module software, 64, 64, 1, '1 seconds', verify Yes]
00:06:41.842 07:53:33 accel.accel_fill [xtrace condensed: parse loop drains trailing empty values while the 1-second run completes]
00:06:41.842 07:53:33 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:41.842 07:53:33 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]]
00:06:41.842 07:53:33 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:41.842 real 0m1.410s
00:06:41.842 user 0m1.257s
00:06:41.842 sys 0m0.155s
00:06:41.842 07:53:33 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:41.842 07:53:33 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x
00:06:41.842 ************************************
00:06:41.842 END TEST accel_fill
00:06:41.842 ************************************
00:06:41.842 07:53:33 accel -- common/autotest_common.sh@1142 -- # return 0
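Note on the fill invocation: it is the only workload above that passes extra flags. The command line itself is copied from the log; the flag glosses below are inferences from the parsed config (128 == 0x80, queue depth 64), not taken from SPDK documentation:

    # Invocation from the log; glosses are assumptions, checked only against
    # the values the parse loop echoed back.
    #   -t 1    run for '1 seconds' (matches the parsed config)
    #   -w fill workload type
    #   -f 128  fill byte; the parsed config shows 0x80
    #   -q 64   queue depth; the parsed config shows 64
    #   -a 64   companion allocation value; exact meaning not visible in this log
    #   -y      verify the result
    ./build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y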
00:06:41.842 07:53:33 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y
00:06:41.842 ************************************
00:06:41.842 START TEST accel_copy_crc32c
00:06:41.842 ************************************
00:06:41.842 07:53:33 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y
00:06:41.842 07:53:33 accel.accel_copy_crc32c [xtrace condensed: accel.sh@12-41 build_accel_config initializes an empty accel JSON config and pipes it through jq -r .]
00:06:41.842 [2024-07-13 07:53:33.237519] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization...
00:06:41.842 [2024-07-13 07:53:33.237586] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1835239 ]
00:06:41.842 EAL: No free 2048 kB hugepages reported on node 1
00:06:41.842 [2024-07-13 07:53:33.297843] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:41.842 [2024-07-13 07:53:33.390958] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:41.842 07:53:33 accel.accel_copy_crc32c [xtrace condensed: accel.sh@19-23 config-parse loop reads core mask 0x1, workload copy_crc32c, seed 0, '4096 bytes', '4096 bytes', module software, 32, 32, 1, '1 seconds', verify Yes]
00:06:43.214 07:53:34 accel.accel_copy_crc32c [xtrace condensed: parse loop drains trailing empty values while the 1-second run completes]
00:06:43.214 07:53:34 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:43.214 07:53:34 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]]
00:06:43.214 07:53:34 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:43.214 real 0m1.401s
00:06:43.214 user 0m1.258s
00:06:43.214 sys 0m0.146s
00:06:43.214 07:53:34 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:43.214 07:53:34 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x
00:06:43.214 ************************************
00:06:43.214 END TEST accel_copy_crc32c
00:06:43.214 ************************************
00:06:43.214 07:53:34 accel -- common/autotest_common.sh@1142 -- # return 0
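Note on the recurring "EAL: No free 2048 kB hugepages reported on node 1" line: it is informational here (NUMA node 1 simply has no 2 MB pages reserved) and the tests proceed on node 0. Hugepage availability on the build host can be checked with standard kernel interfaces, nothing SPDK-specific:

    # Per-node 2 MB hugepage pools and overall hugepage accounting.
    grep Huge /proc/meminfo
    cat /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages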
00:06:43.214 07:53:34 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2
00:06:43.214 ************************************
00:06:43.214 START TEST accel_copy_crc32c_C2
00:06:43.214 ************************************
00:06:43.214 07:53:34 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2
00:06:43.214 07:53:34 accel.accel_copy_crc32c_C2 [xtrace condensed: accel.sh@12-41 build_accel_config initializes an empty accel JSON config and pipes it through jq -r .]
00:06:43.214 [2024-07-13 07:53:34.681316] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization...
00:06:43.214 [2024-07-13 07:53:34.681384] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1835396 ]
00:06:43.214 EAL: No free 2048 kB hugepages reported on node 1
00:06:43.214 [2024-07-13 07:53:34.741770] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:43.214 [2024-07-13 07:53:34.835267] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:43.215 07:53:34 accel.accel_copy_crc32c_C2 [xtrace condensed: accel.sh@19-23 config-parse loop reads core mask 0x1, workload copy_crc32c, seed 0, '4096 bytes', '8192 bytes', module software, 32, 32, 1, '1 seconds', verify Yes]
00:06:44.590 07:53:36 accel.accel_copy_crc32c_C2 [xtrace condensed: parse loop drains trailing empty values while the 1-second run completes]
00:06:44.590 07:53:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:44.590 07:53:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]]
00:06:44.590 07:53:36 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:44.590 real 0m1.403s
00:06:44.590 user 0m1.256s
00:06:44.590 sys 0m0.150s
00:06:44.590 07:53:36 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:44.590 07:53:36 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x
00:06:44.590 ************************************
00:06:44.590 END TEST accel_copy_crc32c_C2
00:06:44.590 ************************************
00:06:44.590 07:53:36 accel -- common/autotest_common.sh@1142 -- # return 0
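Note on -C 2: it is the only difference from the plain accel_copy_crc32c run, and the parsed config correspondingly shows both a '4096 bytes' and an '8192 bytes' value. A plausible reading, offered as an assumption rather than documented SPDK behavior, is that -C chains two 4096-byte source buffers into a single 8192-byte CRC computation:

    # Command copied from the log; the -C 2 gloss above is an assumption, and
    # the 4096/8192 pairing is the only evidence for it visible in this log.
    ./build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2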
00:06:44.590 07:53:36 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y
00:06:44.590 ************************************
00:06:44.590 START TEST accel_dualcast
00:06:44.590 ************************************
00:06:44.590 07:53:36 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y
00:06:44.590 07:53:36 accel.accel_dualcast [xtrace condensed: accel.sh@12-41 build_accel_config initializes an empty accel JSON config and pipes it through jq -r .]
00:06:44.590 [2024-07-13 07:53:36.135134] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization...
00:06:44.590 [2024-07-13 07:53:36.135208] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1835582 ]
00:06:44.590 EAL: No free 2048 kB hugepages reported on node 1
00:06:44.590 [2024-07-13 07:53:36.198177] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:44.590 [2024-07-13 07:53:36.290074] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:44.849 07:53:36 accel.accel_dualcast [xtrace condensed: accel.sh@19-23 config-parse loop reads core mask 0x1, workload dualcast, '4096 bytes', module software, 32, 32, 1, '1 seconds', verify Yes]
00:06:45.856 07:53:37 accel.accel_dualcast [xtrace condensed: parse loop drains trailing empty values while the 1-second run completes]
00:06:45.856 07:53:37 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]]
00:06:45.856 07:53:37 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]]
00:06:45.856 07:53:37 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:06:45.856 real 0m1.393s
00:06:45.856 user 0m1.254s
00:06:45.856 sys 0m0.140s
00:06:45.856 07:53:37 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:45.856 07:53:37 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x
00:06:45.856 ************************************
00:06:45.856 END TEST accel_dualcast
00:06:45.856 ************************************
00:06:45.856 07:53:37 accel -- common/autotest_common.sh@1142 -- # return 0
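Note on the sequence of runs: the accel.sh@103 through @108 markers show one run_test call per workload, each with the same one-second verified configuration. An equivalent loop form, for orientation only (accel.sh itself enumerates the calls line by line, and the fill test carries extra flags that a uniform loop would omit):

    # Equivalent loop form of the accel.sh@103-@108 run_test lines seen above;
    # illustrative only, not how accel.sh is written.
    for w in copy fill copy_crc32c dualcast compare; do
        run_test "accel_$w" accel_test -t 1 -w "$w" -y
    done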
00:06:45.856 [2024-07-13 07:53:37.577561] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1835825 ] 00:06:46.114 EAL: No free 2048 kB hugepages reported on node 1 00:06:46.114 [2024-07-13 07:53:37.639816] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.114 [2024-07-13 07:53:37.733098] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.114 07:53:37 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:46.114 07:53:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:46.114 07:53:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:46.114 07:53:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:46.114 07:53:37 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:46.114 07:53:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:46.114 07:53:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:46.115 07:53:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:46.115 07:53:37 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:06:46.115 07:53:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:46.115 07:53:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:46.115 07:53:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:46.115 07:53:37 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:46.115 07:53:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:46.115 07:53:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:46.115 07:53:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:46.115 07:53:37 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:46.115 07:53:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:46.115 07:53:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:46.115 07:53:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:46.115 07:53:37 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:06:46.115 07:53:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:46.115 07:53:37 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:06:46.115 07:53:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:46.115 07:53:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:46.115 07:53:37 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:46.115 07:53:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:46.115 07:53:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:46.115 07:53:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:46.115 07:53:37 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:46.115 07:53:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:46.115 07:53:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:46.115 07:53:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:46.115 07:53:37 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:06:46.115 07:53:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:46.115 07:53:37 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:06:46.115 07:53:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:46.115 07:53:37 
accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:46.115 07:53:37 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:46.115 07:53:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:46.115 07:53:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:46.115 07:53:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:46.115 07:53:37 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:46.115 07:53:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:46.115 07:53:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:46.115 07:53:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:46.115 07:53:37 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:06:46.115 07:53:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:46.115 07:53:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:46.115 07:53:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:46.115 07:53:37 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:06:46.115 07:53:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:46.115 07:53:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:46.115 07:53:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:46.115 07:53:37 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:06:46.115 07:53:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:46.115 07:53:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:46.115 07:53:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:46.115 07:53:37 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:46.115 07:53:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:46.115 07:53:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:46.115 07:53:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:46.115 07:53:37 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:46.115 07:53:37 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:46.115 07:53:37 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:46.115 07:53:37 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:47.490 07:53:38 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:47.490 07:53:38 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:47.490 07:53:38 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:47.490 07:53:38 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:47.490 07:53:38 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:47.490 07:53:38 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:47.490 07:53:38 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:47.490 07:53:38 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:47.490 07:53:38 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:47.490 07:53:38 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:47.490 07:53:38 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:47.490 07:53:38 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:47.490 07:53:38 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:47.490 07:53:38 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:47.490 07:53:38 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:47.490 07:53:38 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:47.490 
07:53:38 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:47.490 07:53:38 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:47.490 07:53:38 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:47.490 07:53:38 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:47.490 07:53:38 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:47.490 07:53:38 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:47.490 07:53:38 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:47.490 07:53:38 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:47.490 07:53:38 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:47.490 07:53:38 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:06:47.490 07:53:38 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:47.490 00:06:47.490 real 0m1.410s 00:06:47.490 user 0m1.267s 00:06:47.490 sys 0m0.146s 00:06:47.490 07:53:38 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:47.490 07:53:38 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:06:47.490 ************************************ 00:06:47.490 END TEST accel_compare 00:06:47.490 ************************************ 00:06:47.490 07:53:38 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:47.490 07:53:38 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:47.490 07:53:38 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:47.490 07:53:38 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:47.490 07:53:38 accel -- common/autotest_common.sh@10 -- # set +x 00:06:47.490 ************************************ 00:06:47.490 START TEST accel_xor 00:06:47.490 ************************************ 00:06:47.490 07:53:39 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:06:47.490 07:53:39 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:47.490 07:53:39 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:47.490 07:53:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:47.490 07:53:39 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:47.490 07:53:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:47.490 07:53:39 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:47.490 07:53:39 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:47.490 07:53:39 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:47.490 07:53:39 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:47.490 07:53:39 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:47.490 07:53:39 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:47.490 07:53:39 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:47.490 07:53:39 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:47.490 07:53:39 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:47.490 [2024-07-13 07:53:39.033008] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
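The repeating "IFS=: / read -r var val / case "$var" in" lines that dominate this trace are the harness parsing accel_perf's "key: value" status output to record which module and opcode actually ran — hence the closing checks [[ -n software ]] and [[ -n compare ]] just above. A simplified sketch of that parsing pattern (the case patterns and file name are assumptions, not copied from accel.sh):

    # split each "key: value" line on ':' and capture the fields of interest
    while IFS=: read -r var val; do
        case "$var" in
            *"Module"*)        accel_module=${val# } ;;   # trim the space after ':'
            *"Workload Type"*) accel_opc=${val# } ;;
        esac
    done < perf_output.txt
    [[ -n $accel_module ]] && [[ -n $accel_opc ]]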
00:06:47.490 [2024-07-13 07:53:39.033072] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1835984 ] 00:06:47.490 EAL: No free 2048 kB hugepages reported on node 1 00:06:47.490 [2024-07-13 07:53:39.097671] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.490 [2024-07-13 07:53:39.191661] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.749 07:53:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:47.749 07:53:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:47.749 07:53:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:47.749 07:53:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:47.749 07:53:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:47.749 07:53:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:47.749 07:53:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:47.749 07:53:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:47.749 07:53:39 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:47.749 07:53:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:47.749 07:53:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:47.749 07:53:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:47.749 07:53:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:47.749 07:53:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:47.749 07:53:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:47.749 07:53:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:47.749 07:53:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:47.749 07:53:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:47.749 07:53:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:47.749 07:53:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:47.749 07:53:39 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:47.749 07:53:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:47.749 07:53:39 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:47.749 07:53:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:47.749 07:53:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:47.749 07:53:39 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:06:47.749 07:53:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:47.749 07:53:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:47.749 07:53:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:47.749 07:53:39 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:47.749 07:53:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:47.749 07:53:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:47.749 07:53:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:47.749 07:53:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:47.749 07:53:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:47.749 07:53:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:47.749 07:53:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:47.749 07:53:39 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:47.749 07:53:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:47.749 07:53:39 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:06:47.749 07:53:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:47.749 07:53:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:47.749 07:53:39 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:47.749 07:53:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:47.749 07:53:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:47.749 07:53:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:47.749 07:53:39 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:47.749 07:53:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:47.749 07:53:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:47.749 07:53:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:47.749 07:53:39 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:47.750 07:53:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:47.750 07:53:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:47.750 07:53:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:47.750 07:53:39 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:47.750 07:53:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:47.750 07:53:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:47.750 07:53:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:47.750 07:53:39 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:47.750 07:53:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:47.750 07:53:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:47.750 07:53:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:47.750 07:53:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:47.750 07:53:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:47.750 07:53:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:47.750 07:53:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:47.750 07:53:39 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:47.750 07:53:39 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:47.750 07:53:39 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:47.750 07:53:39 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:49.124 07:53:40 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:49.124 07:53:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:49.124 07:53:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:49.124 07:53:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:49.124 07:53:40 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:49.124 07:53:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:49.124 07:53:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:49.124 07:53:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:49.124 07:53:40 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:49.124 07:53:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:49.124 07:53:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:49.124 07:53:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:49.124 07:53:40 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:49.124 07:53:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:49.124 07:53:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:49.124 07:53:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:49.124 07:53:40 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:06:49.124 07:53:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:49.124 07:53:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:49.124 07:53:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:49.124 07:53:40 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:49.124 07:53:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:49.124 07:53:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:49.124 07:53:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:49.124 07:53:40 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:49.124 07:53:40 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:49.124 07:53:40 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:49.124 00:06:49.124 real 0m1.411s 00:06:49.124 user 0m1.263s 00:06:49.124 sys 0m0.150s 00:06:49.124 07:53:40 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:49.124 07:53:40 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:49.124 ************************************ 00:06:49.124 END TEST accel_xor 00:06:49.124 ************************************ 00:06:49.124 07:53:40 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:49.124 07:53:40 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:49.124 07:53:40 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:49.125 07:53:40 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:49.125 07:53:40 accel -- common/autotest_common.sh@10 -- # set +x 00:06:49.125 ************************************ 00:06:49.125 START TEST accel_xor 00:06:49.125 ************************************ 00:06:49.125 07:53:40 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:06:49.125 07:53:40 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:49.125 07:53:40 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:49.125 07:53:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:49.125 07:53:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:49.125 07:53:40 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:49.125 07:53:40 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:49.125 07:53:40 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:49.125 07:53:40 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:49.125 07:53:40 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:49.125 07:53:40 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:49.125 07:53:40 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:49.125 07:53:40 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:49.125 07:53:40 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:49.125 07:53:40 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:49.125 [2024-07-13 07:53:40.488806] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
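This second accel_xor case repeats the workload with -x 3. The first xor run read val=2 and this one reads val=3, consistent with -x selecting the number of XOR source buffers (two being the default). The two invocations as traced:

    ./build/examples/accel_perf -t 1 -w xor -y        # two source buffers (default)
    ./build/examples/accel_perf -t 1 -w xor -y -x 3   # three source buffers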
00:06:49.125 [2024-07-13 07:53:40.488876] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1836135 ] 00:06:49.125 EAL: No free 2048 kB hugepages reported on node 1 00:06:49.125 [2024-07-13 07:53:40.549661] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.125 [2024-07-13 07:53:40.645009] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.125 07:53:40 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:49.125 07:53:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:49.125 07:53:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:49.125 07:53:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:49.125 07:53:40 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:49.125 07:53:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:49.125 07:53:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:49.125 07:53:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:49.125 07:53:40 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:49.125 07:53:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:49.125 07:53:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:49.125 07:53:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:49.125 07:53:40 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:49.125 07:53:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:49.125 07:53:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:49.125 07:53:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:49.125 07:53:40 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:49.125 07:53:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:49.125 07:53:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:49.125 07:53:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:49.125 07:53:40 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:49.125 07:53:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:49.125 07:53:40 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:49.125 07:53:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:49.125 07:53:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:49.125 07:53:40 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:06:49.125 07:53:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:49.125 07:53:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:49.125 07:53:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:49.125 07:53:40 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:49.125 07:53:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:49.125 07:53:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:49.125 07:53:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:49.125 07:53:40 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:49.125 07:53:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:49.125 07:53:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:49.125 07:53:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:49.125 07:53:40 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:49.125 07:53:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:49.125 07:53:40 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:06:49.125 07:53:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:49.125 07:53:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:49.125 07:53:40 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:49.125 07:53:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:49.125 07:53:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:49.125 07:53:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:49.125 07:53:40 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:49.125 07:53:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:49.125 07:53:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:49.125 07:53:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:49.125 07:53:40 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:49.125 07:53:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:49.125 07:53:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:49.125 07:53:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:49.125 07:53:40 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:49.125 07:53:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:49.125 07:53:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:49.125 07:53:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:49.125 07:53:40 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:49.125 07:53:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:49.125 07:53:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:49.125 07:53:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:49.125 07:53:40 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:49.125 07:53:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:49.125 07:53:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:49.125 07:53:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:49.125 07:53:40 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:49.125 07:53:40 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:49.125 07:53:40 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:49.125 07:53:40 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:50.499 07:53:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:50.499 07:53:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:50.499 07:53:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:50.499 07:53:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:50.499 07:53:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:50.499 07:53:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:50.499 07:53:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:50.499 07:53:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:50.499 07:53:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:50.499 07:53:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:50.499 07:53:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:50.499 07:53:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:50.499 07:53:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:50.499 07:53:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:50.499 07:53:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:50.499 07:53:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:50.499 07:53:41 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:06:50.499 07:53:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:50.499 07:53:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:50.499 07:53:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:50.499 07:53:41 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:50.499 07:53:41 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:50.499 07:53:41 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:50.499 07:53:41 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:50.499 07:53:41 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:50.499 07:53:41 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:50.499 07:53:41 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:50.499 00:06:50.499 real 0m1.404s 00:06:50.499 user 0m1.262s 00:06:50.499 sys 0m0.144s 00:06:50.499 07:53:41 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:50.499 07:53:41 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:50.499 ************************************ 00:06:50.499 END TEST accel_xor 00:06:50.499 ************************************ 00:06:50.499 07:53:41 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:50.499 07:53:41 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:50.499 07:53:41 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:50.499 07:53:41 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:50.499 07:53:41 accel -- common/autotest_common.sh@10 -- # set +x 00:06:50.499 ************************************ 00:06:50.499 START TEST accel_dif_verify 00:06:50.499 ************************************ 00:06:50.499 07:53:41 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:06:50.499 07:53:41 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:06:50.499 07:53:41 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:06:50.499 07:53:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:50.499 07:53:41 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:50.499 07:53:41 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:50.499 07:53:41 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:50.499 07:53:41 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:50.499 07:53:41 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:50.499 07:53:41 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:50.499 07:53:41 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:50.499 07:53:41 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:50.499 07:53:41 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:50.499 07:53:41 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:50.499 07:53:41 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:06:50.499 [2024-07-13 07:53:41.938781] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
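The dif_verify trace below reads two '4096 bytes' values plus '512 bytes' and '8 bytes'. A plausible reading — an assumption, since the trace does not label these fields — is a 4096-byte transfer split into 512-byte blocks, each block carrying an 8-byte DIF protection tuple:

    # under that assumed reading: blocks per transfer and total DIF metadata
    echo $(( 4096 / 512 ))        # 8 blocks
    echo $(( (4096 / 512) * 8 ))  # 64 bytes of protection information per transfer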
00:06:50.499 [2024-07-13 07:53:41.938849] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1836413 ] 00:06:50.499 EAL: No free 2048 kB hugepages reported on node 1 00:06:50.499 [2024-07-13 07:53:41.999557] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.499 [2024-07-13 07:53:42.090076] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.499 07:53:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:50.499 07:53:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:50.499 07:53:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:50.499 07:53:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:50.499 07:53:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:50.499 07:53:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:50.499 07:53:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:50.499 07:53:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:50.499 07:53:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:06:50.499 07:53:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:50.500 07:53:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:50.500 07:53:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:50.500 07:53:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:50.500 07:53:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:50.500 07:53:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:50.500 07:53:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:50.500 07:53:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:50.500 07:53:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:50.500 07:53:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:50.500 07:53:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:50.500 07:53:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:06:50.500 07:53:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:50.500 07:53:42 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:06:50.500 07:53:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:50.500 07:53:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:50.500 07:53:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:50.500 07:53:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:50.500 07:53:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:50.500 07:53:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:50.500 07:53:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:50.500 07:53:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:50.500 07:53:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:50.500 07:53:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:50.500 07:53:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:06:50.500 07:53:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:50.500 07:53:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # 
IFS=: 00:06:50.500 07:53:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:50.500 07:53:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:06:50.500 07:53:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:50.500 07:53:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:50.500 07:53:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:50.500 07:53:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:50.500 07:53:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:50.500 07:53:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:50.500 07:53:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:50.500 07:53:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:06:50.500 07:53:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:50.500 07:53:42 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:06:50.500 07:53:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:50.500 07:53:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:50.500 07:53:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:50.500 07:53:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:50.500 07:53:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:50.500 07:53:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:50.500 07:53:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:50.500 07:53:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:50.500 07:53:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:50.500 07:53:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:50.500 07:53:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:06:50.500 07:53:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:50.500 07:53:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:50.500 07:53:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:50.500 07:53:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:06:50.500 07:53:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:50.500 07:53:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:50.500 07:53:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:50.500 07:53:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:06:50.500 07:53:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:50.500 07:53:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:50.500 07:53:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:50.500 07:53:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:50.500 07:53:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:50.500 07:53:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:50.500 07:53:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:50.500 07:53:42 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:50.500 07:53:42 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:50.500 07:53:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:50.500 07:53:42 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:51.872 07:53:43 accel.accel_dif_verify -- accel/accel.sh@20 -- # 
val= 00:06:51.872 07:53:43 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:51.872 07:53:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:51.872 07:53:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:51.872 07:53:43 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:51.872 07:53:43 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:51.872 07:53:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:51.872 07:53:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:51.872 07:53:43 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:51.872 07:53:43 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:51.872 07:53:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:51.872 07:53:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:51.872 07:53:43 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:51.872 07:53:43 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:51.872 07:53:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:51.872 07:53:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:51.872 07:53:43 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:51.872 07:53:43 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:51.872 07:53:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:51.872 07:53:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:51.872 07:53:43 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:51.872 07:53:43 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:51.872 07:53:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:51.872 07:53:43 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:51.872 07:53:43 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:51.872 07:53:43 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:06:51.872 07:53:43 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:51.872 00:06:51.872 real 0m1.388s 00:06:51.872 user 0m1.251s 00:06:51.872 sys 0m0.141s 00:06:51.872 07:53:43 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:51.872 07:53:43 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:06:51.872 ************************************ 00:06:51.872 END TEST accel_dif_verify 00:06:51.872 ************************************ 00:06:51.872 07:53:43 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:51.872 07:53:43 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:51.872 07:53:43 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:51.872 07:53:43 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:51.872 07:53:43 accel -- common/autotest_common.sh@10 -- # set +x 00:06:51.872 ************************************ 00:06:51.872 START TEST accel_dif_generate 00:06:51.872 ************************************ 00:06:51.872 07:53:43 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:06:51.872 07:53:43 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:06:51.872 07:53:43 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:06:51.872 07:53:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:51.872 
07:53:43 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:51.872 07:53:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:51.872 07:53:43 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:51.872 07:53:43 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:06:51.872 07:53:43 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:51.872 07:53:43 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:51.872 07:53:43 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:51.872 07:53:43 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:51.872 07:53:43 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:51.872 07:53:43 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:06:51.872 07:53:43 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:06:51.872 [2024-07-13 07:53:43.369830] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:06:51.872 [2024-07-13 07:53:43.369904] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1836573 ] 00:06:51.872 EAL: No free 2048 kB hugepages reported on node 1 00:06:51.872 [2024-07-13 07:53:43.430502] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.872 [2024-07-13 07:53:43.523600] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.872 07:53:43 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:51.872 07:53:43 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:51.872 07:53:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:51.872 07:53:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:51.872 07:53:43 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:51.872 07:53:43 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:51.872 07:53:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:51.872 07:53:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:51.872 07:53:43 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:06:51.873 07:53:43 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:51.873 07:53:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:51.873 07:53:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:51.873 07:53:43 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:51.873 07:53:43 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:51.873 07:53:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:51.873 07:53:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:51.873 07:53:43 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:51.873 07:53:43 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:51.873 07:53:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:51.873 07:53:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:51.873 07:53:43 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:06:51.873 07:53:43 
accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:51.873 07:53:43 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:06:51.873 07:53:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:51.873 07:53:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:51.873 07:53:43 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:51.873 07:53:43 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:51.873 07:53:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:51.873 07:53:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:51.873 07:53:43 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:51.873 07:53:43 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:51.873 07:53:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:51.873 07:53:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:51.873 07:53:43 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:06:51.873 07:53:43 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:51.873 07:53:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:51.873 07:53:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:51.873 07:53:43 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:06:51.873 07:53:43 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:51.873 07:53:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:51.873 07:53:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:51.873 07:53:43 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:51.873 07:53:43 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:51.873 07:53:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:51.873 07:53:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:51.873 07:53:43 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:06:51.873 07:53:43 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:51.873 07:53:43 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:06:51.873 07:53:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:51.873 07:53:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:51.873 07:53:43 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:51.873 07:53:43 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:51.873 07:53:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:51.873 07:53:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:51.873 07:53:43 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:51.873 07:53:43 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:51.873 07:53:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:51.873 07:53:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:51.873 07:53:43 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:06:51.873 07:53:43 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:51.873 07:53:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:51.873 07:53:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:51.873 07:53:43 accel.accel_dif_generate -- 
accel/accel.sh@20 -- # val='1 seconds' 00:06:51.873 07:53:43 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:51.873 07:53:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:51.873 07:53:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:51.873 07:53:43 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:06:51.873 07:53:43 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:51.873 07:53:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:51.873 07:53:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:51.873 07:53:43 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:51.873 07:53:43 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:51.873 07:53:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:51.873 07:53:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:51.873 07:53:43 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:51.873 07:53:43 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:51.873 07:53:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:51.873 07:53:43 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:53.245 07:53:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:53.245 07:53:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:53.245 07:53:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:53.245 07:53:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:53.245 07:53:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:53.245 07:53:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:53.245 07:53:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:53.245 07:53:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:53.245 07:53:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:53.245 07:53:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:53.245 07:53:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:53.245 07:53:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:53.245 07:53:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:53.245 07:53:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:53.245 07:53:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:53.245 07:53:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:53.245 07:53:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:53.245 07:53:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:53.245 07:53:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:53.245 07:53:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:53.245 07:53:44 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:53.245 07:53:44 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:53.245 07:53:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:53.245 07:53:44 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:53.245 07:53:44 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:53.245 07:53:44 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:06:53.245 07:53:44 accel.accel_dif_generate -- 
accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:53.245 00:06:53.245 real 0m1.409s 00:06:53.245 user 0m1.276s 00:06:53.245 sys 0m0.138s 00:06:53.245 07:53:44 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:53.245 07:53:44 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:06:53.245 ************************************ 00:06:53.245 END TEST accel_dif_generate 00:06:53.245 ************************************ 00:06:53.245 07:53:44 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:53.245 07:53:44 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:53.245 07:53:44 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:53.245 07:53:44 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:53.245 07:53:44 accel -- common/autotest_common.sh@10 -- # set +x 00:06:53.245 ************************************ 00:06:53.245 START TEST accel_dif_generate_copy 00:06:53.245 ************************************ 00:06:53.245 07:53:44 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:06:53.245 07:53:44 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:53.245 07:53:44 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:06:53.245 07:53:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:53.245 07:53:44 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:53.245 07:53:44 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:53.245 07:53:44 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:53.245 07:53:44 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:53.245 07:53:44 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:53.245 07:53:44 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:53.245 07:53:44 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:53.245 07:53:44 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:53.245 07:53:44 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:53.245 07:53:44 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:53.245 07:53:44 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:06:53.245 [2024-07-13 07:53:44.821478] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
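The real/user/sys triple that closes each TEST block is bash time output from the run_test wrapper (the common/autotest_common.sh frames visible in the trace), and the roughly 1.4 s real times line up with the 1-second workloads (-t 1) plus EAL startup and teardown. A simplified sketch of such a wrapper, not the actual implementation:

    run_test() {
        local name=$1; shift
        echo "START TEST $name"
        time "$@"               # e.g. accel_test -t 1 -w dif_generate_copy
        echo "END TEST $name"
    }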
00:06:53.245 [2024-07-13 07:53:44.821546] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1836732 ] 00:06:53.245 EAL: No free 2048 kB hugepages reported on node 1 00:06:53.245 [2024-07-13 07:53:44.882554] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.245 [2024-07-13 07:53:44.976021] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.502 07:53:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:53.502 07:53:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:53.502 07:53:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:53.502 07:53:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:53.502 07:53:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:53.502 07:53:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:53.502 07:53:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:53.502 07:53:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:53.502 07:53:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:06:53.502 07:53:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:53.502 07:53:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:53.502 07:53:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:53.502 07:53:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:53.502 07:53:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:53.502 07:53:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:53.502 07:53:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:53.502 07:53:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:53.502 07:53:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:53.502 07:53:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:53.502 07:53:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:53.502 07:53:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:06:53.502 07:53:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:53.502 07:53:45 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:06:53.502 07:53:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:53.502 07:53:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:53.502 07:53:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:53.502 07:53:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:53.502 07:53:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:53.502 07:53:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:53.502 07:53:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:53.502 07:53:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:53.502 07:53:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:53.502 07:53:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var 
val 00:06:53.502 07:53:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:53.502 07:53:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:53.502 07:53:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:53.502 07:53:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:53.502 07:53:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:06:53.502 07:53:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:53.502 07:53:45 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:53.502 07:53:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:53.502 07:53:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:53.502 07:53:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:53.502 07:53:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:53.502 07:53:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:53.502 07:53:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:53.502 07:53:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:53.502 07:53:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:53.502 07:53:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:53.502 07:53:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:53.502 07:53:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:06:53.502 07:53:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:53.502 07:53:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:53.502 07:53:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:53.502 07:53:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:53.502 07:53:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:53.502 07:53:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:53.502 07:53:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:53.502 07:53:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:06:53.502 07:53:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:53.502 07:53:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:53.502 07:53:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:53.502 07:53:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:53.502 07:53:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:53.502 07:53:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:53.502 07:53:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:53.502 07:53:45 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:53.502 07:53:45 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:53.502 07:53:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:53.502 07:53:45 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:54.875 07:53:46 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:54.875 07:53:46 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:54.875 07:53:46 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # IFS=: 00:06:54.875 07:53:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:54.875 07:53:46 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:54.875 07:53:46 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:54.875 07:53:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:54.875 07:53:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:54.875 07:53:46 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:54.875 07:53:46 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:54.875 07:53:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:54.875 07:53:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:54.875 07:53:46 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:54.875 07:53:46 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:54.875 07:53:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:54.875 07:53:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:54.875 07:53:46 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:54.875 07:53:46 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:54.875 07:53:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:54.875 07:53:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:54.875 07:53:46 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:54.875 07:53:46 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:54.875 07:53:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:54.875 07:53:46 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:54.875 07:53:46 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:54.875 07:53:46 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:06:54.875 07:53:46 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:54.875 00:06:54.875 real 0m1.407s 00:06:54.875 user 0m1.259s 00:06:54.875 sys 0m0.150s 00:06:54.875 07:53:46 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:54.875 07:53:46 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:06:54.875 ************************************ 00:06:54.876 END TEST accel_dif_generate_copy 00:06:54.876 ************************************ 00:06:54.876 07:53:46 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:54.876 07:53:46 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:06:54.876 07:53:46 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:54.876 07:53:46 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:54.876 07:53:46 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:54.876 07:53:46 accel -- common/autotest_common.sh@10 -- # set +x 00:06:54.876 ************************************ 00:06:54.876 START TEST accel_comp 00:06:54.876 ************************************ 00:06:54.876 07:53:46 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:54.876 07:53:46 accel.accel_comp -- 
accel/accel.sh@16 -- # local accel_opc 00:06:54.876 07:53:46 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:06:54.876 07:53:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:54.876 07:53:46 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:54.876 07:53:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:54.876 07:53:46 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:54.876 07:53:46 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:06:54.876 07:53:46 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:54.876 07:53:46 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:54.876 07:53:46 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:54.876 07:53:46 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:54.876 07:53:46 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:54.876 07:53:46 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:06:54.876 07:53:46 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:06:54.876 [2024-07-13 07:53:46.281814] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:06:54.876 [2024-07-13 07:53:46.281914] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1836883 ] 00:06:54.876 EAL: No free 2048 kB hugepages reported on node 1 00:06:54.876 [2024-07-13 07:53:46.346480] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.876 [2024-07-13 07:53:46.438070] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.876 07:53:46 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:54.876 07:53:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:54.876 07:53:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:54.876 07:53:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:54.876 07:53:46 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:54.876 07:53:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:54.876 07:53:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:54.876 07:53:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:54.876 07:53:46 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:54.876 07:53:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:54.876 07:53:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:54.876 07:53:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:54.876 07:53:46 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:06:54.876 07:53:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:54.876 07:53:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:54.876 07:53:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:54.876 07:53:46 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:54.876 07:53:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:54.876 07:53:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:54.876 07:53:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:54.876 07:53:46 accel.accel_comp -- 
accel/accel.sh@20 -- # val= 00:06:54.876 07:53:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:54.876 07:53:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:54.876 07:53:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:54.876 07:53:46 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:06:54.876 07:53:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:54.876 07:53:46 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:06:54.876 07:53:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:54.876 07:53:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:54.876 07:53:46 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:54.876 07:53:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:54.876 07:53:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:54.876 07:53:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:54.876 07:53:46 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:54.876 07:53:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:54.876 07:53:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:54.876 07:53:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:54.876 07:53:46 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:06:54.876 07:53:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:54.876 07:53:46 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:06:54.876 07:53:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:54.876 07:53:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:54.876 07:53:46 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:54.876 07:53:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:54.876 07:53:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:54.876 07:53:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:54.876 07:53:46 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:54.876 07:53:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:54.876 07:53:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:54.876 07:53:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:54.876 07:53:46 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:54.876 07:53:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:54.876 07:53:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:54.876 07:53:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:54.876 07:53:46 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:06:54.876 07:53:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:54.876 07:53:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:54.876 07:53:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:54.876 07:53:46 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:54.876 07:53:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:54.876 07:53:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:54.876 07:53:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:54.876 07:53:46 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:06:54.876 07:53:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:54.876 07:53:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:54.876 07:53:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r 
var val 00:06:54.876 07:53:46 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:54.876 07:53:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:54.876 07:53:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:54.876 07:53:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:54.876 07:53:46 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:54.876 07:53:46 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:54.876 07:53:46 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:54.876 07:53:46 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:56.249 07:53:47 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:56.249 07:53:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:56.249 07:53:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:56.249 07:53:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:56.250 07:53:47 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:56.250 07:53:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:56.250 07:53:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:56.250 07:53:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:56.250 07:53:47 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:56.250 07:53:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:56.250 07:53:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:56.250 07:53:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:56.250 07:53:47 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:56.250 07:53:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:56.250 07:53:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:56.250 07:53:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:56.250 07:53:47 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:56.250 07:53:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:56.250 07:53:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:56.250 07:53:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:56.250 07:53:47 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:56.250 07:53:47 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:56.250 07:53:47 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:56.250 07:53:47 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:56.250 07:53:47 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:56.250 07:53:47 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:06:56.250 07:53:47 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:56.250 00:06:56.250 real 0m1.410s 00:06:56.250 user 0m1.255s 00:06:56.250 sys 0m0.157s 00:06:56.250 07:53:47 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:56.250 07:53:47 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:06:56.250 ************************************ 00:06:56.250 END TEST accel_comp 00:06:56.250 ************************************ 00:06:56.250 07:53:47 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:56.250 07:53:47 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:56.250 07:53:47 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:56.250 07:53:47 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:56.250 07:53:47 accel -- 
common/autotest_common.sh@10 -- # set +x 00:06:56.250 ************************************ 00:06:56.250 START TEST accel_decomp 00:06:56.250 ************************************ 00:06:56.250 07:53:47 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:56.250 07:53:47 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:06:56.250 07:53:47 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:06:56.250 07:53:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:56.250 07:53:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:56.250 07:53:47 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:56.250 07:53:47 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:56.250 07:53:47 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:06:56.250 07:53:47 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:56.250 07:53:47 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:56.250 07:53:47 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:56.250 07:53:47 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:56.250 07:53:47 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:56.250 07:53:47 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:06:56.250 07:53:47 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:06:56.250 [2024-07-13 07:53:47.728883] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
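The compress and decompress cases share one sample input: -l points accel_perf at test/accel/bib inside the workspace, and the decompress variants add -y (plus -o and -m later in the suite). A minimal sketch of the pairing as invoked here, with $SPDK as defined earlier and the harness's fd config again omitted:

  "$SPDK/build/examples/accel_perf" -t 1 -w compress   -l "$SPDK/test/accel/bib"
  "$SPDK/build/examples/accel_perf" -t 1 -w decompress -l "$SPDK/test/accel/bib" -y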
00:06:56.250 [2024-07-13 07:53:47.728945] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1837164 ] 00:06:56.250 EAL: No free 2048 kB hugepages reported on node 1 00:06:56.250 [2024-07-13 07:53:47.791131] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.250 [2024-07-13 07:53:47.884252] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.250 07:53:47 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:56.250 07:53:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:56.250 07:53:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:56.250 07:53:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:56.250 07:53:47 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:56.250 07:53:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:56.250 07:53:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:56.250 07:53:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:56.250 07:53:47 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:56.250 07:53:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:56.250 07:53:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:56.250 07:53:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:56.250 07:53:47 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:06:56.250 07:53:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:56.250 07:53:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:56.250 07:53:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:56.250 07:53:47 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:56.250 07:53:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:56.250 07:53:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:56.250 07:53:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:56.250 07:53:47 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:56.250 07:53:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:56.250 07:53:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:56.250 07:53:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:56.250 07:53:47 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:06:56.250 07:53:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:56.250 07:53:47 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:56.250 07:53:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:56.250 07:53:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:56.250 07:53:47 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:56.250 07:53:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:56.250 07:53:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:56.250 07:53:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:56.250 07:53:47 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:56.250 07:53:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:56.250 07:53:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:56.250 07:53:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:56.250 07:53:47 accel.accel_decomp -- accel/accel.sh@20 -- # 
val=software 00:06:56.250 07:53:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:56.250 07:53:47 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:06:56.250 07:53:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:56.250 07:53:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:56.250 07:53:47 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:56.250 07:53:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:56.250 07:53:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:56.250 07:53:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:56.250 07:53:47 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:56.250 07:53:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:56.250 07:53:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:56.250 07:53:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:56.250 07:53:47 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:56.250 07:53:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:56.250 07:53:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:56.250 07:53:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:56.250 07:53:47 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:06:56.250 07:53:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:56.250 07:53:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:56.250 07:53:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:56.250 07:53:47 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:56.250 07:53:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:56.250 07:53:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:56.250 07:53:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:56.250 07:53:47 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:06:56.250 07:53:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:56.250 07:53:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:56.250 07:53:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:56.250 07:53:47 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:56.250 07:53:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:56.250 07:53:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:56.250 07:53:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:56.250 07:53:47 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:56.250 07:53:47 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:56.251 07:53:47 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:56.251 07:53:47 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:57.625 07:53:49 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:57.625 07:53:49 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:57.625 07:53:49 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:57.625 07:53:49 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:57.625 07:53:49 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:57.625 07:53:49 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:57.625 07:53:49 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:57.625 07:53:49 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:57.625 07:53:49 
accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:57.625 07:53:49 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:57.625 07:53:49 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:57.625 07:53:49 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:57.625 07:53:49 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:57.625 07:53:49 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:57.625 07:53:49 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:57.625 07:53:49 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:57.625 07:53:49 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:57.625 07:53:49 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:57.625 07:53:49 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:57.625 07:53:49 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:57.625 07:53:49 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:57.626 07:53:49 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:57.626 07:53:49 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:57.626 07:53:49 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:57.626 07:53:49 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:57.626 07:53:49 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:57.626 07:53:49 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:57.626 00:06:57.626 real 0m1.408s 00:06:57.626 user 0m1.262s 00:06:57.626 sys 0m0.150s 00:06:57.626 07:53:49 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:57.626 07:53:49 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:06:57.626 ************************************ 00:06:57.626 END TEST accel_decomp 00:06:57.626 ************************************ 00:06:57.626 07:53:49 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:57.626 07:53:49 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:57.626 07:53:49 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:57.626 07:53:49 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:57.626 07:53:49 accel -- common/autotest_common.sh@10 -- # set +x 00:06:57.626 ************************************ 00:06:57.626 START TEST accel_decomp_full 00:06:57.626 ************************************ 00:06:57.626 07:53:49 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:57.626 07:53:49 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:06:57.626 07:53:49 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:06:57.626 07:53:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:57.626 07:53:49 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:57.626 07:53:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:57.626 07:53:49 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:57.626 07:53:49 
accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:06:57.626 07:53:49 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:57.626 07:53:49 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:57.626 07:53:49 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:57.626 07:53:49 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:57.626 07:53:49 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:57.626 07:53:49 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:06:57.626 07:53:49 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:06:57.626 [2024-07-13 07:53:49.182320] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:06:57.626 [2024-07-13 07:53:49.182388] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1837318 ] 00:06:57.626 EAL: No free 2048 kB hugepages reported on node 1 00:06:57.626 [2024-07-13 07:53:49.244187] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.626 [2024-07-13 07:53:49.337132] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.884 07:53:49 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:57.884 07:53:49 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:57.884 07:53:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:57.884 07:53:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:57.884 07:53:49 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:57.884 07:53:49 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:57.884 07:53:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:57.884 07:53:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:57.884 07:53:49 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:57.884 07:53:49 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:57.884 07:53:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:57.884 07:53:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:57.884 07:53:49 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:06:57.884 07:53:49 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:57.884 07:53:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:57.884 07:53:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:57.884 07:53:49 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:57.884 07:53:49 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:57.885 07:53:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:57.885 07:53:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:57.885 07:53:49 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:57.885 07:53:49 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:57.885 07:53:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:57.885 07:53:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:57.885 07:53:49 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:06:57.885 07:53:49 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:57.885 07:53:49 
accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:57.885 07:53:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:57.885 07:53:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:57.885 07:53:49 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:57.885 07:53:49 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:57.885 07:53:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:57.885 07:53:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:57.885 07:53:49 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:57.885 07:53:49 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:57.885 07:53:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:57.885 07:53:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:57.885 07:53:49 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:06:57.885 07:53:49 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:57.885 07:53:49 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:06:57.885 07:53:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:57.885 07:53:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:57.885 07:53:49 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:57.885 07:53:49 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:57.885 07:53:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:57.885 07:53:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:57.885 07:53:49 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:57.885 07:53:49 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:57.885 07:53:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:57.885 07:53:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:57.885 07:53:49 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:57.885 07:53:49 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:57.885 07:53:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:57.885 07:53:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:57.885 07:53:49 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:06:57.885 07:53:49 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:57.885 07:53:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:57.885 07:53:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:57.885 07:53:49 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:06:57.885 07:53:49 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:57.885 07:53:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:57.885 07:53:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:57.885 07:53:49 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:06:57.885 07:53:49 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:57.885 07:53:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:57.885 07:53:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:57.885 07:53:49 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:57.885 07:53:49 accel.accel_decomp_full -- accel/accel.sh@21 -- # 
case "$var" in 00:06:57.885 07:53:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:57.885 07:53:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:57.885 07:53:49 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:57.885 07:53:49 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:57.885 07:53:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:57.885 07:53:49 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:59.260 07:53:50 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:59.260 07:53:50 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:59.260 07:53:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:59.260 07:53:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:59.260 07:53:50 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:59.260 07:53:50 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:59.260 07:53:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:59.260 07:53:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:59.260 07:53:50 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:59.260 07:53:50 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:59.260 07:53:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:59.260 07:53:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:59.260 07:53:50 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:59.260 07:53:50 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:59.260 07:53:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:59.260 07:53:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:59.260 07:53:50 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:59.260 07:53:50 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:59.260 07:53:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:59.260 07:53:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:59.260 07:53:50 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:59.260 07:53:50 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:59.260 07:53:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:59.260 07:53:50 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:59.260 07:53:50 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:59.260 07:53:50 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:59.260 07:53:50 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:59.260 00:06:59.260 real 0m1.422s 00:06:59.260 user 0m1.282s 00:06:59.260 sys 0m0.143s 00:06:59.260 07:53:50 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:59.260 07:53:50 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:06:59.260 ************************************ 00:06:59.260 END TEST accel_decomp_full 00:06:59.260 ************************************ 00:06:59.260 07:53:50 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:59.260 07:53:50 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:59.260 07:53:50 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 
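The only effect of -o 0 visible in the accel_decomp_full trace above is the configured buffer: it switches from the default '4096 bytes' to '111250 bytes', which suggests (an inference from the traced sizes, not something the log states) that the _full variants run the operation over the whole bib file rather than 4 KiB chunks:

  "$SPDK/build/examples/accel_perf" -t 1 -w decompress -l "$SPDK/test/accel/bib" -y -o 0   # -o 0: full-file sized buffer, per the '111250 bytes' trace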
00:06:59.260 07:53:50 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:59.260 07:53:50 accel -- common/autotest_common.sh@10 -- # set +x 00:06:59.260 ************************************ 00:06:59.260 START TEST accel_decomp_mcore 00:06:59.260 ************************************ 00:06:59.260 07:53:50 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:59.260 07:53:50 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:59.260 07:53:50 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:59.260 07:53:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:59.260 07:53:50 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:59.260 07:53:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:59.260 07:53:50 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:59.260 07:53:50 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:59.260 07:53:50 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:59.260 07:53:50 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:59.260 07:53:50 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:59.260 07:53:50 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:59.260 07:53:50 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:59.260 07:53:50 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:59.260 07:53:50 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:59.260 [2024-07-13 07:53:50.651015] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
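The mcore variant threads -m 0xf through to the EAL core mask (-c 0xf in the DPDK parameters below), which is why this case reports four available cores and starts a reactor on each of cores 0 through 3:

  "$SPDK/build/examples/accel_perf" -t 1 -w decompress -l "$SPDK/test/accel/bib" -y -m 0xf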
00:06:59.260 [2024-07-13 07:53:50.651089] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1837471 ] 00:06:59.260 EAL: No free 2048 kB hugepages reported on node 1 00:06:59.260 [2024-07-13 07:53:50.714055] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:59.260 [2024-07-13 07:53:50.811063] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:59.260 [2024-07-13 07:53:50.811118] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:59.260 [2024-07-13 07:53:50.811181] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:59.260 [2024-07-13 07:53:50.811183] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.260 07:53:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:59.260 07:53:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:59.260 07:53:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:59.260 07:53:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:59.260 07:53:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:59.260 07:53:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:59.260 07:53:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:59.260 07:53:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:59.260 07:53:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:59.260 07:53:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:59.260 07:53:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:59.260 07:53:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:59.260 07:53:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:59.260 07:53:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:59.260 07:53:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:59.260 07:53:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:59.260 07:53:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:59.260 07:53:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:59.260 07:53:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:59.260 07:53:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:59.260 07:53:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:59.260 07:53:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:59.260 07:53:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:59.260 07:53:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:59.260 07:53:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:59.260 07:53:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:59.260 07:53:50 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:59.260 07:53:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:59.260 07:53:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:59.260 07:53:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:59.260 07:53:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:59.260 07:53:50 
accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:59.260 07:53:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:59.260 07:53:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:59.260 07:53:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:59.260 07:53:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:59.260 07:53:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:59.260 07:53:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:06:59.260 07:53:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:59.260 07:53:50 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:59.260 07:53:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:59.260 07:53:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:59.260 07:53:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:59.260 07:53:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:59.261 07:53:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:59.261 07:53:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:59.261 07:53:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:59.261 07:53:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:59.261 07:53:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:59.261 07:53:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:59.261 07:53:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:59.261 07:53:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:59.261 07:53:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:59.261 07:53:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:59.261 07:53:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:06:59.261 07:53:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:59.261 07:53:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:59.261 07:53:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:59.261 07:53:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:59.261 07:53:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:59.261 07:53:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:59.261 07:53:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:59.261 07:53:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:59.261 07:53:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:59.261 07:53:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:59.261 07:53:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:59.261 07:53:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:59.261 07:53:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:59.261 07:53:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:59.261 07:53:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:59.261 07:53:50 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:59.261 07:53:50 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:59.261 07:53:50 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:06:59.261 07:53:50 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:00.636 07:53:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:00.636 07:53:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:00.636 07:53:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:00.636 07:53:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:00.636 07:53:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:00.636 07:53:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:00.636 07:53:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:00.636 07:53:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:00.636 07:53:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:00.636 07:53:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:00.636 07:53:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:00.636 07:53:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:00.636 07:53:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:00.636 07:53:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:00.636 07:53:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:00.636 07:53:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:00.636 07:53:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:00.636 07:53:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:00.636 07:53:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:00.636 07:53:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:00.636 07:53:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:00.636 07:53:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:00.636 07:53:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:00.636 07:53:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:00.636 07:53:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:00.636 07:53:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:00.636 07:53:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:00.636 07:53:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:00.636 07:53:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:00.636 07:53:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:00.636 07:53:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:00.636 07:53:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:00.636 07:53:52 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:00.636 07:53:52 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:00.636 07:53:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:00.636 07:53:52 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:00.636 07:53:52 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:00.636 07:53:52 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:00.636 07:53:52 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:00.636 00:07:00.636 real 0m1.411s 00:07:00.636 user 0m4.712s 00:07:00.636 sys 0m0.150s 00:07:00.636 07:53:52 
accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:00.636 07:53:52 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:00.636 ************************************ 00:07:00.636 END TEST accel_decomp_mcore 00:07:00.636 ************************************ 00:07:00.636 07:53:52 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:00.636 07:53:52 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:00.636 07:53:52 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:00.636 07:53:52 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:00.636 07:53:52 accel -- common/autotest_common.sh@10 -- # set +x 00:07:00.636 ************************************ 00:07:00.636 START TEST accel_decomp_full_mcore 00:07:00.636 ************************************ 00:07:00.636 07:53:52 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:00.636 07:53:52 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:00.636 07:53:52 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:00.636 07:53:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:00.636 07:53:52 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:00.636 07:53:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:00.636 07:53:52 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:00.636 07:53:52 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:00.636 07:53:52 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:00.636 07:53:52 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:00.636 07:53:52 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:00.636 07:53:52 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:00.636 07:53:52 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:00.636 07:53:52 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:00.636 07:53:52 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:00.636 [2024-07-13 07:53:52.109442] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
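This final case simply combines the two earlier knobs, -o 0 and -m 0xf, on the same decompress workload. The multi-core runs are also where the timing summaries diverge: real stays near 1.4s across the suite, but user jumps to 0m4.712s for accel_decomp_mcore above, consistent with four polling reactors each burning roughly a second of CPU during the 1-second run:

  "$SPDK/build/examples/accel_perf" -t 1 -w decompress -l "$SPDK/test/accel/bib" -y -o 0 -m 0xf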
00:07:00.636 [2024-07-13 07:53:52.109503] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1837724 ] 00:07:00.636 EAL: No free 2048 kB hugepages reported on node 1 00:07:00.636 [2024-07-13 07:53:52.170544] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:00.636 [2024-07-13 07:53:52.265774] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:00.636 [2024-07-13 07:53:52.265831] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:00.636 [2024-07-13 07:53:52.265945] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:00.636 [2024-07-13 07:53:52.265949] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.636 07:53:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:00.636 07:53:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:00.637 07:53:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:00.637 07:53:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:00.637 07:53:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:00.637 07:53:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:00.637 07:53:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:00.637 07:53:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:00.637 07:53:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:00.637 07:53:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:00.637 07:53:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:00.637 07:53:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:00.637 07:53:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:00.637 07:53:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:00.637 07:53:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:00.637 07:53:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:00.637 07:53:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:00.637 07:53:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:00.637 07:53:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:00.637 07:53:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:00.637 07:53:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:00.637 07:53:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:00.637 07:53:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:00.637 07:53:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:00.637 07:53:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:00.637 07:53:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:00.637 07:53:52 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:00.637 07:53:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:00.637 07:53:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:00.637 07:53:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 
-- # val='111250 bytes' 00:07:00.637 07:53:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:00.637 07:53:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:00.637 07:53:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:00.637 07:53:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:00.637 07:53:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:00.637 07:53:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:00.637 07:53:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:00.637 07:53:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:07:00.637 07:53:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:00.637 07:53:52 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:00.637 07:53:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:00.637 07:53:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:00.637 07:53:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:00.637 07:53:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:00.637 07:53:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:00.637 07:53:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:00.637 07:53:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:00.637 07:53:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:00.637 07:53:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:00.637 07:53:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:00.637 07:53:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:00.637 07:53:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:00.637 07:53:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:00.637 07:53:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:00.637 07:53:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:07:00.637 07:53:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:00.637 07:53:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:00.637 07:53:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:00.637 07:53:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:00.637 07:53:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:00.637 07:53:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:00.637 07:53:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:00.637 07:53:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:00.637 07:53:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:00.637 07:53:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:00.637 07:53:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:00.637 07:53:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:00.637 07:53:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:00.637 07:53:52 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:07:00.637 07:53:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:00.637 07:53:52 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:00.637 07:53:52 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:00.637 07:53:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:00.637 07:53:52 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.014 07:53:53 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:02.014 07:53:53 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.014 07:53:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.014 07:53:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.014 07:53:53 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:02.014 07:53:53 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.014 07:53:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.014 07:53:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.014 07:53:53 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:02.014 07:53:53 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.014 07:53:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.014 07:53:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.014 07:53:53 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:02.014 07:53:53 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.014 07:53:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.014 07:53:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.014 07:53:53 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:02.014 07:53:53 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.014 07:53:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.014 07:53:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.014 07:53:53 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:02.014 07:53:53 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.014 07:53:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.014 07:53:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.014 07:53:53 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:02.014 07:53:53 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.014 07:53:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.014 07:53:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.014 07:53:53 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:02.014 07:53:53 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.014 07:53:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.014 07:53:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.014 07:53:53 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:02.014 07:53:53 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.014 07:53:53 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:07:02.014 07:53:53 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.014 07:53:53 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:02.014 07:53:53 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:02.014 07:53:53 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:02.015 00:07:02.015 real 0m1.409s 00:07:02.015 user 0m4.720s 00:07:02.015 sys 0m0.148s 00:07:02.015 07:53:53 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:02.015 07:53:53 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:02.015 ************************************ 00:07:02.015 END TEST accel_decomp_full_mcore 00:07:02.015 ************************************ 00:07:02.015 07:53:53 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:02.015 07:53:53 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:02.015 07:53:53 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:02.015 07:53:53 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:02.015 07:53:53 accel -- common/autotest_common.sh@10 -- # set +x 00:07:02.015 ************************************ 00:07:02.015 START TEST accel_decomp_mthread 00:07:02.015 ************************************ 00:07:02.015 07:53:53 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:02.015 07:53:53 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:02.015 07:53:53 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:02.015 07:53:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:02.015 07:53:53 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:02.015 07:53:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:02.015 07:53:53 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:02.015 07:53:53 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:02.015 07:53:53 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:02.015 07:53:53 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:02.015 07:53:53 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:02.015 07:53:53 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:02.015 07:53:53 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:02.015 07:53:53 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:02.015 07:53:53 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:02.015 [2024-07-13 07:53:53.562690] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
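For readers reproducing this case outside the harness, a minimal hand-run of the accel_perf invocation traced above would look like the sketch below; the SPDK_DIR path mirrors this workspace, and the flag glosses are inferred from the surrounding script rather than authoritative.
  # -t 1: run for one second; -w decompress: workload under test;
  # -l: input data file (the bib test file); -y: verify the result;
  # -T 2: two worker threads per core
  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$SPDK_DIR/build/examples/accel_perf" -t 1 -w decompress \
      -l "$SPDK_DIR/test/accel/bib" -y -T 2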
00:07:02.015 [2024-07-13 07:53:53.562755] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1837912 ] 00:07:02.015 EAL: No free 2048 kB hugepages reported on node 1 00:07:02.015 [2024-07-13 07:53:53.623906] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.015 [2024-07-13 07:53:53.717206] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.273 07:53:53 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:02.273 07:53:53 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:02.273 07:53:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:02.273 07:53:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:02.273 07:53:53 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:02.273 07:53:53 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:02.273 07:53:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:02.273 07:53:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:02.273 07:53:53 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:02.273 07:53:53 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:02.273 07:53:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:02.273 07:53:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:02.273 07:53:53 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:02.273 07:53:53 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:02.273 07:53:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:02.273 07:53:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:02.273 07:53:53 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:02.273 07:53:53 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:02.273 07:53:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:02.273 07:53:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:02.273 07:53:53 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:02.273 07:53:53 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:02.273 07:53:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:02.273 07:53:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:02.273 07:53:53 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:02.273 07:53:53 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:02.273 07:53:53 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:02.273 07:53:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:02.273 07:53:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:02.273 07:53:53 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:02.273 07:53:53 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:02.273 07:53:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:02.273 07:53:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:02.273 07:53:53 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:02.273 07:53:53 
accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:02.273 07:53:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:02.273 07:53:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:02.273 07:53:53 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:07:02.273 07:53:53 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:02.273 07:53:53 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:02.273 07:53:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:02.273 07:53:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:02.273 07:53:53 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:02.273 07:53:53 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:02.273 07:53:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:02.273 07:53:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:02.273 07:53:53 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:02.273 07:53:53 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:02.273 07:53:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:02.273 07:53:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:02.273 07:53:53 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:02.273 07:53:53 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:02.273 07:53:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:02.273 07:53:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:02.273 07:53:53 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:07:02.273 07:53:53 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:02.273 07:53:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:02.273 07:53:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:02.273 07:53:53 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:02.273 07:53:53 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:02.273 07:53:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:02.273 07:53:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:02.273 07:53:53 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:02.273 07:53:53 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:02.273 07:53:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:02.273 07:53:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:02.273 07:53:53 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:02.273 07:53:53 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:02.273 07:53:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:02.273 07:53:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:02.273 07:53:53 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:02.273 07:53:53 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:02.273 07:53:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:02.273 07:53:53 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:03.649 07:53:54 accel.accel_decomp_mthread 
-- accel/accel.sh@20 -- # val= 00:07:03.649 07:53:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:03.649 07:53:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:03.649 07:53:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:03.649 07:53:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:03.649 07:53:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:03.649 07:53:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:03.649 07:53:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:03.649 07:53:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:03.649 07:53:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:03.649 07:53:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:03.649 07:53:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:03.649 07:53:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:03.649 07:53:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:03.649 07:53:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:03.649 07:53:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:03.649 07:53:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:03.649 07:53:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:03.649 07:53:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:03.649 07:53:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:03.649 07:53:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:03.649 07:53:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:03.649 07:53:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:03.649 07:53:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:03.649 07:53:54 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:03.649 07:53:54 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:03.649 07:53:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:03.649 07:53:54 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:03.649 07:53:54 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:03.649 07:53:54 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:03.649 07:53:54 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:03.649 00:07:03.649 real 0m1.413s 00:07:03.649 user 0m1.275s 00:07:03.649 sys 0m0.141s 00:07:03.649 07:53:54 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:03.649 07:53:54 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:03.649 ************************************ 00:07:03.649 END TEST accel_decomp_mthread 00:07:03.649 ************************************ 00:07:03.649 07:53:54 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:03.649 07:53:54 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:03.649 07:53:54 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:03.649 07:53:54 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:03.649 07:53:54 accel -- 
common/autotest_common.sh@10 -- # set +x 00:07:03.649 ************************************ 00:07:03.649 START TEST accel_decomp_full_mthread 00:07:03.649 ************************************ 00:07:03.649 07:53:55 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:03.649 07:53:55 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:03.649 07:53:55 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:03.649 07:53:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:03.649 07:53:55 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:03.649 07:53:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:03.649 07:53:55 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:03.649 07:53:55 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:03.649 07:53:55 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:03.649 07:53:55 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:03.649 07:53:55 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:03.649 07:53:55 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:03.650 07:53:55 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:03.650 07:53:55 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:03.650 07:53:55 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:03.650 [2024-07-13 07:53:55.020406] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
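The only change from the previous suite is the extra -o 0; as in the full_mcore run earlier, the config parser picks up a '111250 bytes' transfer size, which suggests the "full" variants use the whole input file as a single buffer. Under the same workspace assumptions as the earlier sketch:
  # -o 0: transfer size 0, apparently meaning "use the full input file"
  "$SPDK_DIR/build/examples/accel_perf" -t 1 -w decompress \
      -l "$SPDK_DIR/test/accel/bib" -y -o 0 -T 2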
00:07:03.650 [2024-07-13 07:53:55.020473] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1838070 ] 00:07:03.650 EAL: No free 2048 kB hugepages reported on node 1 00:07:03.650 [2024-07-13 07:53:55.082338] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.650 [2024-07-13 07:53:55.176032] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.650 07:53:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:03.650 07:53:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:03.650 07:53:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:03.650 07:53:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:03.650 07:53:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:03.650 07:53:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:03.650 07:53:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:03.650 07:53:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:03.650 07:53:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:03.650 07:53:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:03.650 07:53:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:03.650 07:53:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:03.650 07:53:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:03.650 07:53:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:03.650 07:53:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:03.650 07:53:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:03.650 07:53:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:03.650 07:53:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:03.650 07:53:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:03.650 07:53:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:03.650 07:53:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:03.650 07:53:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:03.650 07:53:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:03.650 07:53:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:03.650 07:53:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:03.650 07:53:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:03.650 07:53:55 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:03.650 07:53:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:03.650 07:53:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:03.650 07:53:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:03.650 07:53:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:03.650 07:53:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:03.650 07:53:55 
accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:03.650 07:53:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:03.650 07:53:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:03.650 07:53:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:03.650 07:53:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:03.650 07:53:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:07:03.650 07:53:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:03.650 07:53:55 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:03.650 07:53:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:03.650 07:53:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:03.650 07:53:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:03.650 07:53:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:03.650 07:53:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:03.650 07:53:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:03.650 07:53:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:03.650 07:53:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:03.650 07:53:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:03.650 07:53:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:03.650 07:53:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:03.650 07:53:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:03.650 07:53:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:03.650 07:53:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:03.650 07:53:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:07:03.650 07:53:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:03.650 07:53:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:03.650 07:53:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:03.650 07:53:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:03.650 07:53:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:03.650 07:53:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:03.650 07:53:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:03.650 07:53:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:03.650 07:53:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:03.650 07:53:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:03.650 07:53:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:03.650 07:53:55 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:03.650 07:53:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:03.650 07:53:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:03.650 07:53:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:03.650 07:53:55 
accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:03.650 07:53:55 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:03.650 07:53:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:03.650 07:53:55 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.022 07:53:56 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:05.022 07:53:56 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.022 07:53:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.022 07:53:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.022 07:53:56 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:05.022 07:53:56 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.022 07:53:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.022 07:53:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.022 07:53:56 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:05.022 07:53:56 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.022 07:53:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.022 07:53:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.022 07:53:56 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:05.022 07:53:56 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.022 07:53:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.022 07:53:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.022 07:53:56 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:05.022 07:53:56 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.022 07:53:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.022 07:53:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.022 07:53:56 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:05.022 07:53:56 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.022 07:53:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.022 07:53:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.022 07:53:56 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:05.022 07:53:56 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.022 07:53:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.022 07:53:56 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.022 07:53:56 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:05.022 07:53:56 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:05.022 07:53:56 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:05.022 00:07:05.022 real 0m1.440s 00:07:05.022 user 0m1.296s 00:07:05.022 sys 0m0.148s 00:07:05.022 07:53:56 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:05.022 07:53:56 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:05.022 ************************************ 00:07:05.022 END 
TEST accel_decomp_full_mthread 00:07:05.022 ************************************ 00:07:05.022 07:53:56 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:05.022 07:53:56 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:07:05.022 07:53:56 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:05.022 07:53:56 accel -- accel/accel.sh@137 -- # build_accel_config 00:07:05.022 07:53:56 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:05.022 07:53:56 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:05.022 07:53:56 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:05.022 07:53:56 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:05.022 07:53:56 accel -- common/autotest_common.sh@10 -- # set +x 00:07:05.022 07:53:56 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:05.022 07:53:56 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:05.022 07:53:56 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:05.022 07:53:56 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:05.022 07:53:56 accel -- accel/accel.sh@41 -- # jq -r . 00:07:05.022 ************************************ 00:07:05.022 START TEST accel_dif_functional_tests 00:07:05.022 ************************************ 00:07:05.022 07:53:56 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:05.022 [2024-07-13 07:53:56.528093] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:07:05.022 [2024-07-13 07:53:56.528173] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1838224 ] 00:07:05.022 EAL: No free 2048 kB hugepages reported on node 1 00:07:05.022 [2024-07-13 07:53:56.594950] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:05.022 [2024-07-13 07:53:56.688680] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:05.022 [2024-07-13 07:53:56.688736] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:05.022 [2024-07-13 07:53:56.688739] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.281 00:07:05.281 00:07:05.281 CUnit - A unit testing framework for C - Version 2.1-3 00:07:05.281 http://cunit.sourceforge.net/ 00:07:05.281 00:07:05.281 00:07:05.281 Suite: accel_dif 00:07:05.281 Test: verify: DIF generated, GUARD check ...passed 00:07:05.281 Test: verify: DIF generated, APPTAG check ...passed 00:07:05.281 Test: verify: DIF generated, REFTAG check ...passed 00:07:05.281 Test: verify: DIF not generated, GUARD check ...[2024-07-13 07:53:56.780875] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:05.281 passed 00:07:05.281 Test: verify: DIF not generated, APPTAG check ...[2024-07-13 07:53:56.780959] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:05.281 passed 00:07:05.281 Test: verify: DIF not generated, REFTAG check ...[2024-07-13 07:53:56.780996] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:05.281 passed 00:07:05.281 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:05.281 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-13 
07:53:56.781069] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:05.281 passed 00:07:05.281 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:05.281 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:05.281 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:05.281 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-13 07:53:56.781235] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:05.281 passed 00:07:05.281 Test: verify copy: DIF generated, GUARD check ...passed 00:07:05.281 Test: verify copy: DIF generated, APPTAG check ...passed 00:07:05.281 Test: verify copy: DIF generated, REFTAG check ...passed 00:07:05.281 Test: verify copy: DIF not generated, GUARD check ...[2024-07-13 07:53:56.781382] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:05.281 passed 00:07:05.281 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-13 07:53:56.781418] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:05.281 passed 00:07:05.281 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-13 07:53:56.781451] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:05.281 passed 00:07:05.281 Test: generate copy: DIF generated, GUARD check ...passed 00:07:05.281 Test: generate copy: DIF generated, APPTAG check ...passed 00:07:05.281 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:05.281 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:05.281 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:05.281 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:05.281 Test: generate copy: iovecs-len validate ...[2024-07-13 07:53:56.781662] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size.
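The CUnit cases above come from the standalone DIF binary rather than accel_perf; the *ERROR* lines are the expected negative-path output when the suite deliberately corrupts the guard, app tag, or ref tag. A hand-run under the same workspace assumptions would feed an accel JSON config on fd 62; the empty object passed via a here-string is this sketch's assumption and exercises the software path:
  "$SPDK_DIR/test/accel/dif/dif" -c /dev/fd/62 62<<< '{}'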
00:07:05.281 passed 00:07:05.281 Test: generate copy: buffer alignment validate ...passed 00:07:05.281 00:07:05.281 Run Summary: Type Total Ran Passed Failed Inactive 00:07:05.281 suites 1 1 n/a 0 0 00:07:05.281 tests 26 26 26 0 0 00:07:05.281 asserts 115 115 115 0 n/a 00:07:05.281 00:07:05.281 Elapsed time = 0.003 seconds 00:07:05.281 00:07:05.281 real 0m0.484s 00:07:05.281 user 0m0.740s 00:07:05.281 sys 0m0.180s 00:07:05.282 07:53:56 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:05.282 07:53:56 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:07:05.282 ************************************ 00:07:05.282 END TEST accel_dif_functional_tests 00:07:05.282 ************************************ 00:07:05.282 07:53:56 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:05.282 00:07:05.282 real 0m31.659s 00:07:05.282 user 0m35.004s 00:07:05.282 sys 0m4.601s 00:07:05.282 07:53:56 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:05.282 07:53:56 accel -- common/autotest_common.sh@10 -- # set +x 00:07:05.282 ************************************ 00:07:05.282 END TEST accel 00:07:05.282 ************************************ 00:07:05.282 07:53:57 -- common/autotest_common.sh@1142 -- # return 0 00:07:05.282 07:53:57 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:05.282 07:53:57 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:05.282 07:53:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:05.282 07:53:57 -- common/autotest_common.sh@10 -- # set +x 00:07:05.540 ************************************ 00:07:05.540 START TEST accel_rpc 00:07:05.540 ************************************ 00:07:05.540 07:53:57 accel_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:05.540 * Looking for test storage... 00:07:05.540 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:07:05.540 07:53:57 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:05.540 07:53:57 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=1838416 00:07:05.540 07:53:57 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:05.540 07:53:57 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 1838416 00:07:05.540 07:53:57 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 1838416 ']' 00:07:05.540 07:53:57 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:05.540 07:53:57 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:05.540 07:53:57 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:05.540 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:05.540 07:53:57 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:05.540 07:53:57 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:05.540 [2024-07-13 07:53:57.143852] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
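The accel_assign_opcode flow that follows can be driven by hand against a target started this way; all four RPC names appear in the trace below, while the shell glue (backgrounding, and the harness-style wait for the RPC socket) is this sketch's own:
  "$SPDK_DIR/build/bin/spdk_tgt" --wait-for-rpc &
  # (wait for /var/tmp/spdk.sock to appear before issuing RPCs)
  "$SPDK_DIR/scripts/rpc.py" accel_assign_opc -o copy -m software
  "$SPDK_DIR/scripts/rpc.py" framework_start_init
  "$SPDK_DIR/scripts/rpc.py" accel_get_opc_assignments | jq -r .copy   # expect: software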
00:07:05.540 [2024-07-13 07:53:57.143960] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1838416 ] 00:07:05.540 EAL: No free 2048 kB hugepages reported on node 1 00:07:05.540 [2024-07-13 07:53:57.209211] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.801 [2024-07-13 07:53:57.301456] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.801 07:53:57 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:05.801 07:53:57 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:05.801 07:53:57 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:05.801 07:53:57 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:05.801 07:53:57 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:05.801 07:53:57 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:05.801 07:53:57 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:05.801 07:53:57 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:05.801 07:53:57 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:05.801 07:53:57 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:05.801 ************************************ 00:07:05.801 START TEST accel_assign_opcode 00:07:05.801 ************************************ 00:07:05.801 07:53:57 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:07:05.801 07:53:57 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:05.801 07:53:57 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:05.801 07:53:57 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:05.801 [2024-07-13 07:53:57.410203] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:05.801 07:53:57 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:05.801 07:53:57 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:05.801 07:53:57 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:05.801 07:53:57 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:05.801 [2024-07-13 07:53:57.418202] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:05.801 07:53:57 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:05.801 07:53:57 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:05.801 07:53:57 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:05.801 07:53:57 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:06.103 07:53:57 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:06.103 07:53:57 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:06.103 07:53:57 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:06.103 07:53:57 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 
00:07:06.103 07:53:57 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:06.103 07:53:57 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:07:06.103 07:53:57 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:06.103 software 00:07:06.103 00:07:06.103 real 0m0.300s 00:07:06.103 user 0m0.042s 00:07:06.103 sys 0m0.005s 00:07:06.103 07:53:57 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:06.103 07:53:57 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:06.103 ************************************ 00:07:06.103 END TEST accel_assign_opcode 00:07:06.103 ************************************ 00:07:06.103 07:53:57 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:07:06.103 07:53:57 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 1838416 00:07:06.103 07:53:57 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 1838416 ']' 00:07:06.103 07:53:57 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 1838416 00:07:06.103 07:53:57 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:07:06.103 07:53:57 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:06.103 07:53:57 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1838416 00:07:06.103 07:53:57 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:06.103 07:53:57 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:06.103 07:53:57 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1838416' 00:07:06.103 killing process with pid 1838416 00:07:06.103 07:53:57 accel_rpc -- common/autotest_common.sh@967 -- # kill 1838416 00:07:06.103 07:53:57 accel_rpc -- common/autotest_common.sh@972 -- # wait 1838416 00:07:06.670 00:07:06.670 real 0m1.103s 00:07:06.670 user 0m1.065s 00:07:06.670 sys 0m0.429s 00:07:06.670 07:53:58 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:06.670 07:53:58 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:06.670 ************************************ 00:07:06.670 END TEST accel_rpc 00:07:06.670 ************************************ 00:07:06.670 07:53:58 -- common/autotest_common.sh@1142 -- # return 0 00:07:06.670 07:53:58 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:06.670 07:53:58 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:06.670 07:53:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:06.670 07:53:58 -- common/autotest_common.sh@10 -- # set +x 00:07:06.670 ************************************ 00:07:06.670 START TEST app_cmdline 00:07:06.670 ************************************ 00:07:06.670 07:53:58 app_cmdline -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:06.670 * Looking for test storage... 
00:07:06.670 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:06.670 07:53:58 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:06.670 07:53:58 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1838622 00:07:06.670 07:53:58 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:06.670 07:53:58 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1838622 00:07:06.670 07:53:58 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 1838622 ']' 00:07:06.670 07:53:58 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:06.670 07:53:58 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:06.670 07:53:58 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:06.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:06.670 07:53:58 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:06.670 07:53:58 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:06.670 [2024-07-13 07:53:58.288823] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:07:06.670 [2024-07-13 07:53:58.288940] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1838622 ] 00:07:06.670 EAL: No free 2048 kB hugepages reported on node 1 00:07:06.670 [2024-07-13 07:53:58.345457] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.928 [2024-07-13 07:53:58.430361] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.186 07:53:58 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:07.186 07:53:58 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:07:07.186 07:53:58 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:07.444 { 00:07:07.444 "version": "SPDK v24.09-pre git sha1 719d03c6a", 00:07:07.444 "fields": { 00:07:07.444 "major": 24, 00:07:07.444 "minor": 9, 00:07:07.444 "patch": 0, 00:07:07.444 "suffix": "-pre", 00:07:07.444 "commit": "719d03c6a" 00:07:07.444 } 00:07:07.444 } 00:07:07.444 07:53:58 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:07.444 07:53:58 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:07.444 07:53:58 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:07.444 07:53:58 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:07.444 07:53:58 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:07.444 07:53:58 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.444 07:53:58 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:07.444 07:53:58 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:07.444 07:53:58 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:07.444 07:53:58 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.444 07:53:58 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:07.444 07:53:58 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods 
spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:07.444 07:53:58 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:07.444 07:53:58 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:07:07.444 07:53:58 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:07.444 07:53:58 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:07.444 07:53:58 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:07.444 07:53:58 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:07.444 07:53:58 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:07.444 07:53:58 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:07.444 07:53:58 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:07.444 07:53:58 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:07.444 07:53:58 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:07.444 07:53:58 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:07.703 request: 00:07:07.703 { 00:07:07.703 "method": "env_dpdk_get_mem_stats", 00:07:07.703 "req_id": 1 00:07:07.703 } 00:07:07.703 Got JSON-RPC error response 00:07:07.703 response: 00:07:07.703 { 00:07:07.703 "code": -32601, 00:07:07.703 "message": "Method not found" 00:07:07.703 } 00:07:07.703 07:53:59 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:07:07.703 07:53:59 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:07.703 07:53:59 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:07.703 07:53:59 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:07.703 07:53:59 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1838622 00:07:07.703 07:53:59 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 1838622 ']' 00:07:07.703 07:53:59 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 1838622 00:07:07.703 07:53:59 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:07:07.703 07:53:59 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:07.703 07:53:59 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1838622 00:07:07.703 07:53:59 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:07.703 07:53:59 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:07.703 07:53:59 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1838622' 00:07:07.703 killing process with pid 1838622 00:07:07.703 07:53:59 app_cmdline -- common/autotest_common.sh@967 -- # kill 1838622 00:07:07.703 07:53:59 app_cmdline -- common/autotest_common.sh@972 -- # wait 1838622 00:07:07.961 00:07:07.961 real 0m1.493s 00:07:07.961 user 0m1.835s 00:07:07.961 sys 0m0.454s 00:07:07.961 07:53:59 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 
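In short, because the target here was launched with --rpcs-allowed spdk_get_version,rpc_get_methods, only those two methods are served and anything else gets the -32601 response shown above; a hand check against such a target (a sketch, same workspace assumptions as earlier) would be:
  "$SPDK_DIR/scripts/rpc.py" spdk_get_version        # allowed, returns the version JSON above
  "$SPDK_DIR/scripts/rpc.py" env_dpdk_get_mem_stats  # rejected: code -32601, "Method not found"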
00:07:07.961 07:53:59 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:07.961 ************************************ 00:07:07.961 END TEST app_cmdline 00:07:07.961 ************************************ 00:07:08.219 07:53:59 -- common/autotest_common.sh@1142 -- # return 0 00:07:08.219 07:53:59 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:08.219 07:53:59 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:08.219 07:53:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:08.219 07:53:59 -- common/autotest_common.sh@10 -- # set +x 00:07:08.219 ************************************ 00:07:08.219 START TEST version 00:07:08.219 ************************************ 00:07:08.219 07:53:59 version -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:08.219 * Looking for test storage... 00:07:08.219 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:08.219 07:53:59 version -- app/version.sh@17 -- # get_header_version major 00:07:08.219 07:53:59 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:08.219 07:53:59 version -- app/version.sh@14 -- # cut -f2 00:07:08.219 07:53:59 version -- app/version.sh@14 -- # tr -d '"' 00:07:08.219 07:53:59 version -- app/version.sh@17 -- # major=24 00:07:08.219 07:53:59 version -- app/version.sh@18 -- # get_header_version minor 00:07:08.219 07:53:59 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:08.219 07:53:59 version -- app/version.sh@14 -- # cut -f2 00:07:08.219 07:53:59 version -- app/version.sh@14 -- # tr -d '"' 00:07:08.219 07:53:59 version -- app/version.sh@18 -- # minor=9 00:07:08.219 07:53:59 version -- app/version.sh@19 -- # get_header_version patch 00:07:08.219 07:53:59 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:08.219 07:53:59 version -- app/version.sh@14 -- # cut -f2 00:07:08.219 07:53:59 version -- app/version.sh@14 -- # tr -d '"' 00:07:08.219 07:53:59 version -- app/version.sh@19 -- # patch=0 00:07:08.219 07:53:59 version -- app/version.sh@20 -- # get_header_version suffix 00:07:08.219 07:53:59 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:08.219 07:53:59 version -- app/version.sh@14 -- # tr -d '"' 00:07:08.219 07:53:59 version -- app/version.sh@14 -- # cut -f2 00:07:08.219 07:53:59 version -- app/version.sh@20 -- # suffix=-pre 00:07:08.219 07:53:59 version -- app/version.sh@22 -- # version=24.9 00:07:08.219 07:53:59 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:08.219 07:53:59 version -- app/version.sh@28 -- # version=24.9rc0 00:07:08.219 07:53:59 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:08.219 07:53:59 version -- app/version.sh@30 -- # python3 -c 'import spdk; 
print(spdk.__version__)' 00:07:08.219 07:53:59 version -- app/version.sh@30 -- # py_version=24.9rc0 00:07:08.219 07:53:59 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:07:08.219 00:07:08.219 real 0m0.110s 00:07:08.219 user 0m0.052s 00:07:08.219 sys 0m0.079s 00:07:08.219 07:53:59 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:08.219 07:53:59 version -- common/autotest_common.sh@10 -- # set +x 00:07:08.219 ************************************ 00:07:08.219 END TEST version 00:07:08.219 ************************************ 00:07:08.219 07:53:59 -- common/autotest_common.sh@1142 -- # return 0 00:07:08.219 07:53:59 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:07:08.219 07:53:59 -- spdk/autotest.sh@198 -- # uname -s 00:07:08.219 07:53:59 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:07:08.219 07:53:59 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:08.219 07:53:59 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:08.219 07:53:59 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:07:08.219 07:53:59 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:07:08.219 07:53:59 -- spdk/autotest.sh@260 -- # timing_exit lib 00:07:08.219 07:53:59 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:08.219 07:53:59 -- common/autotest_common.sh@10 -- # set +x 00:07:08.219 07:53:59 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:07:08.219 07:53:59 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:07:08.219 07:53:59 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:07:08.219 07:53:59 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:07:08.219 07:53:59 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:07:08.219 07:53:59 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:07:08.219 07:53:59 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:08.219 07:53:59 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:08.219 07:53:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:08.219 07:53:59 -- common/autotest_common.sh@10 -- # set +x 00:07:08.219 ************************************ 00:07:08.219 START TEST nvmf_tcp 00:07:08.219 ************************************ 00:07:08.219 07:53:59 nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:08.478 * Looking for test storage... 00:07:08.478 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:08.478 07:53:59 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:08.478 07:53:59 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:08.478 07:53:59 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:08.478 07:53:59 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:07:08.478 07:53:59 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:08.478 07:53:59 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:08.478 07:53:59 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:08.478 07:53:59 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:08.478 07:53:59 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:08.478 07:53:59 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:08.478 07:53:59 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:08.478 07:53:59 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:08.478 07:53:59 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:08.478 07:53:59 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:08.478 07:53:59 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:08.478 07:53:59 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:08.478 07:53:59 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:08.478 07:53:59 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:08.478 07:53:59 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:08.478 07:53:59 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:08.478 07:53:59 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:08.478 07:53:59 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:08.478 07:53:59 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:08.478 07:53:59 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:08.478 07:53:59 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.478 07:53:59 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.478 07:53:59 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.478 07:53:59 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:07:08.478 07:53:59 nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.478 07:53:59 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:07:08.478 07:53:59 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:08.478 07:53:59 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:08.478 07:53:59 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:08.478 07:53:59 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:08.478 07:53:59 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:08.478 07:53:59 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:08.478 07:53:59 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:08.478 07:53:59 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:08.478 07:53:59 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:08.478 07:53:59 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:08.478 07:53:59 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:08.478 07:53:59 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:08.478 07:53:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:08.478 07:53:59 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:07:08.478 07:53:59 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:08.478 07:53:59 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:08.478 07:53:59 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:08.478 07:53:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:08.478 ************************************ 00:07:08.478 START TEST nvmf_example 00:07:08.478 ************************************ 00:07:08.478 07:54:00 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:08.478 * Looking for test storage... 
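For reference, the version test traced earlier derives each field by scraping include/spdk/version.h; a standalone sketch of that grep/cut/tr pattern, assuming the same tab-delimited header layout:

    # Extract '#define SPDK_VERSION_MAJOR 24' style fields from version.h
    # (values in comments are the ones this run reported).
    get_header_version() {
        grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" include/spdk/version.h |
            cut -f2 | tr -d '"'
    }
    major=$(get_header_version MAJOR)   # 24
    minor=$(get_header_version MINOR)   # 9
    echo "${major}.${minor}"            # 24.9, consistent with py_version=24.9rc0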
00:07:08.478 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:08.479 07:54:00 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:08.479 07:54:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:07:08.479 07:54:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:08.479 07:54:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:08.479 07:54:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:08.479 07:54:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:08.479 07:54:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:08.479 07:54:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:08.479 07:54:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:08.479 07:54:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:08.479 07:54:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:08.479 07:54:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:08.479 07:54:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:08.479 07:54:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:08.479 07:54:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:08.479 07:54:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:08.479 07:54:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:08.479 07:54:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:08.479 07:54:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:08.479 07:54:00 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:08.479 07:54:00 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:08.479 07:54:00 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:08.479 07:54:00 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.479 07:54:00 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.479 07:54:00 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.479 07:54:00 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:07:08.479 07:54:00 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.479 07:54:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:07:08.479 07:54:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:08.479 07:54:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:08.479 07:54:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:08.479 07:54:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:08.479 07:54:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:08.479 07:54:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:08.479 07:54:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:08.479 07:54:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:08.479 07:54:00 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:07:08.479 07:54:00 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:07:08.479 07:54:00 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:07:08.479 07:54:00 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:07:08.479 07:54:00 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:07:08.479 07:54:00 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:07:08.479 07:54:00 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:07:08.479 07:54:00 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:07:08.479 07:54:00 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:07:08.479 07:54:00 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:08.479 07:54:00 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:07:08.479 07:54:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:08.479 07:54:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:08.479 07:54:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:08.479 07:54:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:08.479 07:54:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:08.479 07:54:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:08.479 07:54:00 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:08.479 07:54:00 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:08.479 07:54:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:08.479 07:54:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:08.479 07:54:00 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:07:08.479 07:54:00 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:10.379 07:54:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:10.379 07:54:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:07:10.379 07:54:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:10.379 07:54:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:10.379 07:54:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:10.379 07:54:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:10.379 07:54:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:10.379 07:54:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:07:10.379 07:54:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:10.379 07:54:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:07:10.379 07:54:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:07:10.379 07:54:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:07:10.380 07:54:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:07:10.380 07:54:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:07:10.380 07:54:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:07:10.380 07:54:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:10.380 07:54:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:10.380 07:54:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:10.380 07:54:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:10.380 07:54:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:10.380 07:54:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:10.380 07:54:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:10.380 07:54:01 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:10.380 07:54:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:10.380 07:54:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:10.380 07:54:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:10.380 07:54:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:10.380 07:54:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:10.380 07:54:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:10.380 07:54:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:10.380 07:54:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:10.380 07:54:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:10.380 07:54:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:10.380 07:54:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:10.380 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:10.380 07:54:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:10.380 07:54:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:10.380 07:54:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:10.380 07:54:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:10.380 07:54:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:10.380 07:54:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:10.380 07:54:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:10.380 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:10.380 07:54:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:10.380 07:54:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:10.380 07:54:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:10.380 07:54:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:10.380 07:54:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:10.380 07:54:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:10.380 07:54:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:10.380 07:54:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:10.380 07:54:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:10.380 07:54:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:10.380 07:54:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:10.380 07:54:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:10.380 07:54:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:10.380 07:54:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:10.380 07:54:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:10.380 07:54:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:10.380 Found net devices under 
0000:0a:00.0: cvl_0_0 00:07:10.380 07:54:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:10.380 07:54:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:10.380 07:54:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:10.380 07:54:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:10.380 07:54:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:10.380 07:54:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:10.380 07:54:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:10.380 07:54:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:10.380 07:54:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:10.380 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:10.380 07:54:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:10.380 07:54:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:10.380 07:54:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:07:10.380 07:54:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:10.380 07:54:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:10.380 07:54:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:10.380 07:54:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:10.380 07:54:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:10.380 07:54:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:10.380 07:54:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:10.380 07:54:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:10.380 07:54:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:10.380 07:54:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:10.380 07:54:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:10.380 07:54:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:10.380 07:54:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:10.380 07:54:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:10.380 07:54:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:10.380 07:54:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:10.380 07:54:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:10.380 07:54:01 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:10.380 07:54:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:10.380 07:54:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:10.380 07:54:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:10.380 07:54:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 
-p tcp --dport 4420 -j ACCEPT 00:07:10.380 07:54:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:10.380 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:10.380 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.153 ms 00:07:10.380 00:07:10.380 --- 10.0.0.2 ping statistics --- 00:07:10.380 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:10.380 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:07:10.380 07:54:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:10.380 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:10.380 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:07:10.380 00:07:10.380 --- 10.0.0.1 ping statistics --- 00:07:10.380 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:10.380 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:07:10.380 07:54:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:10.380 07:54:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:07:10.380 07:54:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:10.380 07:54:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:10.380 07:54:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:10.380 07:54:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:10.380 07:54:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:10.380 07:54:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:10.380 07:54:02 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:10.380 07:54:02 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:07:10.380 07:54:02 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:07:10.380 07:54:02 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:10.380 07:54:02 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:10.380 07:54:02 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:07:10.380 07:54:02 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:07:10.380 07:54:02 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=1840634 00:07:10.380 07:54:02 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:07:10.380 07:54:02 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:10.380 07:54:02 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 1840634 00:07:10.380 07:54:02 nvmf_tcp.nvmf_example -- common/autotest_common.sh@829 -- # '[' -z 1840634 ']' 00:07:10.380 07:54:02 nvmf_tcp.nvmf_example -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:10.380 07:54:02 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:10.380 07:54:02 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:10.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
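The nvmf_tcp_init steps just traced isolate one port of the NIC pair in a network namespace so that target (10.0.0.2) and initiator (10.0.0.1) can exercise real hardware on a single host. Condensed, the bring-up amounts to the following (cvl_0_0/cvl_0_1 are this machine's ice-driver port names):

    # Target port goes into its own namespace; initiator port stays in the root ns.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Admit NVMe/TCP traffic (port 4420) and verify reachability both ways.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1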
00:07:10.380 07:54:02 nvmf_tcp.nvmf_example -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:10.380 07:54:02 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:10.639 EAL: No free 2048 kB hugepages reported on node 1 00:07:10.639 07:54:02 nvmf_tcp.nvmf_example -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:10.639 07:54:02 nvmf_tcp.nvmf_example -- common/autotest_common.sh@862 -- # return 0 00:07:10.639 07:54:02 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:07:10.639 07:54:02 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:10.639 07:54:02 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:10.897 07:54:02 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:10.897 07:54:02 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:10.897 07:54:02 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:10.897 07:54:02 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:10.897 07:54:02 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:07:10.897 07:54:02 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:10.897 07:54:02 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:10.897 07:54:02 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:10.897 07:54:02 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:07:10.897 07:54:02 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:10.897 07:54:02 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:10.897 07:54:02 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:10.897 07:54:02 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:10.897 07:54:02 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:07:10.897 07:54:02 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:10.897 07:54:02 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:10.897 07:54:02 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:10.897 07:54:02 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:10.897 07:54:02 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:10.897 07:54:02 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:10.897 07:54:02 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:10.897 07:54:02 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:10.897 07:54:02 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:07:10.897 07:54:02 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:07:10.897 EAL: No free 2048 kB hugepages reported on node 1 
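Spelled out, the rpc_cmd sequence above is the standard five-step TCP target bring-up followed by a 10-second initiator run; the equivalent standalone commands, with every value copied from this trace (rpc.py talks to the default /var/tmp/spdk.sock):

    # Target side: transport, backing bdev, subsystem, namespace, listener.
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512   # 64 MiB, 512 B blocks -> Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # Initiator side: queue depth 64, 4 KiB random I/O, 30% reads, 10 seconds.
    ./build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'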
00:07:23.088 Initializing NVMe Controllers 00:07:23.088 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:23.088 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:23.088 Initialization complete. Launching workers. 00:07:23.088 ======================================================== 00:07:23.088 Latency(us) 00:07:23.088 Device Information : IOPS MiB/s Average min max 00:07:23.088 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14061.00 54.93 4553.15 889.89 15310.72 00:07:23.088 ======================================================== 00:07:23.088 Total : 14061.00 54.93 4553.15 889.89 15310.72 00:07:23.088 00:07:23.088 07:54:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:07:23.088 07:54:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:07:23.088 07:54:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:23.088 07:54:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:07:23.088 07:54:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:23.088 07:54:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:07:23.088 07:54:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:23.088 07:54:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:23.088 rmmod nvme_tcp 00:07:23.088 rmmod nvme_fabrics 00:07:23.088 rmmod nvme_keyring 00:07:23.088 07:54:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:23.088 07:54:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:07:23.088 07:54:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:07:23.088 07:54:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 1840634 ']' 00:07:23.088 07:54:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 1840634 00:07:23.088 07:54:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@948 -- # '[' -z 1840634 ']' 00:07:23.088 07:54:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # kill -0 1840634 00:07:23.088 07:54:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # uname 00:07:23.088 07:54:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:23.088 07:54:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1840634 00:07:23.088 07:54:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # process_name=nvmf 00:07:23.088 07:54:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@958 -- # '[' nvmf = sudo ']' 00:07:23.088 07:54:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1840634' 00:07:23.088 killing process with pid 1840634 00:07:23.088 07:54:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@967 -- # kill 1840634 00:07:23.088 07:54:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@972 -- # wait 1840634 00:07:23.088 nvmf threads initialize successfully 00:07:23.088 bdev subsystem init successfully 00:07:23.088 created a nvmf target service 00:07:23.088 create targets's poll groups done 00:07:23.088 all subsystems of target started 00:07:23.088 nvmf target is running 00:07:23.088 all subsystems of target stopped 00:07:23.088 destroy targets's poll groups done 00:07:23.089 destroyed the nvmf target service 00:07:23.089 bdev subsystem finish successfully 00:07:23.089 nvmf threads destroy successfully 00:07:23.089 07:54:12 
nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:23.089 07:54:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:23.089 07:54:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:23.089 07:54:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:23.089 07:54:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:23.089 07:54:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:23.089 07:54:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:23.089 07:54:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:23.346 07:54:14 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:23.346 07:54:15 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:07:23.346 07:54:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:23.346 07:54:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:23.346 00:07:23.346 real 0m15.023s 00:07:23.346 user 0m38.116s 00:07:23.346 sys 0m4.618s 00:07:23.346 07:54:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:23.346 07:54:15 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:23.346 ************************************ 00:07:23.346 END TEST nvmf_example 00:07:23.346 ************************************ 00:07:23.346 07:54:15 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:23.346 07:54:15 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:23.346 07:54:15 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:23.346 07:54:15 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:23.346 07:54:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:23.346 ************************************ 00:07:23.346 START TEST nvmf_filesystem 00:07:23.346 ************************************ 00:07:23.346 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:23.607 * Looking for test storage... 
00:07:23.607 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:23.607 07:54:15 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:07:23.607 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:23.607 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:07:23.607 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:23.607 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:23.607 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:07:23.607 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:07:23.607 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:23.607 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:07:23.607 07:54:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:23.607 07:54:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:23.607 07:54:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:23.607 07:54:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:23.607 07:54:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:23.607 07:54:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:23.607 07:54:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:23.607 07:54:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:23.607 07:54:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:23.607 07:54:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:23.607 07:54:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:23.607 07:54:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:23.607 07:54:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:23.607 07:54:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:23.607 07:54:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:23.607 07:54:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:23.607 07:54:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:23.607 07:54:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:23.607 07:54:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:23.607 07:54:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:23.607 07:54:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:23.607 07:54:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:23.607 07:54:15 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:23.607 07:54:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:23.607 07:54:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:23.607 07:54:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:23.607 07:54:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:23.607 07:54:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:23.607 07:54:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:23.607 07:54:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:23.607 07:54:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:23.607 07:54:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:23.607 07:54:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:23.607 07:54:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:23.607 07:54:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:23.607 07:54:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:07:23.607 07:54:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:23.607 07:54:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:23.607 07:54:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:23.607 07:54:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:23.607 07:54:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:07:23.607 07:54:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:23.607 07:54:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:23.607 07:54:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:23.607 07:54:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:23.607 07:54:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:07:23.607 07:54:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:07:23.607 07:54:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:07:23.607 07:54:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:23.607 07:54:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:07:23.607 07:54:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:07:23.607 07:54:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:07:23.607 07:54:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:07:23.607 07:54:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:07:23.607 07:54:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:07:23.607 07:54:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:07:23.607 07:54:15 nvmf_tcp.nvmf_filesystem -- 
common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:07:23.607 07:54:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:07:23.607 07:54:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:07:23.607 07:54:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:07:23.607 07:54:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:07:23.607 07:54:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:07:23.607 07:54:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:23.607 07:54:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:07:23.607 07:54:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:07:23.607 07:54:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:07:23.607 07:54:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:07:23.607 07:54:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:07:23.607 07:54:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:23.607 07:54:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:07:23.607 07:54:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:07:23.607 07:54:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:07:23.607 07:54:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:07:23.607 07:54:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:07:23.608 07:54:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:07:23.608 07:54:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:07:23.608 07:54:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:07:23.608 07:54:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:07:23.608 07:54:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:07:23.608 07:54:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:07:23.608 07:54:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:23.608 07:54:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:07:23.608 07:54:15 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:07:23.608 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:23.608 07:54:15 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:23.608 07:54:15 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:23.608 07:54:15 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:23.608 07:54:15 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:23.608 07:54:15 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # 
_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:23.608 07:54:15 nvmf_tcp.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:23.608 07:54:15 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:23.608 07:54:15 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:23.608 07:54:15 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:23.608 07:54:15 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:23.608 07:54:15 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:23.608 07:54:15 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:23.608 07:54:15 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:23.608 07:54:15 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:07:23.608 07:54:15 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:23.608 #define SPDK_CONFIG_H 00:07:23.608 #define SPDK_CONFIG_APPS 1 00:07:23.608 #define SPDK_CONFIG_ARCH native 00:07:23.608 #undef SPDK_CONFIG_ASAN 00:07:23.608 #undef SPDK_CONFIG_AVAHI 00:07:23.608 #undef SPDK_CONFIG_CET 00:07:23.608 #define SPDK_CONFIG_COVERAGE 1 00:07:23.608 #define SPDK_CONFIG_CROSS_PREFIX 00:07:23.608 #undef SPDK_CONFIG_CRYPTO 00:07:23.608 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:23.608 #undef SPDK_CONFIG_CUSTOMOCF 00:07:23.608 #undef SPDK_CONFIG_DAOS 00:07:23.608 #define SPDK_CONFIG_DAOS_DIR 00:07:23.608 #define SPDK_CONFIG_DEBUG 1 00:07:23.608 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:23.608 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:07:23.608 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:07:23.608 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:23.608 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:23.608 #undef SPDK_CONFIG_DPDK_UADK 00:07:23.608 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:23.608 #define SPDK_CONFIG_EXAMPLES 1 00:07:23.608 #undef SPDK_CONFIG_FC 00:07:23.608 #define SPDK_CONFIG_FC_PATH 00:07:23.608 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:23.608 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:23.608 #undef SPDK_CONFIG_FUSE 00:07:23.608 #undef SPDK_CONFIG_FUZZER 00:07:23.608 #define SPDK_CONFIG_FUZZER_LIB 00:07:23.608 #undef SPDK_CONFIG_GOLANG 00:07:23.608 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:23.608 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:07:23.608 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:23.608 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:07:23.608 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:23.608 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:23.608 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:23.608 #define SPDK_CONFIG_IDXD 1 00:07:23.608 #define SPDK_CONFIG_IDXD_KERNEL 1 00:07:23.608 #undef SPDK_CONFIG_IPSEC_MB 00:07:23.608 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:23.608 #define SPDK_CONFIG_ISAL 1 00:07:23.608 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:23.608 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:23.608 #define 
SPDK_CONFIG_LIBDIR 00:07:23.608 #undef SPDK_CONFIG_LTO 00:07:23.608 #define SPDK_CONFIG_MAX_LCORES 128 00:07:23.608 #define SPDK_CONFIG_NVME_CUSE 1 00:07:23.608 #undef SPDK_CONFIG_OCF 00:07:23.608 #define SPDK_CONFIG_OCF_PATH 00:07:23.608 #define SPDK_CONFIG_OPENSSL_PATH 00:07:23.608 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:23.608 #define SPDK_CONFIG_PGO_DIR 00:07:23.608 #undef SPDK_CONFIG_PGO_USE 00:07:23.608 #define SPDK_CONFIG_PREFIX /usr/local 00:07:23.608 #undef SPDK_CONFIG_RAID5F 00:07:23.608 #undef SPDK_CONFIG_RBD 00:07:23.608 #define SPDK_CONFIG_RDMA 1 00:07:23.608 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:23.608 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:23.608 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:23.608 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:23.608 #define SPDK_CONFIG_SHARED 1 00:07:23.608 #undef SPDK_CONFIG_SMA 00:07:23.608 #define SPDK_CONFIG_TESTS 1 00:07:23.608 #undef SPDK_CONFIG_TSAN 00:07:23.608 #define SPDK_CONFIG_UBLK 1 00:07:23.608 #define SPDK_CONFIG_UBSAN 1 00:07:23.608 #undef SPDK_CONFIG_UNIT_TESTS 00:07:23.608 #undef SPDK_CONFIG_URING 00:07:23.608 #define SPDK_CONFIG_URING_PATH 00:07:23.608 #undef SPDK_CONFIG_URING_ZNS 00:07:23.608 #undef SPDK_CONFIG_USDT 00:07:23.608 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:23.608 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:23.608 #define SPDK_CONFIG_VFIO_USER 1 00:07:23.608 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:23.608 #define SPDK_CONFIG_VHOST 1 00:07:23.608 #define SPDK_CONFIG_VIRTIO 1 00:07:23.608 #undef SPDK_CONFIG_VTUNE 00:07:23.608 #define SPDK_CONFIG_VTUNE_DIR 00:07:23.608 #define SPDK_CONFIG_WERROR 1 00:07:23.608 #define SPDK_CONFIG_WPDK_DIR 00:07:23.608 #undef SPDK_CONFIG_XNVME 00:07:23.608 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:23.608 07:54:15 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:23.608 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:23.608 07:54:15 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:23.608 07:54:15 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:23.608 07:54:15 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:23.608 07:54:15 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.608 07:54:15 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.608 07:54:15 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.608 07:54:15 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:23.608 07:54:15 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.608 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:23.608 07:54:15 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:23.608 07:54:15 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:23.608 07:54:15 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:23.608 07:54:15 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:07:23.608 07:54:15 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:23.608 07:54:15 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:07:23.608 07:54:15 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:07:23.608 07:54:15 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:07:23.608 07:54:15 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:07:23.608 07:54:15 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:07:23.608 07:54:15 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:07:23.608 07:54:15 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:07:23.608 07:54:15 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 
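The applications.sh probe traced above decides whether this is a debug build by pattern-matching the generated config header against '#define SPDK_CONFIG_DEBUG'; the same guard in isolation (header path as laid out in this workspace) is just:

    # Only honor debug-only app knobs when the tree was configured with debug on.
    if [[ "$(< include/spdk/config.h)" == *'#define SPDK_CONFIG_DEBUG'* ]]; then
        : # debug build: SPDK_AUTOTEST_DEBUG_APPS may take effect
    fi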
00:07:23.608 07:54:15 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:07:23.608 07:54:15 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:07:23.608 07:54:15 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:07:23.608 07:54:15 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:07:23.608 07:54:15 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:07:23.608 07:54:15 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:07:23.608 07:54:15 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:07:23.608 07:54:15 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:07:23.608 07:54:15 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:07:23.608 07:54:15 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:07:23.609 07:54:15 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:07:23.609 07:54:15 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:07:23.609 07:54:15 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:07:23.609 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:07:23.609 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:07:23.609 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:07:23.609 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:23.609 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:07:23.609 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:07:23.609 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:07:23.609 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:23.609 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:07:23.609 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:07:23.609 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:07:23.609 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:07:23.609 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:07:23.609 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:07:23.609 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:07:23.609 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:07:23.609 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:07:23.609 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:07:23.609 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:07:23.609 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:23.609 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:07:23.609 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:07:23.609 
07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:07:23.609 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:07:23.609 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:07:23.609 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:07:23.609 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:07:23.609 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:07:23.609 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:07:23.609 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:07:23.609 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:07:23.609 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:07:23.609 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:07:23.609 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:07:23.609 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:07:23.609 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:07:23.609 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:07:23.609 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:23.609 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:07:23.609 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:07:23.609 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:07:23.609 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:07:23.609 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:07:23.609 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:23.609 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:07:23.609 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:07:23.609 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:07:23.609 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:07:23.609 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:07:23.609 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:07:23.609 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:07:23.609 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:07:23.609 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:07:23.609 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:07:23.609 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:07:23.609 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:07:23.609 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:07:23.609 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # export 
SPDK_TEST_LVOL 00:07:23.609 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:07:23.609 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:23.609 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:07:23.609 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:07:23.609 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:07:23.609 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:07:23.609 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:07:23.609 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:23.609 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:07:23.609 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:07:23.609 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:07:23.609 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:07:23.609 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:07:23.609 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:07:23.609 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:07:23.609 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:07:23.609 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:07:23.609 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:07:23.609 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:07:23.609 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:07:23.609 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # : v22.11.4 00:07:23.609 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:07:23.609 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:07:23.609 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:07:23.609 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:07:23.609 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:07:23.609 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:07:23.609 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:07:23.609 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:07:23.609 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:07:23.609 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:07:23.609 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:07:23.609 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:07:23.609 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:07:23.609 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:07:23.609 
07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:07:23.609 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:07:23.609 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:07:23.609 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:07:23.609 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:07:23.609 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:07:23.609 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:07:23.609 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:07:23.609 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:07:23.609 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:07:23.609 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:07:23.609 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:07:23.609 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:07:23.609 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:07:23.609 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:07:23.609 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:07:23.609 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:07:23.609 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:07:23.609 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:23.609 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:23.609 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:23.609 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:23.609 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:23.609 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:23.609 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:23.610 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:23.610 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:23.610 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:23.610 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:23.610 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:23.610 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:23.610 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # export 
PYTHONDONTWRITEBYTECODE=1 00:07:23.610 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:07:23.610 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:23.610 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:23.610 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:23.610 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:23.610 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:23.610 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:07:23.610 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:07:23.610 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:07:23.610 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:23.610 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:23.610 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:23.610 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:23.610 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:07:23.610 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:07:23.610 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:23.610 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:23.610 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:23.610 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:23.610 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:23.610 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:23.610 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:23.610 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:23.610 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 
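For reference, the sanitizer environment assembled in the trace above condenses to the following (values copied verbatim from the log; the suppression file is the one autotest_common.sh writes):

export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
echo 'leak:libfuse3.so' > /var/tmp/asan_suppression_file    # suppress the known libfuse3 leak
export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file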
00:07:23.610 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:23.610 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:23.610 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:23.610 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:07:23.610 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:07:23.610 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:07:23.610 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:07:23.610 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:07:23.610 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:07:23.610 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:07:23.610 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:07:23.610 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:07:23.610 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:07:23.610 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:07:23.610 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j48 00:07:23.610 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:07:23.610 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:07:23.610 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:07:23.610 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:07:23.610 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:07:23.610 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:07:23.610 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:07:23.610 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 1842835 ]] 00:07:23.610 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 1842835 00:07:23.610 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:07:23.610 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:07:23.610 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:07:23.610 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:07:23.610 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:07:23.610 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:07:23.610 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:07:23.610 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:07:23.610 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.ZwMSGZ 00:07:23.610 
07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:23.610 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:07:23.610 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:07:23.610 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.ZwMSGZ/tests/target /tmp/spdk.ZwMSGZ 00:07:23.610 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:07:23.610 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:23.610 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:07:23.610 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:07:23.610 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:07:23.610 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:07:23.610 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=67108864 00:07:23.610 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:07:23.610 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:07:23.610 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:23.610 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:07:23.610 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:07:23.610 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=953643008 00:07:23.610 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:07:23.610 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4330786816 00:07:23.610 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:23.610 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:07:23.610 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:07:23.610 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=53532389376 00:07:23.610 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=61994708992 00:07:23.610 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=8462319616 00:07:23.610 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:23.610 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:23.610 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:23.610 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=30941716480 00:07:23.610 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=30997352448 00:07:23.610 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 
-- # uses["$mount"]=55635968 00:07:23.610 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:23.610 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:23.610 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:23.610 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=12390182912 00:07:23.610 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=12398944256 00:07:23.610 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=8761344 00:07:23.610 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:23.610 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:23.611 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:23.611 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=30996488192 00:07:23.611 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=30997356544 00:07:23.611 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=868352 00:07:23.611 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:23.611 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:23.611 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:23.611 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=6199463936 00:07:23.611 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=6199468032 00:07:23.611 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:07:23.611 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:23.611 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:07:23.611 * Looking for test storage... 
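The storage probe that runs next boils down to this sketch (simplified from set_test_storage; the real helper parses df -T into the mounts/fss/sizes/avails arrays seen above, and the /tmp fallback here stands in for the mktemp-generated candidates):

#!/usr/bin/env bash
# Probe candidate directories and keep the first one whose backing
# filesystem has the ~2 GiB the tests request. df --output=avail
# reports 1 KiB blocks, hence the * 1024.
requested_size=2147483648
storage_candidates=("$PWD" /tmp)     # real list comes from $testdir + fallback
for target_dir in "${storage_candidates[@]}"; do
    avail_kib=$(df --output=avail "$target_dir" | tail -1)
    if ((avail_kib * 1024 >= requested_size)); then
        printf '* Found test storage at %s\n' "$target_dir"
        break
    fi
done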
00:07:23.611 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:07:23.611 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:07:23.611 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:23.611 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:23.611 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/ 00:07:23.611 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=53532389376 00:07:23.611 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:07:23.611 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:07:23.611 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:07:23.611 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:07:23.611 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:07:23.611 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # new_size=10676912128 00:07:23.611 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:23.611 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:23.611 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:23.611 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:23.611 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:23.611 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:07:23.611 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:07:23.611 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:07:23.611 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:23.611 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:23.611 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:07:23.611 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:07:23.611 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:23.611 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:23.611 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:07:23.611 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:07:23.611 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:23.611 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:07:23.611 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:23.611 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:07:23.611 07:54:15 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:23.611 07:54:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:07:23.611 07:54:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:23.611 07:54:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:23.611 07:54:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:23.611 07:54:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:23.611 07:54:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:23.611 07:54:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:23.611 07:54:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:23.611 07:54:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:23.611 07:54:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:23.611 07:54:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:23.611 07:54:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:23.611 07:54:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:23.611 07:54:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:23.611 07:54:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:23.611 07:54:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:23.611 07:54:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:23.611 07:54:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:23.611 07:54:15 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:23.611 07:54:15 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:23.611 07:54:15 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:23.611 07:54:15 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.611 07:54:15 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.611 07:54:15 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.611 07:54:15 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:23.611 07:54:15 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:23.611 07:54:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:07:23.611 07:54:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:23.611 07:54:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:23.611 07:54:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:23.611 07:54:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:23.611 07:54:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:23.611 07:54:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:23.611 07:54:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:23.611 07:54:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:23.612 07:54:15 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:07:23.612 07:54:15 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:07:23.612 07:54:15 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:07:23.612 07:54:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:23.612 07:54:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:23.612 07:54:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:23.612 07:54:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 
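The target command line that eventually gets launched is assembled from the fragments traced here; condensed into a sketch (array names match the trace, and the netns prefix is added later by nvmf_tcp_init):

NVMF_APP=(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt)
NVMF_APP_SHM_ID=0
NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)           # build_nvmf_app_args
NVMF_TARGET_NS_CMD=(ip netns exec cvl_0_0_ns_spdk)    # set later, in nvmf_tcp_init
NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
"${NVMF_APP[@]}" -m 0xF &                             # what nvmfappstart ends up running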
00:07:23.612 07:54:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:23.612 07:54:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:23.612 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:23.612 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:23.612 07:54:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:23.612 07:54:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:23.612 07:54:15 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:07:23.612 07:54:15 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:25.512 07:54:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:25.512 07:54:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:07:25.512 07:54:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:25.512 07:54:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:25.512 07:54:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:25.512 07:54:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:25.512 07:54:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:25.512 07:54:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:07:25.512 07:54:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:25.512 07:54:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:07:25.512 07:54:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:07:25.512 07:54:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:07:25.512 07:54:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:07:25.512 07:54:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:07:25.512 07:54:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:07:25.512 07:54:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:25.512 07:54:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:25.512 07:54:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:25.512 07:54:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:25.512 07:54:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:25.512 07:54:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:25.512 07:54:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:25.513 07:54:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:25.513 07:54:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:25.513 07:54:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:25.513 07:54:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:25.513 07:54:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 
00:07:25.513 07:54:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:25.513 07:54:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:25.513 07:54:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:25.513 07:54:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:25.513 07:54:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:25.513 07:54:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:25.513 07:54:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:25.513 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:25.513 07:54:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:25.513 07:54:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:25.513 07:54:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:25.513 07:54:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:25.513 07:54:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:25.513 07:54:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:25.513 07:54:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:25.513 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:25.513 07:54:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:25.513 07:54:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:25.513 07:54:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:25.513 07:54:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:25.513 07:54:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:25.513 07:54:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:25.513 07:54:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:25.513 07:54:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:25.513 07:54:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:25.513 07:54:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:25.513 07:54:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:25.513 07:54:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:25.513 07:54:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:25.513 07:54:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:25.513 07:54:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:25.513 07:54:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:25.513 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:25.513 07:54:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:25.513 07:54:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:25.513 07:54:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:25.513 07:54:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 
-- # [[ tcp == tcp ]] 00:07:25.513 07:54:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:25.513 07:54:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:25.513 07:54:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:25.513 07:54:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:25.513 07:54:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:25.513 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:25.513 07:54:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:25.513 07:54:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:25.513 07:54:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:07:25.513 07:54:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:25.513 07:54:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:25.513 07:54:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:25.513 07:54:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:25.513 07:54:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:25.513 07:54:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:25.513 07:54:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:25.513 07:54:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:25.513 07:54:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:25.513 07:54:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:25.513 07:54:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:25.513 07:54:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:25.513 07:54:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:25.513 07:54:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:25.513 07:54:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:25.513 07:54:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:25.513 07:54:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:25.513 07:54:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:25.513 07:54:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:25.513 07:54:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:25.513 07:54:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:25.513 07:54:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:25.771 07:54:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:25.771 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:25.771 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.175 ms 00:07:25.771 00:07:25.771 --- 10.0.0.2 ping statistics --- 00:07:25.771 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:25.771 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:07:25.771 07:54:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:25.771 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:25.771 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:07:25.771 00:07:25.771 --- 10.0.0.1 ping statistics --- 00:07:25.771 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:25.771 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:07:25.771 07:54:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:25.771 07:54:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:07:25.771 07:54:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:25.772 07:54:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:25.772 07:54:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:25.772 07:54:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:25.772 07:54:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:25.772 07:54:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:25.772 07:54:17 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:25.772 07:54:17 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:07:25.772 07:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:25.772 07:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:25.772 07:54:17 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:25.772 ************************************ 00:07:25.772 START TEST nvmf_filesystem_no_in_capsule 00:07:25.772 ************************************ 00:07:25.772 07:54:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 0 00:07:25.772 07:54:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:07:25.772 07:54:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:25.772 07:54:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:25.772 07:54:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:25.772 07:54:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:25.772 07:54:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=1844456 00:07:25.772 07:54:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:25.772 07:54:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 1844456 00:07:25.772 07:54:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 
1844456 ']' 00:07:25.772 07:54:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:25.772 07:54:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:25.772 07:54:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:25.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:25.772 07:54:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:25.772 07:54:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:25.772 [2024-07-13 07:54:17.360579] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:07:25.772 [2024-07-13 07:54:17.360657] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:25.772 EAL: No free 2048 kB hugepages reported on node 1 00:07:25.772 [2024-07-13 07:54:17.426675] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:26.030 [2024-07-13 07:54:17.521400] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:26.030 [2024-07-13 07:54:17.521449] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:26.030 [2024-07-13 07:54:17.521467] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:26.030 [2024-07-13 07:54:17.521480] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:26.030 [2024-07-13 07:54:17.521492] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
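waitforlisten then blocks until the freshly started target is reachable. A minimal stand-in, under the assumption that polling the UNIX RPC socket is enough (the real helper also honors the max_retries=100 and rpc_addr=/var/tmp/spdk.sock traced above):

nvmfpid=1844456
rpc_addr=/var/tmp/spdk.sock
for ((i = 0; i < 100; i++)); do
    kill -0 "$nvmfpid" || exit 1       # target died during startup
    [[ -S $rpc_addr ]] && break        # RPC socket is up, target is listening
    sleep 0.5
done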
00:07:26.030 [2024-07-13 07:54:17.521573] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:26.030 [2024-07-13 07:54:17.521640] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:26.030 [2024-07-13 07:54:17.521735] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:26.030 [2024-07-13 07:54:17.521737] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.030 07:54:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:26.030 07:54:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:07:26.030 07:54:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:26.030 07:54:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:26.030 07:54:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:26.030 07:54:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:26.030 07:54:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:26.030 07:54:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:26.030 07:54:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:26.030 07:54:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:26.030 [2024-07-13 07:54:17.673735] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:26.030 07:54:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:26.030 07:54:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:26.030 07:54:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:26.030 07:54:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:26.289 Malloc1 00:07:26.289 07:54:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:26.289 07:54:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:26.289 07:54:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:26.289 07:54:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:26.289 07:54:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:26.289 07:54:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:26.289 07:54:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:26.289 07:54:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@10 -- # set +x 00:07:26.289 07:54:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:26.289 07:54:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:26.289 07:54:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:26.289 07:54:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:26.289 [2024-07-13 07:54:17.841889] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:26.289 07:54:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:26.289 07:54:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:26.289 07:54:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:07:26.289 07:54:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:07:26.289 07:54:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:07:26.289 07:54:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:07:26.289 07:54:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:26.289 07:54:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:26.289 07:54:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:26.289 07:54:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:26.289 07:54:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:07:26.289 { 00:07:26.289 "name": "Malloc1", 00:07:26.289 "aliases": [ 00:07:26.289 "b19f48cc-8de8-4b8b-9177-7a1a512b88b3" 00:07:26.289 ], 00:07:26.289 "product_name": "Malloc disk", 00:07:26.289 "block_size": 512, 00:07:26.289 "num_blocks": 1048576, 00:07:26.289 "uuid": "b19f48cc-8de8-4b8b-9177-7a1a512b88b3", 00:07:26.289 "assigned_rate_limits": { 00:07:26.289 "rw_ios_per_sec": 0, 00:07:26.289 "rw_mbytes_per_sec": 0, 00:07:26.289 "r_mbytes_per_sec": 0, 00:07:26.289 "w_mbytes_per_sec": 0 00:07:26.289 }, 00:07:26.289 "claimed": true, 00:07:26.289 "claim_type": "exclusive_write", 00:07:26.289 "zoned": false, 00:07:26.289 "supported_io_types": { 00:07:26.289 "read": true, 00:07:26.289 "write": true, 00:07:26.289 "unmap": true, 00:07:26.289 "flush": true, 00:07:26.289 "reset": true, 00:07:26.289 "nvme_admin": false, 00:07:26.289 "nvme_io": false, 00:07:26.289 "nvme_io_md": false, 00:07:26.289 "write_zeroes": true, 00:07:26.289 "zcopy": true, 00:07:26.289 "get_zone_info": false, 00:07:26.289 "zone_management": false, 00:07:26.289 "zone_append": false, 00:07:26.289 "compare": false, 00:07:26.289 "compare_and_write": false, 00:07:26.289 "abort": true, 00:07:26.289 "seek_hole": false, 00:07:26.289 "seek_data": false, 00:07:26.289 "copy": true, 00:07:26.289 "nvme_iov_md": false 00:07:26.289 }, 00:07:26.289 "memory_domains": [ 00:07:26.289 { 
00:07:26.289 "dma_device_id": "system", 00:07:26.289 "dma_device_type": 1 00:07:26.289 }, 00:07:26.289 { 00:07:26.289 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:26.289 "dma_device_type": 2 00:07:26.289 } 00:07:26.289 ], 00:07:26.289 "driver_specific": {} 00:07:26.289 } 00:07:26.289 ]' 00:07:26.289 07:54:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:07:26.289 07:54:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:07:26.289 07:54:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:07:26.289 07:54:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:07:26.289 07:54:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:07:26.289 07:54:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:07:26.289 07:54:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:26.289 07:54:17 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:26.855 07:54:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:26.855 07:54:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:07:26.855 07:54:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:26.855 07:54:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:26.855 07:54:18 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:07:29.413 07:54:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:29.413 07:54:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:29.413 07:54:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:29.413 07:54:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:29.413 07:54:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:29.413 07:54:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:07:29.413 07:54:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:29.413 07:54:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:29.413 07:54:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:29.413 07:54:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # 
sec_size_to_bytes nvme0n1 00:07:29.413 07:54:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:29.413 07:54:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:29.413 07:54:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:29.413 07:54:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:29.413 07:54:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:29.413 07:54:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:29.413 07:54:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:29.413 07:54:20 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:29.979 07:54:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:30.912 07:54:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:07:30.912 07:54:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:30.912 07:54:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:30.912 07:54:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:30.912 07:54:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:30.912 ************************************ 00:07:30.912 START TEST filesystem_ext4 00:07:30.912 ************************************ 00:07:30.912 07:54:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:30.912 07:54:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:30.912 07:54:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:30.912 07:54:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:30.912 07:54:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:07:30.912 07:54:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:30.912 07:54:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:07:30.912 07:54:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local force 00:07:30.912 07:54:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:07:30.912 07:54:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:07:30.912 07:54:22 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:30.912 mke2fs 1.46.5 (30-Dec-2021) 00:07:30.912 Discarding device blocks: 0/522240 done 00:07:30.912 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:30.912 Filesystem UUID: dfda0b46-6913-45e5-bda2-9708ceb14981 00:07:30.912 Superblock backups stored on blocks: 00:07:30.912 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:30.912 00:07:30.912 Allocating group tables: 0/64 done 00:07:30.912 Writing inode tables: 0/64 done 00:07:31.169 Creating journal (8192 blocks): done 00:07:31.170 Writing superblocks and filesystem accounting information: 0/64 done 00:07:31.170 00:07:31.170 07:54:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@943 -- # return 0 00:07:31.170 07:54:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:31.427 07:54:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:31.427 07:54:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:07:31.427 07:54:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:31.427 07:54:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:07:31.427 07:54:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:31.427 07:54:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:31.427 07:54:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 1844456 00:07:31.427 07:54:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:31.427 07:54:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:31.427 07:54:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:31.427 07:54:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:31.427 00:07:31.427 real 0m0.581s 00:07:31.427 user 0m0.012s 00:07:31.427 sys 0m0.063s 00:07:31.427 07:54:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:31.427 07:54:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:31.427 ************************************ 00:07:31.427 END TEST filesystem_ext4 00:07:31.427 ************************************ 00:07:31.427 07:54:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:31.427 07:54:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:31.427 07:54:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:31.427 07:54:23 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:31.427 07:54:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:31.427 ************************************ 00:07:31.427 START TEST filesystem_btrfs 00:07:31.427 ************************************ 00:07:31.427 07:54:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:31.427 07:54:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:31.427 07:54:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:31.427 07:54:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:31.427 07:54:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:07:31.427 07:54:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:31.427 07:54:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:07:31.427 07:54:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local force 00:07:31.427 07:54:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:07:31.427 07:54:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:07:31.427 07:54:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:31.684 btrfs-progs v6.6.2 00:07:31.684 See https://btrfs.readthedocs.io for more information. 00:07:31.684 00:07:31.684 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:31.684 NOTE: several default settings have changed in version 5.15, please make sure 00:07:31.684 this does not affect your deployments: 00:07:31.684 - DUP for metadata (-m dup) 00:07:31.684 - enabled no-holes (-O no-holes) 00:07:31.684 - enabled free-space-tree (-R free-space-tree) 00:07:31.685 00:07:31.685 Label: (null) 00:07:31.685 UUID: cfa73e95-9c27-4803-8718-fd7882244560 00:07:31.685 Node size: 16384 00:07:31.685 Sector size: 4096 00:07:31.685 Filesystem size: 510.00MiB 00:07:31.685 Block group profiles: 00:07:31.685 Data: single 8.00MiB 00:07:31.685 Metadata: DUP 32.00MiB 00:07:31.685 System: DUP 8.00MiB 00:07:31.685 SSD detected: yes 00:07:31.685 Zoned device: no 00:07:31.685 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:31.685 Runtime features: free-space-tree 00:07:31.685 Checksum: crc32c 00:07:31.685 Number of devices: 1 00:07:31.685 Devices: 00:07:31.685 ID SIZE PATH 00:07:31.685 1 510.00MiB /dev/nvme0n1p1 00:07:31.685 00:07:31.685 07:54:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@943 -- # return 0 00:07:31.685 07:54:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:32.249 07:54:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:32.249 07:54:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:07:32.249 07:54:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:32.249 07:54:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:07:32.249 07:54:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:07:32.249 07:54:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:32.249 07:54:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 1844456 00:07:32.249 07:54:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:32.249 07:54:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:32.249 07:54:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:32.249 07:54:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:32.249 00:07:32.249 real 0m0.672s 00:07:32.249 user 0m0.021s 00:07:32.249 sys 0m0.112s 00:07:32.249 07:54:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:32.249 07:54:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:07:32.249 ************************************ 00:07:32.249 END TEST filesystem_btrfs 00:07:32.249 ************************************ 00:07:32.249 07:54:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:32.249 07:54:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:07:32.249 07:54:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:32.249 07:54:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:32.249 07:54:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:32.249 ************************************ 00:07:32.249 START TEST filesystem_xfs 00:07:32.249 ************************************ 00:07:32.249 07:54:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:07:32.249 07:54:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:07:32.249 07:54:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:32.249 07:54:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:32.249 07:54:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:07:32.249 07:54:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:32.249 07:54:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local i=0 00:07:32.249 07:54:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local force 00:07:32.249 07:54:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:07:32.249 07:54:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # force=-f 00:07:32.249 07:54:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:32.249 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:32.249 = sectsz=512 attr=2, projid32bit=1 00:07:32.249 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:32.249 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:32.249 data = bsize=4096 blocks=130560, imaxpct=25 00:07:32.249 = sunit=0 swidth=0 blks 00:07:32.249 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:32.249 log =internal log bsize=4096 blocks=16384, version=2 00:07:32.249 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:32.249 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:33.183 Discarding blocks...Done. 
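Each run_test filesystem_* pass above follows the same shape: pick the force flag for the fstype, mkfs the partition, then prove basic I/O with a mount/touch/sync/rm/umount cycle while confirming the target pid is still alive. A condensed sketch of that flow, simplified from the xtrace (the real make_filesystem retries on failure, and NVMF_PID stands in for the pid 1844456 seen in the log):

  make_filesystem() {
      local fstype=$1 dev_name=$2 force
      [ "$fstype" = ext4 ] && force=-F || force=-f   # ext4 wants -F, btrfs/xfs -f
      mkfs."$fstype" $force "$dev_name"
  }

  parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
  partprobe
  make_filesystem xfs /dev/nvme0n1p1

  mount /dev/nvme0n1p1 /mnt/device                   # basic I/O smoke test
  touch /mnt/device/aaa
  sync
  rm /mnt/device/aaa
  sync
  umount /mnt/device
  kill -0 "$NVMF_PID"                                # target must still be up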
00:07:33.183 07:54:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@943 -- # return 0 00:07:33.183 07:54:24 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:35.080 07:54:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:35.080 07:54:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:07:35.080 07:54:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:35.080 07:54:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:07:35.080 07:54:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:07:35.080 07:54:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:35.080 07:54:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 1844456 00:07:35.080 07:54:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:35.080 07:54:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:35.080 07:54:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:35.080 07:54:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:35.080 00:07:35.080 real 0m2.813s 00:07:35.080 user 0m0.015s 00:07:35.080 sys 0m0.063s 00:07:35.080 07:54:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:35.080 07:54:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:07:35.080 ************************************ 00:07:35.080 END TEST filesystem_xfs 00:07:35.080 ************************************ 00:07:35.080 07:54:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:35.080 07:54:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:35.080 07:54:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:07:35.080 07:54:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:35.384 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:35.384 07:54:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:35.384 07:54:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:07:35.384 07:54:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:35.384 07:54:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:35.384 07:54:26 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:35.384 07:54:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:35.384 07:54:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:07:35.384 07:54:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:35.384 07:54:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:35.384 07:54:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:35.384 07:54:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:35.384 07:54:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:35.384 07:54:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 1844456 00:07:35.384 07:54:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 1844456 ']' 00:07:35.384 07:54:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # kill -0 1844456 00:07:35.384 07:54:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # uname 00:07:35.384 07:54:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:35.384 07:54:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1844456 00:07:35.384 07:54:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:35.384 07:54:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:35.384 07:54:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1844456' 00:07:35.384 killing process with pid 1844456 00:07:35.384 07:54:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # kill 1844456 00:07:35.384 07:54:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # wait 1844456 00:07:35.641 07:54:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:07:35.641 00:07:35.641 real 0m10.031s 00:07:35.641 user 0m38.328s 00:07:35.641 sys 0m1.671s 00:07:35.641 07:54:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:35.641 07:54:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:35.641 ************************************ 00:07:35.641 END TEST nvmf_filesystem_no_in_capsule 00:07:35.641 ************************************ 00:07:35.641 07:54:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:07:35.641 07:54:27 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:07:35.641 07:54:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 
-le 1 ']' 00:07:35.641 07:54:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:35.641 07:54:27 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:35.898 ************************************ 00:07:35.898 START TEST nvmf_filesystem_in_capsule 00:07:35.898 ************************************ 00:07:35.898 07:54:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 4096 00:07:35.898 07:54:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:07:35.898 07:54:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:35.898 07:54:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:35.898 07:54:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:35.898 07:54:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:35.898 07:54:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=1845828 00:07:35.898 07:54:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:35.898 07:54:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 1845828 00:07:35.898 07:54:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 1845828 ']' 00:07:35.898 07:54:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:35.898 07:54:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:35.898 07:54:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:35.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:35.898 07:54:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:35.898 07:54:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:35.898 [2024-07-13 07:54:27.443282] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:07:35.898 [2024-07-13 07:54:27.443370] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:35.898 EAL: No free 2048 kB hugepages reported on node 1 00:07:35.898 [2024-07-13 07:54:27.508050] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:35.898 [2024-07-13 07:54:27.597354] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:35.898 [2024-07-13 07:54:27.597430] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:07:35.898 [2024-07-13 07:54:27.597458] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:35.898 [2024-07-13 07:54:27.597469] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:35.898 [2024-07-13 07:54:27.597479] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:35.898 [2024-07-13 07:54:27.597561] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:35.898 [2024-07-13 07:54:27.597628] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:35.898 [2024-07-13 07:54:27.597694] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:35.898 [2024-07-13 07:54:27.597697] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.155 07:54:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:36.155 07:54:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:07:36.155 07:54:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:36.155 07:54:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:36.155 07:54:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:36.155 07:54:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:36.155 07:54:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:36.155 07:54:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:07:36.155 07:54:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.155 07:54:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:36.155 [2024-07-13 07:54:27.752761] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:36.155 07:54:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.155 07:54:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:36.155 07:54:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.155 07:54:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:36.412 Malloc1 00:07:36.412 07:54:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.412 07:54:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:36.412 07:54:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.412 07:54:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:36.412 07:54:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.412 07:54:27 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:36.412 07:54:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.412 07:54:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:36.412 07:54:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.412 07:54:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:36.412 07:54:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.412 07:54:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:36.412 [2024-07-13 07:54:27.943126] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:36.412 07:54:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.412 07:54:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:36.412 07:54:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:07:36.412 07:54:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:07:36.412 07:54:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:07:36.412 07:54:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:07:36.412 07:54:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:36.412 07:54:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:36.412 07:54:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:36.412 07:54:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:36.412 07:54:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:07:36.412 { 00:07:36.412 "name": "Malloc1", 00:07:36.412 "aliases": [ 00:07:36.412 "c698c2d7-3eda-4ffc-b32a-95f0581179bc" 00:07:36.412 ], 00:07:36.412 "product_name": "Malloc disk", 00:07:36.412 "block_size": 512, 00:07:36.412 "num_blocks": 1048576, 00:07:36.412 "uuid": "c698c2d7-3eda-4ffc-b32a-95f0581179bc", 00:07:36.412 "assigned_rate_limits": { 00:07:36.412 "rw_ios_per_sec": 0, 00:07:36.412 "rw_mbytes_per_sec": 0, 00:07:36.412 "r_mbytes_per_sec": 0, 00:07:36.412 "w_mbytes_per_sec": 0 00:07:36.412 }, 00:07:36.412 "claimed": true, 00:07:36.412 "claim_type": "exclusive_write", 00:07:36.412 "zoned": false, 00:07:36.412 "supported_io_types": { 00:07:36.412 "read": true, 00:07:36.412 "write": true, 00:07:36.412 "unmap": true, 00:07:36.412 "flush": true, 00:07:36.412 "reset": true, 00:07:36.412 "nvme_admin": false, 00:07:36.412 "nvme_io": false, 00:07:36.412 "nvme_io_md": false, 00:07:36.412 "write_zeroes": true, 00:07:36.412 "zcopy": true, 00:07:36.412 "get_zone_info": false, 00:07:36.412 "zone_management": false, 00:07:36.412 
"zone_append": false, 00:07:36.412 "compare": false, 00:07:36.412 "compare_and_write": false, 00:07:36.412 "abort": true, 00:07:36.412 "seek_hole": false, 00:07:36.412 "seek_data": false, 00:07:36.412 "copy": true, 00:07:36.412 "nvme_iov_md": false 00:07:36.412 }, 00:07:36.412 "memory_domains": [ 00:07:36.412 { 00:07:36.412 "dma_device_id": "system", 00:07:36.412 "dma_device_type": 1 00:07:36.412 }, 00:07:36.412 { 00:07:36.412 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:36.412 "dma_device_type": 2 00:07:36.412 } 00:07:36.413 ], 00:07:36.413 "driver_specific": {} 00:07:36.413 } 00:07:36.413 ]' 00:07:36.413 07:54:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:07:36.413 07:54:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:07:36.413 07:54:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:07:36.413 07:54:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:07:36.413 07:54:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:07:36.413 07:54:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:07:36.413 07:54:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:36.413 07:54:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:36.977 07:54:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:36.977 07:54:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:07:36.977 07:54:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:36.977 07:54:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:36.977 07:54:28 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:07:39.501 07:54:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:39.501 07:54:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:39.501 07:54:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:39.501 07:54:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:39.501 07:54:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:39.501 07:54:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:07:39.501 07:54:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:39.501 07:54:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 
00:07:39.501 07:54:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:39.501 07:54:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:39.501 07:54:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:39.501 07:54:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:39.501 07:54:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:39.501 07:54:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:39.501 07:54:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:39.501 07:54:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:39.501 07:54:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:39.501 07:54:30 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:40.065 07:54:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:40.995 07:54:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:07:40.995 07:54:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:40.995 07:54:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:40.995 07:54:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:40.995 07:54:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:40.995 ************************************ 00:07:40.995 START TEST filesystem_in_capsule_ext4 00:07:40.995 ************************************ 00:07:40.995 07:54:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:40.995 07:54:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:40.995 07:54:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:40.995 07:54:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:40.995 07:54:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:07:40.995 07:54:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:40.995 07:54:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:07:40.995 07:54:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local force 00:07:40.995 07:54:32 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:07:40.995 07:54:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:07:40.995 07:54:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:40.995 mke2fs 1.46.5 (30-Dec-2021) 00:07:40.995 Discarding device blocks: 0/522240 done 00:07:40.995 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:40.995 Filesystem UUID: 67341106-c5c8-4e54-a621-68d5a96953e8 00:07:40.995 Superblock backups stored on blocks: 00:07:40.995 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:40.995 00:07:40.995 Allocating group tables: 0/64 done 00:07:40.995 Writing inode tables: 0/64 done 00:07:41.933 Creating journal (8192 blocks): done 00:07:41.933 Writing superblocks and filesystem accounting information: 0/64 done 00:07:41.933 00:07:41.933 07:54:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@943 -- # return 0 00:07:41.933 07:54:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:42.191 07:54:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:42.191 07:54:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:07:42.191 07:54:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:42.191 07:54:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:07:42.191 07:54:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:42.191 07:54:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:42.191 07:54:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 1845828 00:07:42.191 07:54:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:42.191 07:54:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:42.191 07:54:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:42.191 07:54:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:42.191 00:07:42.191 real 0m1.208s 00:07:42.191 user 0m0.021s 00:07:42.191 sys 0m0.057s 00:07:42.191 07:54:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:42.191 07:54:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:42.191 ************************************ 00:07:42.191 END TEST filesystem_in_capsule_ext4 00:07:42.191 ************************************ 00:07:42.191 
07:54:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:42.191 07:54:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:42.191 07:54:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:42.191 07:54:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:42.191 07:54:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:42.191 ************************************ 00:07:42.191 START TEST filesystem_in_capsule_btrfs 00:07:42.191 ************************************ 00:07:42.191 07:54:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:42.191 07:54:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:42.191 07:54:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:42.191 07:54:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:42.191 07:54:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:07:42.191 07:54:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:42.191 07:54:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:07:42.191 07:54:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local force 00:07:42.191 07:54:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:07:42.191 07:54:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:07:42.191 07:54:33 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:42.449 btrfs-progs v6.6.2 00:07:42.449 See https://btrfs.readthedocs.io for more information. 00:07:42.449 00:07:42.449 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:42.449 NOTE: several default settings have changed in version 5.15, please make sure 00:07:42.449 this does not affect your deployments: 00:07:42.449 - DUP for metadata (-m dup) 00:07:42.449 - enabled no-holes (-O no-holes) 00:07:42.449 - enabled free-space-tree (-R free-space-tree) 00:07:42.449 00:07:42.449 Label: (null) 00:07:42.449 UUID: 1c8a5efa-c1c5-4079-b396-f0d42dce5ef1 00:07:42.449 Node size: 16384 00:07:42.449 Sector size: 4096 00:07:42.449 Filesystem size: 510.00MiB 00:07:42.449 Block group profiles: 00:07:42.449 Data: single 8.00MiB 00:07:42.449 Metadata: DUP 32.00MiB 00:07:42.449 System: DUP 8.00MiB 00:07:42.449 SSD detected: yes 00:07:42.449 Zoned device: no 00:07:42.449 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:42.449 Runtime features: free-space-tree 00:07:42.449 Checksum: crc32c 00:07:42.449 Number of devices: 1 00:07:42.449 Devices: 00:07:42.449 ID SIZE PATH 00:07:42.449 1 510.00MiB /dev/nvme0n1p1 00:07:42.449 00:07:42.449 07:54:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@943 -- # return 0 00:07:42.449 07:54:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:43.014 07:54:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:43.014 07:54:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:07:43.014 07:54:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:43.014 07:54:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:07:43.014 07:54:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:07:43.014 07:54:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:43.014 07:54:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 1845828 00:07:43.014 07:54:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:43.014 07:54:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:43.014 07:54:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:43.014 07:54:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:43.014 00:07:43.014 real 0m0.860s 00:07:43.014 user 0m0.017s 00:07:43.014 sys 0m0.116s 00:07:43.014 07:54:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:43.014 07:54:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:07:43.014 ************************************ 00:07:43.014 END TEST filesystem_in_capsule_btrfs 00:07:43.014 ************************************ 00:07:43.014 07:54:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule 
-- common/autotest_common.sh@1142 -- # return 0 00:07:43.014 07:54:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:07:43.014 07:54:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:43.014 07:54:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:43.014 07:54:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:43.014 ************************************ 00:07:43.015 START TEST filesystem_in_capsule_xfs 00:07:43.015 ************************************ 00:07:43.015 07:54:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:07:43.015 07:54:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:07:43.015 07:54:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:43.015 07:54:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:43.015 07:54:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:07:43.015 07:54:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:43.015 07:54:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local i=0 00:07:43.015 07:54:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local force 00:07:43.015 07:54:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:07:43.015 07:54:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # force=-f 00:07:43.015 07:54:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:43.272 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:43.272 = sectsz=512 attr=2, projid32bit=1 00:07:43.272 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:43.272 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:43.272 data = bsize=4096 blocks=130560, imaxpct=25 00:07:43.272 = sunit=0 swidth=0 blks 00:07:43.272 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:43.272 log =internal log bsize=4096 blocks=16384, version=2 00:07:43.272 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:43.272 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:44.224 Discarding blocks...Done. 
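That "Discarding blocks...Done." line closes the mkfs.xfs run driven by the make_filesystem helper traced above. Pieced together from the xtrace (the ext4-vs-other force-flag split, the local i/force variables, and the final return 0 are all visible there), the helper amounts to roughly the sketch below; the retry bound and the sleep are assumptions, not confirmed by this log:

    # Sketch of common/autotest_common.sh's make_filesystem, reconstructed from the xtrace.
    make_filesystem() {
        local fstype=$1
        local dev_name=$2
        local i=0
        local force
        if [ "$fstype" = ext4 ]; then
            force=-F    # ext4 spells the force flag differently
        else
            force=-f    # xfs and btrfs take -f, as seen with 'mkfs.xfs -f' above
        fi
        until "mkfs.$fstype" $force "$dev_name"; do
            (( ++i > 3 )) && return 1   # assumed retry cap; the real helper may differ
            sleep 1
        done
        return 0
    }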
00:07:44.224 07:54:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@943 -- # return 0 00:07:44.224 07:54:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:46.760 07:54:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:46.760 07:54:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:07:46.760 07:54:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:46.760 07:54:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:07:46.761 07:54:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:07:46.761 07:54:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:46.761 07:54:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 1845828 00:07:46.761 07:54:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:46.761 07:54:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:46.761 07:54:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:46.761 07:54:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:46.761 00:07:46.761 real 0m3.358s 00:07:46.761 user 0m0.014s 00:07:46.761 sys 0m0.065s 00:07:46.761 07:54:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:46.761 07:54:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:07:46.761 ************************************ 00:07:46.761 END TEST filesystem_in_capsule_xfs 00:07:46.761 ************************************ 00:07:46.761 07:54:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:46.761 07:54:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:46.761 07:54:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:07:46.761 07:54:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:46.761 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:46.761 07:54:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:46.761 07:54:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:07:46.761 07:54:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:46.761 07:54:38 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:46.761 07:54:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:46.761 07:54:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:46.761 07:54:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:07:46.761 07:54:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:46.761 07:54:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:46.761 07:54:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:46.761 07:54:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:46.761 07:54:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:46.761 07:54:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 1845828 00:07:46.761 07:54:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 1845828 ']' 00:07:46.761 07:54:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # kill -0 1845828 00:07:46.761 07:54:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # uname 00:07:46.761 07:54:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:46.761 07:54:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1845828 00:07:46.761 07:54:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:46.761 07:54:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:46.761 07:54:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1845828' 00:07:46.761 killing process with pid 1845828 00:07:46.761 07:54:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # kill 1845828 00:07:46.761 07:54:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # wait 1845828 00:07:47.328 07:54:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:07:47.328 00:07:47.328 real 0m11.366s 00:07:47.328 user 0m43.650s 00:07:47.328 sys 0m1.778s 00:07:47.328 07:54:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:47.328 07:54:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:47.328 ************************************ 00:07:47.328 END TEST nvmf_filesystem_in_capsule 00:07:47.328 ************************************ 00:07:47.328 07:54:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:07:47.328 07:54:38 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:07:47.328 07:54:38 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:07:47.328 07:54:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:07:47.328 07:54:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:47.328 07:54:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:07:47.328 07:54:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:47.328 07:54:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:47.328 rmmod nvme_tcp 00:07:47.328 rmmod nvme_fabrics 00:07:47.328 rmmod nvme_keyring 00:07:47.328 07:54:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:47.328 07:54:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:07:47.328 07:54:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:07:47.328 07:54:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:07:47.328 07:54:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:47.328 07:54:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:47.328 07:54:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:47.328 07:54:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:47.328 07:54:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:47.328 07:54:38 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:47.328 07:54:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:47.328 07:54:38 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:49.234 07:54:40 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:49.234 00:07:49.234 real 0m25.809s 00:07:49.234 user 1m22.836s 00:07:49.234 sys 0m4.994s 00:07:49.234 07:54:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:49.234 07:54:40 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:49.234 ************************************ 00:07:49.234 END TEST nvmf_filesystem 00:07:49.234 ************************************ 00:07:49.234 07:54:40 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:49.234 07:54:40 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:49.234 07:54:40 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:49.234 07:54:40 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:49.234 07:54:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:49.234 ************************************ 00:07:49.234 START TEST nvmf_target_discovery 00:07:49.234 ************************************ 00:07:49.234 07:54:40 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:07:49.492 * Looking for test storage... 
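The nvmf_filesystem suite above exits through nvmftestfini, and the order there matters: the host-side NVMe transport modules come out first, then the target network namespace is removed, then the leftover initiator address is flushed. Condensed from the trace (the explicit netns delete is an assumed stand-in for _remove_spdk_ns, whose body is hidden behind xtrace_disable_per_cmd):

    # Teardown order as traced in nvmf/common.sh's nvmftestfini/nvmf_tcp_fini.
    sync
    set +e                           # module unload is allowed to fail mid-loop
    modprobe -v -r nvme-tcp          # also pulls out nvme_fabrics and nvme_keyring
    modprobe -v -r nvme-fabrics
    set -e
    ip netns delete cvl_0_0_ns_spdk 2>/dev/null   # assumed equivalent of _remove_spdk_ns
    ip -4 addr flush cvl_0_1         # drop the initiator-side test address

The nvmf_target_discovery test starting here rebuilds that same environment from scratch before registering its own subsystems.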
00:07:49.493 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:49.493 07:54:40 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:49.493 07:54:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:07:49.493 07:54:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:49.493 07:54:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:49.493 07:54:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:49.493 07:54:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:49.493 07:54:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:49.493 07:54:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:49.493 07:54:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:49.493 07:54:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:49.493 07:54:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:49.493 07:54:40 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:49.493 07:54:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:49.493 07:54:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:49.493 07:54:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:49.493 07:54:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:49.493 07:54:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:49.493 07:54:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:49.493 07:54:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:49.493 07:54:41 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:49.493 07:54:41 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:49.493 07:54:41 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:49.493 07:54:41 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.493 07:54:41 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.493 07:54:41 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.493 07:54:41 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:07:49.493 07:54:41 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:49.493 07:54:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:07:49.493 07:54:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:49.493 07:54:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:49.493 07:54:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:49.493 07:54:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:49.493 07:54:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:49.493 07:54:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:49.493 07:54:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:49.493 07:54:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:49.493 07:54:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:07:49.493 07:54:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:07:49.493 07:54:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:07:49.493 07:54:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:07:49.493 07:54:41 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:07:49.493 07:54:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:49.493 07:54:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:49.493 07:54:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:07:49.493 07:54:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:49.493 07:54:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:49.493 07:54:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:49.493 07:54:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:49.493 07:54:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:49.493 07:54:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:49.493 07:54:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:49.493 07:54:41 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:07:49.493 07:54:41 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:51.393 07:54:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:51.393 07:54:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:07:51.393 07:54:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:51.393 07:54:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:51.393 07:54:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:51.393 07:54:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:51.393 07:54:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:51.393 07:54:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:07:51.393 07:54:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:51.393 07:54:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:07:51.393 07:54:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:07:51.393 07:54:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:07:51.393 07:54:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:07:51.393 07:54:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:07:51.393 07:54:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:07:51.393 07:54:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:51.393 07:54:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:51.393 07:54:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:51.393 07:54:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:51.393 07:54:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:51.393 07:54:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:51.393 07:54:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:51.393 07:54:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:51.393 07:54:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:51.393 07:54:43 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:51.393 07:54:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:51.393 07:54:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:51.393 07:54:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:51.393 07:54:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:51.393 07:54:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:51.393 07:54:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:51.393 07:54:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:51.393 07:54:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:51.393 07:54:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:51.393 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:51.393 07:54:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:51.393 07:54:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:51.393 07:54:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:51.393 07:54:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:51.393 07:54:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:51.393 07:54:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:51.393 07:54:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:51.393 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:51.393 07:54:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:51.393 07:54:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:51.393 07:54:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:51.393 07:54:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:51.393 07:54:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:51.393 07:54:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:51.393 07:54:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:51.393 07:54:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:51.393 07:54:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:51.393 07:54:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:51.393 07:54:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:51.393 07:54:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:51.393 07:54:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:51.393 07:54:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:51.393 07:54:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:51.393 07:54:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:51.393 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:51.393 07:54:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:51.393 07:54:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:51.393 07:54:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:51.393 07:54:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:51.393 07:54:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:51.393 07:54:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:51.393 07:54:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:51.393 07:54:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:51.393 07:54:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:51.393 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:51.393 07:54:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:51.393 07:54:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:51.393 07:54:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:07:51.393 07:54:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:51.393 07:54:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:51.393 07:54:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:51.393 07:54:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:51.393 07:54:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:51.393 07:54:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:51.393 07:54:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:51.394 07:54:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:51.394 07:54:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:51.394 07:54:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:51.394 07:54:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:51.394 07:54:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:51.394 07:54:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:51.394 07:54:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:51.394 07:54:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:51.394 07:54:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:51.394 07:54:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:51.394 07:54:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:51.394 07:54:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # 
ip link set cvl_0_1 up 00:07:51.394 07:54:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:51.652 07:54:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:51.652 07:54:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:51.652 07:54:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:51.652 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:51.652 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.245 ms 00:07:51.652 00:07:51.652 --- 10.0.0.2 ping statistics --- 00:07:51.652 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:51.652 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:07:51.652 07:54:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:51.652 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:51.652 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:07:51.652 00:07:51.652 --- 10.0.0.1 ping statistics --- 00:07:51.652 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:51.652 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:07:51.652 07:54:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:51.652 07:54:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:07:51.652 07:54:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:51.652 07:54:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:51.652 07:54:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:51.652 07:54:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:51.652 07:54:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:51.652 07:54:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:51.652 07:54:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:51.652 07:54:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:07:51.652 07:54:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:51.652 07:54:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:51.652 07:54:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:51.652 07:54:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=1849232 00:07:51.652 07:54:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 1849232 00:07:51.652 07:54:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:51.652 07:54:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@829 -- # '[' -z 1849232 ']' 00:07:51.652 07:54:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:51.652 07:54:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:51.652 07:54:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:07:51.652 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:51.652 07:54:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:51.652 07:54:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:51.652 [2024-07-13 07:54:43.258483] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:07:51.652 [2024-07-13 07:54:43.258578] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:51.652 EAL: No free 2048 kB hugepages reported on node 1 00:07:51.652 [2024-07-13 07:54:43.334759] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:51.910 [2024-07-13 07:54:43.432747] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:51.910 [2024-07-13 07:54:43.432805] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:51.910 [2024-07-13 07:54:43.432821] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:51.910 [2024-07-13 07:54:43.432835] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:51.910 [2024-07-13 07:54:43.432847] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:51.910 [2024-07-13 07:54:43.432928] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:51.911 [2024-07-13 07:54:43.432969] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:51.911 [2024-07-13 07:54:43.433027] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:51.911 [2024-07-13 07:54:43.433030] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.911 07:54:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:51.911 07:54:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@862 -- # return 0 00:07:51.911 07:54:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:51.911 07:54:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:51.911 07:54:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:51.911 07:54:43 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:51.911 07:54:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:51.911 07:54:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.911 07:54:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:51.911 [2024-07-13 07:54:43.572508] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:51.911 07:54:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.911 07:54:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:07:51.911 07:54:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:51.911 07:54:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 
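This bdev_null_create is the first pass of the test's setup loop; the trace below repeats it for Null2 through Null4. Reconstructed from the rpc_cmd lines in this log, target/discovery.sh builds its targets roughly as follows (the zero-padded serial format is inferred from the SPDK00000000000001..4 values that appear later in the RPC dump):

    # Per-subsystem setup loop as traced from target/discovery.sh.
    for i in $(seq 1 4); do
        rpc_cmd bdev_null_create "Null$i" 102400 512       # NULL_BDEV_SIZE / NULL_BLOCK_SIZE
        rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
            -a -s "SPDK0000000000000$i"                    # -a: allow any host
        rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
        rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
            -t tcp -a 10.0.0.2 -s 4420
    done
    rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430   # surfaces as Entry 5 in the discovery log below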
00:07:51.911 07:54:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.911 07:54:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:51.911 Null1 00:07:51.911 07:54:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.911 07:54:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:51.911 07:54:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.911 07:54:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:51.911 07:54:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.911 07:54:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:07:51.911 07:54:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.911 07:54:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:51.911 07:54:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.911 07:54:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:51.911 07:54:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.911 07:54:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:51.911 [2024-07-13 07:54:43.612794] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:51.911 07:54:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.911 07:54:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:51.911 07:54:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:07:51.911 07:54:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.911 07:54:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:51.911 Null2 00:07:51.911 07:54:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.911 07:54:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:07:51.911 07:54:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.911 07:54:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:51.911 07:54:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.911 07:54:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:07:51.911 07:54:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.911 07:54:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:51.911 07:54:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:51.911 07:54:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:07:51.911 07:54:43 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:51.911 07:54:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:52.169 07:54:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.169 07:54:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:52.169 07:54:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:07:52.169 07:54:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.169 07:54:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:52.169 Null3 00:07:52.169 07:54:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.169 07:54:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:07:52.169 07:54:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.169 07:54:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:52.169 07:54:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.169 07:54:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:07:52.169 07:54:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.169 07:54:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:52.169 07:54:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.169 07:54:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:07:52.169 07:54:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.169 07:54:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:52.169 07:54:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.169 07:54:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:07:52.169 07:54:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:07:52.169 07:54:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.169 07:54:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:52.169 Null4 00:07:52.169 07:54:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.169 07:54:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:07:52.169 07:54:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.169 07:54:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:52.169 07:54:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.169 07:54:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:07:52.169 07:54:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.169 07:54:43 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:52.169 07:54:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.169 07:54:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:07:52.169 07:54:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.169 07:54:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:52.169 07:54:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.169 07:54:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:52.169 07:54:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.169 07:54:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:52.169 07:54:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.169 07:54:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:07:52.169 07:54:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.169 07:54:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:52.169 07:54:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.169 07:54:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:07:52.427 00:07:52.427 Discovery Log Number of Records 6, Generation counter 6 00:07:52.427 =====Discovery Log Entry 0====== 00:07:52.427 trtype: tcp 00:07:52.427 adrfam: ipv4 00:07:52.427 subtype: current discovery subsystem 00:07:52.427 treq: not required 00:07:52.427 portid: 0 00:07:52.427 trsvcid: 4420 00:07:52.427 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:52.427 traddr: 10.0.0.2 00:07:52.427 eflags: explicit discovery connections, duplicate discovery information 00:07:52.427 sectype: none 00:07:52.427 =====Discovery Log Entry 1====== 00:07:52.427 trtype: tcp 00:07:52.427 adrfam: ipv4 00:07:52.427 subtype: nvme subsystem 00:07:52.427 treq: not required 00:07:52.428 portid: 0 00:07:52.428 trsvcid: 4420 00:07:52.428 subnqn: nqn.2016-06.io.spdk:cnode1 00:07:52.428 traddr: 10.0.0.2 00:07:52.428 eflags: none 00:07:52.428 sectype: none 00:07:52.428 =====Discovery Log Entry 2====== 00:07:52.428 trtype: tcp 00:07:52.428 adrfam: ipv4 00:07:52.428 subtype: nvme subsystem 00:07:52.428 treq: not required 00:07:52.428 portid: 0 00:07:52.428 trsvcid: 4420 00:07:52.428 subnqn: nqn.2016-06.io.spdk:cnode2 00:07:52.428 traddr: 10.0.0.2 00:07:52.428 eflags: none 00:07:52.428 sectype: none 00:07:52.428 =====Discovery Log Entry 3====== 00:07:52.428 trtype: tcp 00:07:52.428 adrfam: ipv4 00:07:52.428 subtype: nvme subsystem 00:07:52.428 treq: not required 00:07:52.428 portid: 0 00:07:52.428 trsvcid: 4420 00:07:52.428 subnqn: nqn.2016-06.io.spdk:cnode3 00:07:52.428 traddr: 10.0.0.2 00:07:52.428 eflags: none 00:07:52.428 sectype: none 00:07:52.428 =====Discovery Log Entry 4====== 00:07:52.428 trtype: tcp 00:07:52.428 adrfam: ipv4 00:07:52.428 subtype: nvme subsystem 00:07:52.428 treq: not required 
00:07:52.428 portid: 0 00:07:52.428 trsvcid: 4420 00:07:52.428 subnqn: nqn.2016-06.io.spdk:cnode4 00:07:52.428 traddr: 10.0.0.2 00:07:52.428 eflags: none 00:07:52.428 sectype: none 00:07:52.428 =====Discovery Log Entry 5====== 00:07:52.428 trtype: tcp 00:07:52.428 adrfam: ipv4 00:07:52.428 subtype: discovery subsystem referral 00:07:52.428 treq: not required 00:07:52.428 portid: 0 00:07:52.428 trsvcid: 4430 00:07:52.428 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:07:52.428 traddr: 10.0.0.2 00:07:52.428 eflags: none 00:07:52.428 sectype: none 00:07:52.428 07:54:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:07:52.428 Perform nvmf subsystem discovery via RPC 00:07:52.428 07:54:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:07:52.428 07:54:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.428 07:54:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:52.428 [ 00:07:52.428 { 00:07:52.428 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:07:52.428 "subtype": "Discovery", 00:07:52.428 "listen_addresses": [ 00:07:52.428 { 00:07:52.428 "trtype": "TCP", 00:07:52.428 "adrfam": "IPv4", 00:07:52.428 "traddr": "10.0.0.2", 00:07:52.428 "trsvcid": "4420" 00:07:52.428 } 00:07:52.428 ], 00:07:52.428 "allow_any_host": true, 00:07:52.428 "hosts": [] 00:07:52.428 }, 00:07:52.428 { 00:07:52.428 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:07:52.428 "subtype": "NVMe", 00:07:52.428 "listen_addresses": [ 00:07:52.428 { 00:07:52.428 "trtype": "TCP", 00:07:52.428 "adrfam": "IPv4", 00:07:52.428 "traddr": "10.0.0.2", 00:07:52.428 "trsvcid": "4420" 00:07:52.428 } 00:07:52.428 ], 00:07:52.428 "allow_any_host": true, 00:07:52.428 "hosts": [], 00:07:52.428 "serial_number": "SPDK00000000000001", 00:07:52.428 "model_number": "SPDK bdev Controller", 00:07:52.428 "max_namespaces": 32, 00:07:52.428 "min_cntlid": 1, 00:07:52.428 "max_cntlid": 65519, 00:07:52.428 "namespaces": [ 00:07:52.428 { 00:07:52.428 "nsid": 1, 00:07:52.428 "bdev_name": "Null1", 00:07:52.428 "name": "Null1", 00:07:52.428 "nguid": "67C186FFC2BF40A8859B00F8A5D8CF50", 00:07:52.428 "uuid": "67c186ff-c2bf-40a8-859b-00f8a5d8cf50" 00:07:52.428 } 00:07:52.428 ] 00:07:52.428 }, 00:07:52.428 { 00:07:52.428 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:07:52.428 "subtype": "NVMe", 00:07:52.428 "listen_addresses": [ 00:07:52.428 { 00:07:52.428 "trtype": "TCP", 00:07:52.428 "adrfam": "IPv4", 00:07:52.428 "traddr": "10.0.0.2", 00:07:52.428 "trsvcid": "4420" 00:07:52.428 } 00:07:52.428 ], 00:07:52.428 "allow_any_host": true, 00:07:52.428 "hosts": [], 00:07:52.428 "serial_number": "SPDK00000000000002", 00:07:52.428 "model_number": "SPDK bdev Controller", 00:07:52.428 "max_namespaces": 32, 00:07:52.428 "min_cntlid": 1, 00:07:52.428 "max_cntlid": 65519, 00:07:52.428 "namespaces": [ 00:07:52.428 { 00:07:52.428 "nsid": 1, 00:07:52.428 "bdev_name": "Null2", 00:07:52.428 "name": "Null2", 00:07:52.428 "nguid": "966389512C3646F38FD4D4C52BB0B0FF", 00:07:52.428 "uuid": "96638951-2c36-46f3-8fd4-d4c52bb0b0ff" 00:07:52.428 } 00:07:52.428 ] 00:07:52.428 }, 00:07:52.428 { 00:07:52.428 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:07:52.428 "subtype": "NVMe", 00:07:52.428 "listen_addresses": [ 00:07:52.428 { 00:07:52.428 "trtype": "TCP", 00:07:52.428 "adrfam": "IPv4", 00:07:52.428 "traddr": "10.0.0.2", 00:07:52.428 "trsvcid": "4420" 00:07:52.428 } 00:07:52.428 ], 00:07:52.428 "allow_any_host": true, 
00:07:52.428 "hosts": [], 00:07:52.428 "serial_number": "SPDK00000000000003", 00:07:52.428 "model_number": "SPDK bdev Controller", 00:07:52.428 "max_namespaces": 32, 00:07:52.428 "min_cntlid": 1, 00:07:52.428 "max_cntlid": 65519, 00:07:52.428 "namespaces": [ 00:07:52.428 { 00:07:52.428 "nsid": 1, 00:07:52.428 "bdev_name": "Null3", 00:07:52.428 "name": "Null3", 00:07:52.428 "nguid": "24E1F80C42BA4BA2967EA5E7420319DC", 00:07:52.428 "uuid": "24e1f80c-42ba-4ba2-967e-a5e7420319dc" 00:07:52.428 } 00:07:52.428 ] 00:07:52.428 }, 00:07:52.428 { 00:07:52.428 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:07:52.428 "subtype": "NVMe", 00:07:52.428 "listen_addresses": [ 00:07:52.428 { 00:07:52.428 "trtype": "TCP", 00:07:52.428 "adrfam": "IPv4", 00:07:52.428 "traddr": "10.0.0.2", 00:07:52.428 "trsvcid": "4420" 00:07:52.428 } 00:07:52.428 ], 00:07:52.428 "allow_any_host": true, 00:07:52.428 "hosts": [], 00:07:52.428 "serial_number": "SPDK00000000000004", 00:07:52.428 "model_number": "SPDK bdev Controller", 00:07:52.428 "max_namespaces": 32, 00:07:52.428 "min_cntlid": 1, 00:07:52.428 "max_cntlid": 65519, 00:07:52.428 "namespaces": [ 00:07:52.428 { 00:07:52.428 "nsid": 1, 00:07:52.428 "bdev_name": "Null4", 00:07:52.428 "name": "Null4", 00:07:52.428 "nguid": "748DD46775B540AB984042CD5B29442D", 00:07:52.428 "uuid": "748dd467-75b5-40ab-9840-42cd5b29442d" 00:07:52.428 } 00:07:52.428 ] 00:07:52.428 } 00:07:52.428 ] 00:07:52.428 07:54:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.428 07:54:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:07:52.428 07:54:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:52.428 07:54:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:52.428 07:54:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.428 07:54:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:52.428 07:54:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.428 07:54:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:07:52.428 07:54:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.428 07:54:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:52.428 07:54:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.428 07:54:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:52.428 07:54:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:07:52.428 07:54:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.428 07:54:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:52.428 07:54:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.428 07:54:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:07:52.428 07:54:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.428 07:54:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:52.428 07:54:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
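The nvmf_get_subsystems dump above is the RPC-side view of the same six discovery-log records, and the deletion loop now underway (it continues below through cnode4/Null4) unwinds the setup symmetrically before verifying that nothing is left behind:

    # Symmetric teardown and final check, as traced from target/discovery.sh.
    for i in $(seq 1 4); do
        rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"
        rpc_cmd bdev_null_delete "Null$i"
    done
    rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430
    check_bdevs=$(rpc_cmd bdev_get_bdevs | jq -r '.[].name')
    [ -n "$check_bdevs" ] && exit 1    # any surviving bdev fails the test (empty here, per the '[' -n '' ']' trace)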
00:07:52.428 07:54:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:52.428 07:54:43 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:07:52.428 07:54:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.428 07:54:43 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:52.428 07:54:44 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.428 07:54:44 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:07:52.428 07:54:44 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.428 07:54:44 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:52.428 07:54:44 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.428 07:54:44 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:07:52.428 07:54:44 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:07:52.428 07:54:44 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.428 07:54:44 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:52.428 07:54:44 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.428 07:54:44 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:07:52.428 07:54:44 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.428 07:54:44 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:52.428 07:54:44 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.428 07:54:44 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:07:52.428 07:54:44 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.428 07:54:44 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:52.428 07:54:44 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.428 07:54:44 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:07:52.428 07:54:44 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:07:52.428 07:54:44 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:52.428 07:54:44 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:52.428 07:54:44 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:52.428 07:54:44 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:07:52.428 07:54:44 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:07:52.428 07:54:44 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:07:52.428 07:54:44 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:07:52.428 07:54:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:52.428 07:54:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:07:52.428 07:54:44 nvmf_tcp.nvmf_target_discovery 
-- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:52.429 07:54:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:07:52.429 07:54:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:52.429 07:54:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:52.429 rmmod nvme_tcp 00:07:52.429 rmmod nvme_fabrics 00:07:52.429 rmmod nvme_keyring 00:07:52.429 07:54:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:52.429 07:54:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:07:52.429 07:54:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:07:52.429 07:54:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 1849232 ']' 00:07:52.429 07:54:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 1849232 00:07:52.429 07:54:44 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@948 -- # '[' -z 1849232 ']' 00:07:52.429 07:54:44 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # kill -0 1849232 00:07:52.429 07:54:44 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # uname 00:07:52.429 07:54:44 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:52.429 07:54:44 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1849232 00:07:52.429 07:54:44 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:52.429 07:54:44 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:52.429 07:54:44 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1849232' 00:07:52.429 killing process with pid 1849232 00:07:52.429 07:54:44 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@967 -- # kill 1849232 00:07:52.429 07:54:44 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@972 -- # wait 1849232 00:07:52.687 07:54:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:52.687 07:54:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:52.687 07:54:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:52.688 07:54:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:52.688 07:54:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:52.688 07:54:44 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:52.688 07:54:44 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:52.688 07:54:44 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:55.224 07:54:46 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:55.224 00:07:55.224 real 0m5.480s 00:07:55.224 user 0m4.562s 00:07:55.224 sys 0m1.859s 00:07:55.224 07:54:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:55.224 07:54:46 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:07:55.224 ************************************ 00:07:55.224 END TEST nvmf_target_discovery 00:07:55.224 ************************************ 00:07:55.224 07:54:46 nvmf_tcp -- common/autotest_common.sh@1142 
-- # return 0 00:07:55.224 07:54:46 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:55.224 07:54:46 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:55.224 07:54:46 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:55.224 07:54:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:55.224 ************************************ 00:07:55.224 START TEST nvmf_referrals 00:07:55.224 ************************************ 00:07:55.224 07:54:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:07:55.224 * Looking for test storage... 00:07:55.224 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:55.224 07:54:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:55.224 07:54:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:07:55.225 07:54:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:55.225 07:54:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:55.225 07:54:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:55.225 07:54:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:55.225 07:54:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:55.225 07:54:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:55.225 07:54:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:55.225 07:54:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:55.225 07:54:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:55.225 07:54:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:55.225 07:54:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:55.225 07:54:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:55.225 07:54:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:55.225 07:54:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:55.225 07:54:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:55.225 07:54:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:55.225 07:54:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:55.225 07:54:46 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:55.225 07:54:46 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:55.225 07:54:46 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:55.225 07:54:46 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.225 07:54:46 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.225 07:54:46 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.225 07:54:46 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:07:55.225 07:54:46 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:55.225 07:54:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:07:55.225 07:54:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:55.225 07:54:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:55.225 07:54:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:55.225 07:54:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:55.225 07:54:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:55.225 07:54:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:55.225 07:54:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:55.225 07:54:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:55.225 07:54:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:07:55.225 07:54:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:07:55.225 07:54:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 
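The three loopback addresses just defined (127.0.0.2 through 127.0.0.4) are registered as discovery referrals later in this test and then read back through the discovery service. A condensed sketch of that flow, assuming scripts/rpc.py and nvme-cli are available and reusing the jq filter the test itself applies:

# Register one referral per address on the referral port (4430), then
# confirm the discovery service at 10.0.0.2:8009 reports all three.
for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
  ./scripts/rpc.py nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
done
nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
  | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'

Removal is symmetric via nvmf_discovery_remove_referral, as exercised further down in the trace.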
00:07:55.225 07:54:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:07:55.225 07:54:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:07:55.225 07:54:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:07:55.225 07:54:46 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:07:55.225 07:54:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:55.225 07:54:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:55.225 07:54:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:55.225 07:54:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:55.225 07:54:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:55.225 07:54:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:55.225 07:54:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:55.225 07:54:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:55.225 07:54:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:55.225 07:54:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:55.225 07:54:46 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:07:55.225 07:54:46 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:57.127 07:54:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:57.127 07:54:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:07:57.127 07:54:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:57.127 07:54:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:57.127 07:54:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:57.127 07:54:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:57.127 07:54:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:57.127 07:54:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:07:57.127 07:54:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:57.127 07:54:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:07:57.127 07:54:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:07:57.127 07:54:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:07:57.127 07:54:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:07:57.127 07:54:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:07:57.127 07:54:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:07:57.127 07:54:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:57.127 07:54:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:57.127 07:54:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:57.127 07:54:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:57.127 07:54:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:57.127 07:54:48 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:57.127 07:54:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:57.127 07:54:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:57.127 07:54:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:57.127 07:54:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:57.127 07:54:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:57.127 07:54:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:57.127 07:54:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:57.127 07:54:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:57.127 07:54:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:57.127 07:54:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:57.127 07:54:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:57.127 07:54:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:57.127 07:54:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:57.127 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:57.127 07:54:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:57.127 07:54:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:57.127 07:54:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:57.127 07:54:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:57.127 07:54:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:57.127 07:54:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:57.127 07:54:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:57.127 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:57.127 07:54:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:57.127 07:54:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:57.127 07:54:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:57.127 07:54:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:57.127 07:54:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:57.127 07:54:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:57.127 07:54:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:57.127 07:54:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:57.127 07:54:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:57.127 07:54:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:57.127 07:54:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:57.127 07:54:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:57.127 07:54:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:57.127 07:54:48 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:57.127 07:54:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:57.127 07:54:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:57.127 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:57.127 07:54:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:57.127 07:54:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:57.127 07:54:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:57.127 07:54:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:57.127 07:54:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:57.127 07:54:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:57.127 07:54:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:57.127 07:54:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:57.127 07:54:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:57.127 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:57.127 07:54:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:57.127 07:54:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:57.127 07:54:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:07:57.127 07:54:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:57.127 07:54:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:57.127 07:54:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:57.127 07:54:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:57.127 07:54:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:57.127 07:54:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:57.127 07:54:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:57.127 07:54:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:57.127 07:54:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:57.127 07:54:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:57.127 07:54:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:57.127 07:54:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:57.128 07:54:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:57.128 07:54:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:57.128 07:54:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:57.128 07:54:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:57.128 07:54:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:57.128 07:54:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:57.128 07:54:48 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:57.128 07:54:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:57.128 07:54:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:57.128 07:54:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:57.128 07:54:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:57.128 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:57.128 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.199 ms 00:07:57.128 00:07:57.128 --- 10.0.0.2 ping statistics --- 00:07:57.128 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:57.128 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:07:57.128 07:54:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:57.128 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:57.128 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.161 ms 00:07:57.128 00:07:57.128 --- 10.0.0.1 ping statistics --- 00:07:57.128 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:57.128 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:07:57.128 07:54:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:57.128 07:54:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:07:57.128 07:54:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:57.128 07:54:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:57.128 07:54:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:57.128 07:54:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:57.128 07:54:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:57.128 07:54:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:57.128 07:54:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:57.128 07:54:48 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:07:57.128 07:54:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:57.128 07:54:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:57.128 07:54:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:57.128 07:54:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=1851324 00:07:57.128 07:54:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:57.128 07:54:48 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 1851324 00:07:57.128 07:54:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@829 -- # '[' -z 1851324 ']' 00:07:57.128 07:54:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:57.128 07:54:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:57.128 07:54:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:57.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:57.128 07:54:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:57.128 07:54:48 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:57.128 [2024-07-13 07:54:48.806222] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:07:57.128 [2024-07-13 07:54:48.806304] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:57.128 EAL: No free 2048 kB hugepages reported on node 1 00:07:57.386 [2024-07-13 07:54:48.872789] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:57.386 [2024-07-13 07:54:48.964143] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:57.386 [2024-07-13 07:54:48.964205] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:57.386 [2024-07-13 07:54:48.964223] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:57.386 [2024-07-13 07:54:48.964237] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:57.386 [2024-07-13 07:54:48.964249] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:57.386 [2024-07-13 07:54:48.964329] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:57.386 [2024-07-13 07:54:48.964382] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:57.386 [2024-07-13 07:54:48.964505] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:57.386 [2024-07-13 07:54:48.964508] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.386 07:54:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:57.386 07:54:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@862 -- # return 0 00:07:57.386 07:54:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:57.386 07:54:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:57.386 07:54:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:57.386 07:54:49 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:57.386 07:54:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:57.644 07:54:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.644 07:54:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:57.644 [2024-07-13 07:54:49.123574] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:57.644 07:54:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.644 07:54:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:07:57.644 07:54:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.644 07:54:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:57.644 [2024-07-13 07:54:49.135775] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 8009 *** 00:07:57.644 07:54:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.644 07:54:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:07:57.644 07:54:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.644 07:54:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:57.644 07:54:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.644 07:54:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:07:57.644 07:54:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.644 07:54:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:57.644 07:54:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.644 07:54:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:07:57.644 07:54:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.644 07:54:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:57.644 07:54:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.644 07:54:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:57.644 07:54:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.644 07:54:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:07:57.644 07:54:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:57.644 07:54:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.644 07:54:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:07:57.644 07:54:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:07:57.644 07:54:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:57.644 07:54:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:57.644 07:54:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:57.644 07:54:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.644 07:54:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:57.644 07:54:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:57.644 07:54:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.644 07:54:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:57.644 07:54:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:57.644 07:54:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:07:57.644 07:54:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:57.644 07:54:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:57.644 07:54:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:57.644 07:54:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:57.644 07:54:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:57.644 07:54:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:07:57.644 07:54:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:07:57.644 07:54:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:07:57.644 07:54:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.644 07:54:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:57.902 07:54:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.902 07:54:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:07:57.902 07:54:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.902 07:54:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:57.902 07:54:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.902 07:54:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:07:57.902 07:54:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.902 07:54:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:57.902 07:54:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.902 07:54:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:57.902 07:54:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:07:57.902 07:54:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.902 07:54:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:57.902 07:54:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.902 07:54:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:07:57.902 07:54:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:07:57.902 07:54:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:57.902 07:54:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:57.902 07:54:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:57.902 07:54:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:57.902 07:54:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:57.902 07:54:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:07:57.902 07:54:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:07:57.902 07:54:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 
127.0.0.2 -s 4430 -n discovery 00:07:57.902 07:54:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.902 07:54:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:57.902 07:54:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.902 07:54:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:57.902 07:54:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.902 07:54:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:57.902 07:54:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.902 07:54:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:07:57.902 07:54:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:57.902 07:54:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:57.902 07:54:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:57.902 07:54:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.902 07:54:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:57.902 07:54:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:57.902 07:54:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.902 07:54:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:07:57.902 07:54:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:57.902 07:54:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:07:57.902 07:54:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:57.902 07:54:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:57.902 07:54:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:57.902 07:54:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:57.902 07:54:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:58.160 07:54:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:07:58.160 07:54:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:07:58.160 07:54:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:07:58.160 07:54:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:07:58.160 07:54:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:58.160 07:54:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:58.160 07:54:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:58.160 07:54:49 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:07:58.160 07:54:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:07:58.160 07:54:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:58.160 07:54:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:07:58.160 07:54:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:58.160 07:54:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:58.160 07:54:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:58.160 07:54:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:07:58.160 07:54:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:58.161 07:54:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:58.161 07:54:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:58.161 07:54:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:07:58.161 07:54:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:07:58.161 07:54:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:58.161 07:54:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:07:58.161 07:54:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:58.161 07:54:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:07:58.161 07:54:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:58.161 07:54:49 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:58.418 07:54:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:07:58.418 07:54:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:58.418 07:54:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:07:58.418 07:54:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:58.418 07:54:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:58.418 07:54:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:58.418 07:54:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:58.418 07:54:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:58.418 07:54:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:07:58.418 07:54:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:07:58.418 07:54:49 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:07:58.418 07:54:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:07:58.418 07:54:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:07:58.418 07:54:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:58.418 07:54:49 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:07:58.418 07:54:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:07:58.418 07:54:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:07:58.418 07:54:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:07:58.418 07:54:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:07:58.418 07:54:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:58.418 07:54:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:07:58.676 07:54:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:07:58.676 07:54:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:07:58.676 07:54:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:58.676 07:54:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:58.676 07:54:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:58.676 07:54:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:07:58.676 07:54:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:07:58.676 07:54:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:58.676 07:54:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:07:58.676 07:54:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:58.676 07:54:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:07:58.676 07:54:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:07:58.676 07:54:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:07:58.676 07:54:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:07:58.676 07:54:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:07:58.676 07:54:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:07:58.676 07:54:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:07:58.676 
07:54:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:07:58.676 07:54:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:07:58.676 07:54:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:07:58.676 07:54:50 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:07:58.676 07:54:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:58.676 07:54:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:07:58.676 07:54:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:58.676 07:54:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:07:58.676 07:54:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:58.676 07:54:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:58.676 rmmod nvme_tcp 00:07:58.933 rmmod nvme_fabrics 00:07:58.933 rmmod nvme_keyring 00:07:58.933 07:54:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:58.933 07:54:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:07:58.933 07:54:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:07:58.933 07:54:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 1851324 ']' 00:07:58.933 07:54:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 1851324 00:07:58.933 07:54:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@948 -- # '[' -z 1851324 ']' 00:07:58.933 07:54:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # kill -0 1851324 00:07:58.933 07:54:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # uname 00:07:58.933 07:54:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:58.933 07:54:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1851324 00:07:58.933 07:54:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:58.933 07:54:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:58.933 07:54:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1851324' 00:07:58.933 killing process with pid 1851324 00:07:58.933 07:54:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@967 -- # kill 1851324 00:07:58.933 07:54:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@972 -- # wait 1851324 00:07:59.190 07:54:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:59.190 07:54:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:59.190 07:54:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:59.190 07:54:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:59.190 07:54:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:59.190 07:54:50 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:59.190 07:54:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:59.190 07:54:50 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:01.145 07:54:52 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:01.145 00:08:01.145 real 0m6.276s 00:08:01.145 user 0m8.336s 00:08:01.145 sys 0m2.110s 00:08:01.145 07:54:52 
nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:01.145 07:54:52 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:01.145 ************************************ 00:08:01.145 END TEST nvmf_referrals 00:08:01.145 ************************************ 00:08:01.145 07:54:52 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:01.145 07:54:52 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:01.145 07:54:52 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:01.145 07:54:52 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:01.145 07:54:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:01.145 ************************************ 00:08:01.145 START TEST nvmf_connect_disconnect 00:08:01.145 ************************************ 00:08:01.145 07:54:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:01.145 * Looking for test storage... 00:08:01.145 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:01.145 07:54:52 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:01.145 07:54:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:08:01.145 07:54:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:01.145 07:54:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:01.145 07:54:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:01.145 07:54:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:01.145 07:54:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:01.145 07:54:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:01.145 07:54:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:01.145 07:54:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:01.145 07:54:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:01.145 07:54:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:01.403 07:54:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:01.403 07:54:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:01.403 07:54:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:01.403 07:54:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:01.403 07:54:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:01.403 07:54:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:01.403 07:54:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:01.403 07:54:52 
nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:01.403 07:54:52 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:01.403 07:54:52 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:01.403 07:54:52 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.403 07:54:52 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.403 07:54:52 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.403 07:54:52 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:08:01.403 07:54:52 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.403 07:54:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:08:01.403 07:54:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:01.403 07:54:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:01.403 07:54:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:01.403 07:54:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:01.403 07:54:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:01.403 07:54:52 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:01.403 07:54:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:01.403 07:54:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:01.403 07:54:52 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:01.403 07:54:52 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:01.403 07:54:52 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:08:01.403 07:54:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:01.403 07:54:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:01.403 07:54:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:01.403 07:54:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:01.403 07:54:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:01.403 07:54:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:01.403 07:54:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:01.403 07:54:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:01.403 07:54:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:01.403 07:54:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:01.403 07:54:52 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:08:01.404 07:54:52 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:03.302 07:54:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:03.302 07:54:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:08:03.302 07:54:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:03.302 07:54:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:03.302 07:54:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:03.302 07:54:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:03.302 07:54:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:03.302 07:54:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:08:03.302 07:54:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:03.302 07:54:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:08:03.302 07:54:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:08:03.302 07:54:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:08:03.302 07:54:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:08:03.302 07:54:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:08:03.302 07:54:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:08:03.302 07:54:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:03.302 07:54:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:03.302 07:54:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:03.302 07:54:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:03.302 07:54:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:03.302 07:54:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:03.302 07:54:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:03.303 07:54:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:03.303 07:54:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:03.303 07:54:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:03.303 07:54:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:03.303 07:54:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:03.303 07:54:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:03.303 07:54:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:03.303 07:54:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:03.303 07:54:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:03.303 07:54:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:03.303 07:54:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:03.303 07:54:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:03.303 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:03.303 07:54:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:03.303 07:54:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:03.303 07:54:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:03.303 07:54:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:03.303 07:54:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:03.303 07:54:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:03.303 07:54:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:03.303 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:03.303 07:54:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:03.303 07:54:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:03.303 07:54:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:03.303 07:54:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:03.303 07:54:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:03.303 07:54:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:03.303 07:54:54 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:03.303 07:54:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:03.303 07:54:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:03.303 07:54:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:03.303 07:54:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:03.303 07:54:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:03.303 07:54:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:03.303 07:54:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:03.303 07:54:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:03.303 07:54:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:03.303 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:03.303 07:54:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:03.303 07:54:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:03.303 07:54:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:03.303 07:54:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:03.303 07:54:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:03.303 07:54:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:03.303 07:54:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:03.303 07:54:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:03.303 07:54:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:03.303 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:03.303 07:54:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:03.303 07:54:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:03.303 07:54:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:08:03.303 07:54:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:03.303 07:54:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:03.303 07:54:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:03.303 07:54:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:03.303 07:54:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:03.303 07:54:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:03.303 07:54:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:03.303 07:54:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:03.303 07:54:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:03.303 07:54:54 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:03.303 07:54:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:03.303 07:54:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:03.303 07:54:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:03.303 07:54:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:03.303 07:54:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:03.303 07:54:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:03.303 07:54:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:03.303 07:54:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:03.303 07:54:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:03.303 07:54:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:03.303 07:54:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:03.303 07:54:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:03.303 07:54:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:03.303 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:03.303 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.257 ms 00:08:03.303 00:08:03.303 --- 10.0.0.2 ping statistics --- 00:08:03.303 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:03.303 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:08:03.303 07:54:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:03.303 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
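A note on the bring-up traced above: nvmf_tcp_init splits the two E810 ports so the target listens from inside a network namespace while the initiator stays in the root namespace. A minimal standalone sketch of the same commands, assuming the cvl_0_0/cvl_0_1 interface names and 10.0.0.0/24 addressing from this run:

    ip netns add cvl_0_0_ns_spdk                 # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk    # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # default NVMe/TCP port
    ping -c 1 10.0.0.2                           # initiator -> target sanity check, as traced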
00:08:03.303 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.179 ms 00:08:03.303 00:08:03.303 --- 10.0.0.1 ping statistics --- 00:08:03.303 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:03.303 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:08:03.303 07:54:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:03.303 07:54:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:08:03.303 07:54:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:03.303 07:54:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:03.303 07:54:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:03.303 07:54:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:03.303 07:54:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:03.303 07:54:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:03.303 07:54:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:03.303 07:54:54 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:08:03.303 07:54:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:03.303 07:54:54 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:03.303 07:54:54 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:03.303 07:54:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=1853613 00:08:03.303 07:54:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:03.303 07:54:54 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 1853613 00:08:03.303 07:54:54 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@829 -- # '[' -z 1853613 ']' 00:08:03.303 07:54:54 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:03.303 07:54:54 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:03.303 07:54:54 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:03.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:03.303 07:54:54 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:03.303 07:54:54 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:03.303 [2024-07-13 07:54:55.004753] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
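With both pings succeeding, the target launch traced above and the rpc_cmd configuration that follows it reduce to the sequence below. This is a sketch, not the literal script: rpc_cmd is the autotest wrapper around scripts/rpc.py on /var/tmp/spdk.sock, and relative paths stand in for the full Jenkins workspace paths.

    # Start nvmf_tgt inside the target namespace (shm id 0, all trace groups, 4 cores)
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    # Configure it the way connect_disconnect.sh does next:
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
    scripts/rpc.py bdev_malloc_create 64 512      # MALLOC_BDEV_SIZE x MALLOC_BLOCK_SIZE -> Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # Each "disconnected 1 controller(s)" line further down is one of the
    # num_iterations=100 loop passes (NVME_CONNECT='nvme connect -i 8'):
    nvme connect -i 8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1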
00:08:03.303 [2024-07-13 07:54:55.004846] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:03.562 EAL: No free 2048 kB hugepages reported on node 1 00:08:03.562 [2024-07-13 07:54:55.081651] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:03.562 [2024-07-13 07:54:55.181015] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:03.562 [2024-07-13 07:54:55.181082] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:03.562 [2024-07-13 07:54:55.181098] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:03.562 [2024-07-13 07:54:55.181111] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:03.562 [2024-07-13 07:54:55.181123] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:03.562 [2024-07-13 07:54:55.181206] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:03.562 [2024-07-13 07:54:55.181265] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:03.562 [2024-07-13 07:54:55.181318] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:03.562 [2024-07-13 07:54:55.181321] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.820 07:54:55 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:03.820 07:54:55 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # return 0 00:08:03.820 07:54:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:03.820 07:54:55 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:03.820 07:54:55 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:03.820 07:54:55 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:03.820 07:54:55 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:03.820 07:54:55 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:03.820 07:54:55 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:03.820 [2024-07-13 07:54:55.337719] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:03.820 07:54:55 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:03.820 07:54:55 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:08:03.820 07:54:55 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:03.820 07:54:55 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:03.820 07:54:55 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:03.820 07:54:55 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:08:03.820 07:54:55 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:03.820 07:54:55 
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:03.820 07:54:55 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:03.820 07:54:55 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:03.820 07:54:55 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:03.820 07:54:55 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:03.820 07:54:55 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:03.820 07:54:55 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:03.820 07:54:55 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:03.820 07:54:55 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:03.820 07:54:55 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:03.820 [2024-07-13 07:54:55.398994] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:03.820 07:54:55 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:03.820 07:54:55 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:08:03.820 07:54:55 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:08:03.820 07:54:55 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:08:03.820 07:54:55 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:08:06.348 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:08.871 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:10.767 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:13.289 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:15.811 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:17.706 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:20.230 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:22.157 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:24.681 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:27.207 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:29.105 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:31.631 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:34.157 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:36.055 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:38.583 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:41.110 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:43.038 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:45.565 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:48.091 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:49.987 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:52.513 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:55.042 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:56.942 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:59.470 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:01.999 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:03.896 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:06.422 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:08.318 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:10.842 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:13.364 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:15.260 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:17.787 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:20.314 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:22.251 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:24.776 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:26.673 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:29.198 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:31.722 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:33.617 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:36.143 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:38.669 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:40.565 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:43.118 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:45.645 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:48.171 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:50.070 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:52.591 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:54.487 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:57.010 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:58.905 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:01.428 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:03.985 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:05.882 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:08.405 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:10.925 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:12.818 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:15.341 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:17.862 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:19.757 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:22.311 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:24.844 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:26.739 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:29.264 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:31.788 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:33.684 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:36.212 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:38.735 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:40.628 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:43.196 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:45.718 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:47.640 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:50.166 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:10:52.690 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:54.588 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:57.114 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:59.641 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:01.536 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:04.120 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:06.643 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:08.543 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:11.069 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:13.591 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:16.112 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:18.008 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:20.532 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:22.426 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:24.986 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:27.510 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:29.404 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:31.927 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:34.454 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:36.353 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:38.878 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:41.403 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:43.298 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:45.857 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:48.380 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:50.902 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:52.803 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:55.329 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:55.329 07:58:46 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:11:55.329 07:58:46 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:11:55.329 07:58:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:55.329 07:58:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:11:55.329 07:58:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:55.329 07:58:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:11:55.329 07:58:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:55.329 07:58:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:55.329 rmmod nvme_tcp 00:11:55.329 rmmod nvme_fabrics 00:11:55.329 rmmod nvme_keyring 00:11:55.329 07:58:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:55.329 07:58:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:11:55.329 07:58:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:11:55.329 07:58:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 1853613 ']' 00:11:55.329 07:58:46 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 1853613 00:11:55.329 07:58:46 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@948 -- # '[' -z 
1853613 ']' 00:11:55.329 07:58:46 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # kill -0 1853613 00:11:55.329 07:58:46 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # uname 00:11:55.329 07:58:46 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:55.329 07:58:46 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1853613 00:11:55.329 07:58:46 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:55.329 07:58:46 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:55.329 07:58:46 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1853613' 00:11:55.329 killing process with pid 1853613 00:11:55.329 07:58:46 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # kill 1853613 00:11:55.329 07:58:46 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # wait 1853613 00:11:55.588 07:58:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:55.588 07:58:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:55.588 07:58:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:55.588 07:58:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:55.588 07:58:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:55.588 07:58:47 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:55.588 07:58:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:55.588 07:58:47 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:57.492 07:58:49 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:57.492 00:11:57.492 real 3m56.324s 00:11:57.492 user 14m59.889s 00:11:57.492 sys 0m34.996s 00:11:57.492 07:58:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:57.492 07:58:49 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:57.492 ************************************ 00:11:57.492 END TEST nvmf_connect_disconnect 00:11:57.492 ************************************ 00:11:57.492 07:58:49 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:57.492 07:58:49 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:57.492 07:58:49 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:57.492 07:58:49 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:57.492 07:58:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:57.492 ************************************ 00:11:57.492 START TEST nvmf_multitarget 00:11:57.492 ************************************ 00:11:57.492 07:58:49 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:57.492 * Looking for test storage... 
00:11:57.492 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:57.492 07:58:49 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:57.492 07:58:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:11:57.492 07:58:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:57.492 07:58:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:57.492 07:58:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:57.492 07:58:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:57.492 07:58:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:57.492 07:58:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:57.492 07:58:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:57.492 07:58:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:57.492 07:58:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:57.750 07:58:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:57.750 07:58:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:57.750 07:58:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:57.750 07:58:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:57.750 07:58:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:57.750 07:58:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:57.750 07:58:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:57.750 07:58:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:57.750 07:58:49 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:57.750 07:58:49 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:57.750 07:58:49 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:57.750 07:58:49 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:57.750 07:58:49 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:57.750 07:58:49 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:57.750 07:58:49 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:11:57.750 07:58:49 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:57.750 07:58:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:11:57.750 07:58:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:57.750 07:58:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:57.750 07:58:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:57.750 07:58:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:57.750 07:58:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:57.750 07:58:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:57.750 07:58:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:57.750 07:58:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:57.750 07:58:49 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:11:57.751 07:58:49 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:11:57.751 07:58:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:57.751 07:58:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:57.751 07:58:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:57.751 07:58:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:57.751 07:58:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:57.751 07:58:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
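For reference, the PCI scan that follows (repeating the one from nvmf_connect_disconnect) buckets NICs by vendor:device ID; 0x8086:0x159b is an Intel E810 port bound to the ice driver, and the netdev name is read from sysfs. A sketch of the same lookup done by hand, using the BDF 0000:0a:00.0 reported on this host:

    lspci -d 8086:159b                           # list the E810 ports the helper picks up
    ls /sys/bus/pci/devices/0000:0a:00.0/net     # -> cvl_0_0, used as NVMF_TARGET_INTERFACE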
00:11:57.751 07:58:49 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:57.751 07:58:49 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:57.751 07:58:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:57.751 07:58:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:57.751 07:58:49 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:11:57.751 07:58:49 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:59.651 07:58:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:59.651 07:58:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:11:59.651 07:58:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:59.651 07:58:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:59.651 07:58:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:59.651 07:58:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:59.651 07:58:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:59.651 07:58:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:11:59.651 07:58:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:59.651 07:58:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:11:59.651 07:58:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:11:59.651 07:58:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:11:59.651 07:58:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:11:59.651 07:58:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:11:59.651 07:58:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:11:59.651 07:58:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:59.651 07:58:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:59.651 07:58:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:59.651 07:58:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:59.651 07:58:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:59.651 07:58:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:59.651 07:58:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:59.651 07:58:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:59.651 07:58:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:59.651 07:58:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:59.651 07:58:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:59.651 07:58:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:59.651 07:58:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:59.651 07:58:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ 
e810 == mlx5 ]] 00:11:59.651 07:58:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:59.651 07:58:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:59.651 07:58:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:59.651 07:58:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:59.651 07:58:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:59.651 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:59.651 07:58:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:59.651 07:58:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:59.651 07:58:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:59.651 07:58:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:59.651 07:58:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:59.651 07:58:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:59.651 07:58:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:59.651 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:59.651 07:58:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:59.651 07:58:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:59.651 07:58:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:59.651 07:58:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:59.651 07:58:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:59.651 07:58:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:59.651 07:58:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:59.651 07:58:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:59.651 07:58:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:59.651 07:58:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:59.651 07:58:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:59.651 07:58:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:59.651 07:58:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:59.651 07:58:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:59.651 07:58:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:59.651 07:58:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:59.651 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:59.651 07:58:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:59.651 07:58:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:59.651 07:58:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:59.651 07:58:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:59.651 07:58:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:11:59.651 07:58:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:59.651 07:58:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:59.651 07:58:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:59.651 07:58:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:59.651 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:59.651 07:58:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:59.651 07:58:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:59.651 07:58:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:11:59.651 07:58:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:59.651 07:58:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:59.651 07:58:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:59.651 07:58:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:59.651 07:58:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:59.651 07:58:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:59.651 07:58:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:59.651 07:58:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:59.651 07:58:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:59.651 07:58:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:59.651 07:58:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:59.651 07:58:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:59.651 07:58:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:59.651 07:58:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:59.651 07:58:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:59.651 07:58:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:59.651 07:58:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:59.651 07:58:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:59.651 07:58:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:59.651 07:58:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:59.651 07:58:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:59.651 07:58:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:59.651 07:58:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:59.651 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:59.651 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.157 ms 00:11:59.651 00:11:59.651 --- 10.0.0.2 ping statistics --- 00:11:59.651 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:59.651 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:11:59.651 07:58:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:59.651 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:59.651 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:11:59.651 00:11:59.652 --- 10.0.0.1 ping statistics --- 00:11:59.652 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:59.652 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:11:59.652 07:58:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:59.652 07:58:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:11:59.652 07:58:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:59.652 07:58:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:59.652 07:58:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:59.652 07:58:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:59.652 07:58:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:59.652 07:58:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:59.652 07:58:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:59.652 07:58:51 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:11:59.652 07:58:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:59.652 07:58:51 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:59.652 07:58:51 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:59.652 07:58:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=1884701 00:11:59.652 07:58:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 1884701 00:11:59.652 07:58:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:59.652 07:58:51 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@829 -- # '[' -z 1884701 ']' 00:11:59.652 07:58:51 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:59.652 07:58:51 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:59.652 07:58:51 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:59.652 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:59.652 07:58:51 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:59.652 07:58:51 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:59.909 [2024-07-13 07:58:51.387660] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
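The test body that follows drives SPDK's multi-target RPCs via test/nvmf/target/multitarget_rpc.py: count targets, create two more, delete them, and verify the count each time with jq. A sketch of the equivalent manual sequence (whether -s 32 is a per-target subsystem cap is an assumption; the flag values themselves are taken from the trace):

    multitarget_rpc.py nvmf_get_targets | jq length    # expect 1: only the default target
    multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32
    multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32
    multitarget_rpc.py nvmf_get_targets | jq length    # expect 3
    multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1
    multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2
    multitarget_rpc.py nvmf_get_targets | jq length    # back to 1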
00:11:59.909 [2024-07-13 07:58:51.387752] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:59.909 EAL: No free 2048 kB hugepages reported on node 1 00:11:59.909 [2024-07-13 07:58:51.452718] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:59.909 [2024-07-13 07:58:51.542810] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:59.909 [2024-07-13 07:58:51.542892] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:59.909 [2024-07-13 07:58:51.542907] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:59.909 [2024-07-13 07:58:51.542933] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:59.909 [2024-07-13 07:58:51.542951] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:59.909 [2024-07-13 07:58:51.543004] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:59.909 [2024-07-13 07:58:51.543070] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:59.909 [2024-07-13 07:58:51.543121] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:59.909 [2024-07-13 07:58:51.543123] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:00.166 07:58:51 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:00.166 07:58:51 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@862 -- # return 0 00:12:00.166 07:58:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:00.166 07:58:51 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:00.166 07:58:51 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:00.166 07:58:51 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:00.166 07:58:51 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:00.166 07:58:51 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:00.166 07:58:51 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:12:00.166 07:58:51 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:00.166 07:58:51 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:00.423 "nvmf_tgt_1" 00:12:00.423 07:58:51 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:00.423 "nvmf_tgt_2" 00:12:00.423 07:58:52 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:00.423 07:58:52 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:12:00.423 07:58:52 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 
'!=' 3 ']' 00:12:00.423 07:58:52 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:00.680 true 00:12:00.681 07:58:52 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:00.681 true 00:12:00.681 07:58:52 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:00.681 07:58:52 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:12:00.938 07:58:52 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:00.938 07:58:52 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:00.938 07:58:52 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:12:00.938 07:58:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:00.938 07:58:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:12:00.938 07:58:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:00.938 07:58:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:12:00.938 07:58:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:00.938 07:58:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:00.938 rmmod nvme_tcp 00:12:00.938 rmmod nvme_fabrics 00:12:00.938 rmmod nvme_keyring 00:12:00.938 07:58:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:00.938 07:58:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:12:00.938 07:58:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:12:00.938 07:58:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 1884701 ']' 00:12:00.938 07:58:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 1884701 00:12:00.938 07:58:52 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@948 -- # '[' -z 1884701 ']' 00:12:00.938 07:58:52 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # kill -0 1884701 00:12:00.938 07:58:52 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # uname 00:12:00.938 07:58:52 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:00.938 07:58:52 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1884701 00:12:00.938 07:58:52 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:00.938 07:58:52 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:00.938 07:58:52 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1884701' 00:12:00.938 killing process with pid 1884701 00:12:00.938 07:58:52 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@967 -- # kill 1884701 00:12:00.938 07:58:52 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@972 -- # wait 1884701 00:12:01.195 07:58:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:01.195 07:58:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:01.195 07:58:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:01.195 07:58:52 nvmf_tcp.nvmf_multitarget -- 
nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:01.195 07:58:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:01.195 07:58:52 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:01.195 07:58:52 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:01.196 07:58:52 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:03.156 07:58:54 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:03.156 00:12:03.156 real 0m5.674s 00:12:03.156 user 0m6.354s 00:12:03.156 sys 0m1.908s 00:12:03.156 07:58:54 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:03.156 07:58:54 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:03.156 ************************************ 00:12:03.156 END TEST nvmf_multitarget 00:12:03.156 ************************************ 00:12:03.156 07:58:54 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:03.156 07:58:54 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:03.156 07:58:54 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:03.156 07:58:54 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:03.156 07:58:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:03.414 ************************************ 00:12:03.414 START TEST nvmf_rpc 00:12:03.414 ************************************ 00:12:03.414 07:58:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:03.414 * Looking for test storage... 
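
The nvmf_multitarget pass above reduces to a short RPC sequence: list targets, add two, check the count went from 1 to 3, delete both, and check it is back to 1. A minimal sketch of that flow using the same helper seen in the trace (paths are from this workspace; the -s 32 flag is copied from the trace and is assumed to cap subsystems per target):

    # sketch reconstructed from the traced commands, not the literal script
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
    [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]    # only the default target exists
    $rpc nvmf_create_target -n nvmf_tgt_1 -s 32
    $rpc nvmf_create_target -n nvmf_tgt_2 -s 32
    [ "$($rpc nvmf_get_targets | jq length)" -eq 3 ]    # default plus the two new targets
    $rpc nvmf_delete_target -n nvmf_tgt_1
    $rpc nvmf_delete_target -n nvmf_tgt_2
    [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]    # back to the default only
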
00:12:03.414 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:03.414 07:58:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:03.414 07:58:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:12:03.414 07:58:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:03.414 07:58:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:03.414 07:58:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:03.414 07:58:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:03.414 07:58:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:03.414 07:58:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:03.414 07:58:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:03.414 07:58:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:03.414 07:58:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:03.414 07:58:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:03.414 07:58:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:03.414 07:58:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:03.414 07:58:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:03.414 07:58:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:03.414 07:58:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:03.414 07:58:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:03.414 07:58:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:03.414 07:58:54 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:03.414 07:58:54 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:03.414 07:58:54 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:03.414 07:58:54 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.414 07:58:54 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.414 07:58:54 nvmf_tcp.nvmf_rpc -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.414 07:58:54 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:12:03.414 07:58:54 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.414 07:58:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:12:03.414 07:58:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:03.414 07:58:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:03.414 07:58:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:03.415 07:58:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:03.415 07:58:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:03.415 07:58:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:03.415 07:58:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:03.415 07:58:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:03.415 07:58:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:12:03.415 07:58:54 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:12:03.415 07:58:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:03.415 07:58:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:03.415 07:58:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:03.415 07:58:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:03.415 07:58:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:03.415 07:58:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:03.415 07:58:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:03.415 07:58:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:03.415 07:58:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:03.415 07:58:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:03.415 07:58:54 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:12:03.415 07:58:54 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:05.317 07:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:05.317 07:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:12:05.317 07:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 
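
The trace that follows is nvmf/common.sh picking NICs for the phy test: it builds whitelists of Intel E810/X722 and Mellanox PCI device IDs, matches them against the bus, then resolves each matching function to its kernel net devices through sysfs. A condensed sketch of that walk (the 0x8086/0x159b pair is taken from the trace below; the sysfs layout is standard Linux, the rest is illustrative):

    # sketch: find net devices behind whitelisted NVMe-oF-capable NICs
    for pci in /sys/bus/pci/devices/*; do
        vendor=$(<"$pci/vendor") device=$(<"$pci/device")
        [ "$vendor" = 0x8086 ] && [ "$device" = 0x159b ] || continue  # Intel E810-family NIC
        for net in "$pci"/net/*; do
            [ -e "$net" ] && echo "Found net device under ${pci##*/}: ${net##*/}"
        done
    done
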
00:12:05.317 07:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:05.317 07:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:05.317 07:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:05.317 07:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:05.317 07:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:12:05.317 07:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:05.317 07:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:12:05.317 07:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:12:05.317 07:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:12:05.317 07:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:12:05.317 07:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:12:05.317 07:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:12:05.317 07:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:05.317 07:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:05.317 07:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:05.317 07:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:05.317 07:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:05.317 07:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:05.317 07:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:05.317 07:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:05.317 07:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:05.317 07:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:05.317 07:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:05.317 07:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:05.317 07:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:05.317 07:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:05.317 07:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:05.317 07:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:05.317 07:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:05.317 07:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:05.317 07:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:05.317 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:05.317 07:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:05.317 07:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:05.317 07:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:05.317 07:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:05.317 07:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:05.317 07:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:05.317 07:58:57 nvmf_tcp.nvmf_rpc 
-- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:05.317 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:05.317 07:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:05.317 07:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:05.317 07:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:05.317 07:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:05.317 07:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:05.317 07:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:05.317 07:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:05.317 07:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:05.317 07:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:05.317 07:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:05.317 07:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:05.317 07:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:05.317 07:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:05.317 07:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:05.317 07:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:05.317 07:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:05.317 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:05.317 07:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:05.317 07:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:05.317 07:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:05.317 07:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:05.317 07:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:05.317 07:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:05.317 07:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:05.317 07:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:05.317 07:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:05.317 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:05.317 07:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:05.317 07:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:05.317 07:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:12:05.317 07:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:05.317 07:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:05.317 07:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:05.317 07:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:05.317 07:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:05.317 07:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:05.317 07:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:05.317 07:58:57 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:05.317 07:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:05.317 07:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:05.317 07:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:05.317 07:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:05.317 07:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:05.576 07:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:05.576 07:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:05.576 07:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:05.576 07:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:05.576 07:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:05.576 07:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:05.576 07:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:05.576 07:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:05.576 07:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:05.576 07:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:05.576 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:05.576 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.169 ms 00:12:05.576 00:12:05.576 --- 10.0.0.2 ping statistics --- 00:12:05.576 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:05.576 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:12:05.576 07:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:05.576 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:05.576 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.092 ms 00:12:05.576 00:12:05.576 --- 10.0.0.1 ping statistics --- 00:12:05.576 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:05.576 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:12:05.576 07:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:05.576 07:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:12:05.576 07:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:05.576 07:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:05.576 07:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:05.576 07:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:05.576 07:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:05.576 07:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:05.576 07:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:05.576 07:58:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:05.576 07:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:05.576 07:58:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:05.576 07:58:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:05.576 07:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=1886794 00:12:05.576 07:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:05.576 07:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 1886794 00:12:05.576 07:58:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@829 -- # '[' -z 1886794 ']' 00:12:05.576 07:58:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:05.576 07:58:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:05.576 07:58:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:05.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:05.576 07:58:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:05.576 07:58:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:05.576 [2024-07-13 07:58:57.262835] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:12:05.576 [2024-07-13 07:58:57.262933] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:05.577 EAL: No free 2048 kB hugepages reported on node 1 00:12:05.835 [2024-07-13 07:58:57.337211] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:05.835 [2024-07-13 07:58:57.432059] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:05.835 [2024-07-13 07:58:57.432119] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:05.835 [2024-07-13 07:58:57.432135] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:05.835 [2024-07-13 07:58:57.432157] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:05.835 [2024-07-13 07:58:57.432169] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:05.835 [2024-07-13 07:58:57.432226] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:05.835 [2024-07-13 07:58:57.432283] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:05.835 [2024-07-13 07:58:57.432344] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:05.835 [2024-07-13 07:58:57.432344] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:05.835 07:58:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:05.835 07:58:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@862 -- # return 0 00:12:05.835 07:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:05.835 07:58:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:05.835 07:58:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:06.093 07:58:57 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:06.093 07:58:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:06.093 07:58:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:06.093 07:58:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:06.093 07:58:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:06.093 07:58:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:12:06.093 "tick_rate": 2700000000, 00:12:06.093 "poll_groups": [ 00:12:06.093 { 00:12:06.093 "name": "nvmf_tgt_poll_group_000", 00:12:06.093 "admin_qpairs": 0, 00:12:06.093 "io_qpairs": 0, 00:12:06.093 "current_admin_qpairs": 0, 00:12:06.093 "current_io_qpairs": 0, 00:12:06.093 "pending_bdev_io": 0, 00:12:06.093 "completed_nvme_io": 0, 00:12:06.093 "transports": [] 00:12:06.093 }, 00:12:06.093 { 00:12:06.093 "name": "nvmf_tgt_poll_group_001", 00:12:06.093 "admin_qpairs": 0, 00:12:06.093 "io_qpairs": 0, 00:12:06.093 "current_admin_qpairs": 0, 00:12:06.093 "current_io_qpairs": 0, 00:12:06.093 "pending_bdev_io": 0, 00:12:06.093 "completed_nvme_io": 0, 00:12:06.093 "transports": [] 00:12:06.093 }, 00:12:06.093 { 00:12:06.093 "name": "nvmf_tgt_poll_group_002", 00:12:06.093 "admin_qpairs": 0, 00:12:06.093 "io_qpairs": 0, 00:12:06.093 "current_admin_qpairs": 0, 00:12:06.093 "current_io_qpairs": 0, 00:12:06.093 "pending_bdev_io": 0, 00:12:06.093 "completed_nvme_io": 0, 00:12:06.093 "transports": [] 00:12:06.093 }, 00:12:06.093 { 00:12:06.093 "name": "nvmf_tgt_poll_group_003", 00:12:06.093 "admin_qpairs": 0, 00:12:06.093 "io_qpairs": 0, 00:12:06.093 "current_admin_qpairs": 0, 00:12:06.093 "current_io_qpairs": 0, 00:12:06.093 "pending_bdev_io": 0, 00:12:06.093 "completed_nvme_io": 0, 00:12:06.093 "transports": [] 00:12:06.093 } 00:12:06.093 ] 00:12:06.093 }' 00:12:06.093 07:58:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:06.093 07:58:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:06.093 07:58:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:06.093 07:58:57 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@15 -- # wc -l 00:12:06.093 07:58:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:12:06.093 07:58:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:06.093 07:58:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:06.093 07:58:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:06.093 07:58:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:06.093 07:58:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:06.093 [2024-07-13 07:58:57.660800] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:06.093 07:58:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:06.093 07:58:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:12:06.093 07:58:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:06.093 07:58:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:06.093 07:58:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:06.093 07:58:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:12:06.093 "tick_rate": 2700000000, 00:12:06.093 "poll_groups": [ 00:12:06.093 { 00:12:06.093 "name": "nvmf_tgt_poll_group_000", 00:12:06.093 "admin_qpairs": 0, 00:12:06.093 "io_qpairs": 0, 00:12:06.093 "current_admin_qpairs": 0, 00:12:06.093 "current_io_qpairs": 0, 00:12:06.093 "pending_bdev_io": 0, 00:12:06.093 "completed_nvme_io": 0, 00:12:06.093 "transports": [ 00:12:06.093 { 00:12:06.093 "trtype": "TCP" 00:12:06.093 } 00:12:06.093 ] 00:12:06.093 }, 00:12:06.093 { 00:12:06.093 "name": "nvmf_tgt_poll_group_001", 00:12:06.093 "admin_qpairs": 0, 00:12:06.093 "io_qpairs": 0, 00:12:06.093 "current_admin_qpairs": 0, 00:12:06.093 "current_io_qpairs": 0, 00:12:06.093 "pending_bdev_io": 0, 00:12:06.093 "completed_nvme_io": 0, 00:12:06.093 "transports": [ 00:12:06.093 { 00:12:06.093 "trtype": "TCP" 00:12:06.093 } 00:12:06.093 ] 00:12:06.093 }, 00:12:06.093 { 00:12:06.093 "name": "nvmf_tgt_poll_group_002", 00:12:06.093 "admin_qpairs": 0, 00:12:06.093 "io_qpairs": 0, 00:12:06.093 "current_admin_qpairs": 0, 00:12:06.093 "current_io_qpairs": 0, 00:12:06.093 "pending_bdev_io": 0, 00:12:06.093 "completed_nvme_io": 0, 00:12:06.093 "transports": [ 00:12:06.093 { 00:12:06.093 "trtype": "TCP" 00:12:06.093 } 00:12:06.093 ] 00:12:06.094 }, 00:12:06.094 { 00:12:06.094 "name": "nvmf_tgt_poll_group_003", 00:12:06.094 "admin_qpairs": 0, 00:12:06.094 "io_qpairs": 0, 00:12:06.094 "current_admin_qpairs": 0, 00:12:06.094 "current_io_qpairs": 0, 00:12:06.094 "pending_bdev_io": 0, 00:12:06.094 "completed_nvme_io": 0, 00:12:06.094 "transports": [ 00:12:06.094 { 00:12:06.094 "trtype": "TCP" 00:12:06.094 } 00:12:06.094 ] 00:12:06.094 } 00:12:06.094 ] 00:12:06.094 }' 00:12:06.094 07:58:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:06.094 07:58:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:06.094 07:58:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:06.094 07:58:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:06.094 07:58:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:12:06.094 07:58:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:12:06.094 07:58:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 
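
The jcount/jsum helpers being traced at this point are the test's whole verification mechanism: jcount counts how many values a jq filter yields, jsum folds a numeric per-poll-group field into one total. A reconstruction from the traced pipeline (the real definitions live in target/rpc.sh and filter a saved stats blob; piping rpc_cmd output directly is the simplification here):

    jcount() {  # how many values does the filter produce?
        local filter=$1
        rpc_cmd nvmf_get_stats | jq "$filter" | wc -l
    }
    jsum() {    # sum a numeric field across all poll groups
        local filter=$1
        rpc_cmd nvmf_get_stats | jq "$filter" | awk '{s+=$1} END {print s}'
    }
    jcount '.poll_groups[].name'        # 4, one poll group per core in -m 0xF
    jsum '.poll_groups[].io_qpairs'     # 0 while no initiator is connected
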
00:12:06.094 07:58:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:06.094 07:58:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:06.094 07:58:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:06.094 07:58:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:12:06.094 07:58:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:12:06.094 07:58:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:06.094 07:58:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:06.094 07:58:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:06.094 07:58:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:06.094 Malloc1 00:12:06.094 07:58:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:06.094 07:58:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:06.094 07:58:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:06.094 07:58:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:06.094 07:58:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:06.094 07:58:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:06.094 07:58:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:06.094 07:58:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:06.094 07:58:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:06.094 07:58:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:06.094 07:58:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:06.094 07:58:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:06.094 07:58:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:06.094 07:58:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:06.094 07:58:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:06.094 07:58:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:06.094 [2024-07-13 07:58:57.814276] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:06.094 07:58:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:06.094 07:58:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:12:06.094 07:58:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:12:06.094 07:58:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:12:06.094 07:58:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 
-- # local arg=nvme 00:12:06.094 07:58:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:06.094 07:58:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:12:06.094 07:58:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:06.094 07:58:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:12:06.094 07:58:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:06.094 07:58:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:12:06.094 07:58:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:12:06.094 07:58:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:12:06.352 [2024-07-13 07:58:57.836860] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:12:06.352 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:06.352 could not add new controller: failed to write to nvme-fabrics device 00:12:06.352 07:58:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:12:06.352 07:58:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:06.352 07:58:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:06.352 07:58:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:06.352 07:58:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:06.352 07:58:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:06.352 07:58:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:06.352 07:58:57 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:06.352 07:58:57 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:06.917 07:58:58 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:12:06.917 07:58:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:06.917 07:58:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:06.917 07:58:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:06.917 07:58:58 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:09.441 07:59:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:09.441 07:59:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:09.441 07:59:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:09.441 07:59:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:09.441 07:59:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:09.441 07:59:00 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:09.441 07:59:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:09.441 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:09.441 07:59:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:09.441 07:59:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:09.441 07:59:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:09.441 07:59:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:09.441 07:59:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:09.441 07:59:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:09.441 07:59:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:09.441 07:59:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:09.441 07:59:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:09.441 07:59:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:09.441 07:59:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:09.441 07:59:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:09.441 07:59:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:12:09.441 07:59:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:09.441 07:59:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:12:09.441 07:59:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:09.441 07:59:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:12:09.441 07:59:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:09.441 07:59:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:12:09.441 07:59:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:09.441 07:59:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:12:09.441 07:59:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:12:09.441 07:59:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:09.441 [2024-07-13 07:59:00.655603] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:12:09.441 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:09.441 could not add new controller: failed to write to nvme-fabrics device 00:12:09.441 07:59:00 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@651 -- # es=1 00:12:09.441 07:59:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:09.441 07:59:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:09.441 07:59:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:09.441 07:59:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:12:09.441 07:59:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:09.441 07:59:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:09.441 07:59:00 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:09.441 07:59:00 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:09.698 07:59:01 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:12:09.698 07:59:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:09.698 07:59:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:09.698 07:59:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:09.698 07:59:01 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:11.592 07:59:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:11.592 07:59:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:11.592 07:59:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:11.592 07:59:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:11.592 07:59:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:11.592 07:59:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:11.592 07:59:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:11.848 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:11.848 07:59:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:11.848 07:59:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:11.848 07:59:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:11.848 07:59:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:11.848 07:59:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:11.848 07:59:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:11.848 07:59:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:11.849 07:59:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:11.849 07:59:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.849 07:59:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:11.849 07:59:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:11.849 07:59:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:12:11.849 07:59:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:11.849 07:59:03 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:11.849 07:59:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.849 07:59:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:11.849 07:59:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:11.849 07:59:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:11.849 07:59:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.849 07:59:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:11.849 [2024-07-13 07:59:03.396410] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:11.849 07:59:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:11.849 07:59:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:11.849 07:59:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.849 07:59:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:11.849 07:59:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:11.849 07:59:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:11.849 07:59:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:11.849 07:59:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:11.849 07:59:03 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:11.849 07:59:03 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:12.411 07:59:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:12.411 07:59:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:12.411 07:59:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:12.411 07:59:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:12.411 07:59:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:14.933 07:59:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:14.933 07:59:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:14.933 07:59:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:14.933 07:59:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:14.933 07:59:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:14.933 07:59:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:14.933 07:59:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:14.933 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:14.933 07:59:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:14.933 07:59:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:14.933 07:59:06 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:14.933 07:59:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:14.933 07:59:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:14.933 07:59:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:14.933 07:59:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:14.933 07:59:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:14.933 07:59:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:14.933 07:59:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:14.933 07:59:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:14.933 07:59:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:14.933 07:59:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:14.933 07:59:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:14.933 07:59:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:14.933 07:59:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:14.933 07:59:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:14.933 07:59:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:14.933 07:59:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:14.933 07:59:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:14.933 07:59:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:14.933 07:59:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:14.933 07:59:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:14.933 [2024-07-13 07:59:06.247993] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:14.933 07:59:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:14.933 07:59:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:14.933 07:59:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:14.933 07:59:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:14.933 07:59:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:14.933 07:59:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:14.933 07:59:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:14.933 07:59:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:14.933 07:59:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:14.933 07:59:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:15.192 07:59:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:15.192 07:59:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 
-- # local i=0 00:12:15.192 07:59:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:15.192 07:59:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:15.192 07:59:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:17.723 07:59:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:17.723 07:59:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:17.723 07:59:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:17.723 07:59:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:17.723 07:59:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:17.723 07:59:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:17.723 07:59:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:17.723 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:17.723 07:59:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:17.723 07:59:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:17.723 07:59:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:17.723 07:59:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:17.723 07:59:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:17.723 07:59:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:17.723 07:59:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:17.723 07:59:08 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:17.723 07:59:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:17.723 07:59:08 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:17.723 07:59:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:17.723 07:59:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:17.723 07:59:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:17.723 07:59:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:17.723 07:59:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:17.723 07:59:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:17.723 07:59:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:17.723 07:59:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:17.723 07:59:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:17.723 07:59:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:17.723 07:59:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:17.723 07:59:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:17.723 07:59:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:17.723 [2024-07-13 07:59:09.024621] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:12:17.723 07:59:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:17.723 07:59:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:17.723 07:59:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:17.723 07:59:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:17.723 07:59:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:17.723 07:59:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:17.723 07:59:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:17.723 07:59:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:17.723 07:59:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:17.724 07:59:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:18.310 07:59:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:18.310 07:59:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:18.310 07:59:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:18.310 07:59:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:18.310 07:59:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:20.218 07:59:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:20.218 07:59:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:20.218 07:59:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:20.218 07:59:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:20.218 07:59:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:20.218 07:59:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:20.218 07:59:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:20.218 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:20.218 07:59:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:20.218 07:59:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:20.218 07:59:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:20.218 07:59:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:20.218 07:59:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:20.218 07:59:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:20.218 07:59:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:20.218 07:59:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:20.218 07:59:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:20.218 07:59:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:20.219 07:59:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:12:20.219 07:59:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:20.219 07:59:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:20.219 07:59:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:20.219 07:59:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:20.219 07:59:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:20.219 07:59:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:20.219 07:59:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:20.219 07:59:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:20.219 07:59:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:20.219 07:59:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:20.219 07:59:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:20.219 07:59:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:20.219 [2024-07-13 07:59:11.912474] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:20.219 07:59:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:20.219 07:59:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:20.219 07:59:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:20.219 07:59:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:20.219 07:59:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:20.219 07:59:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:20.219 07:59:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:20.219 07:59:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:20.219 07:59:11 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:20.219 07:59:11 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:21.151 07:59:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:21.151 07:59:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:21.151 07:59:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:21.151 07:59:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:21.151 07:59:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:23.046 07:59:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:23.046 07:59:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:23.046 07:59:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:23.046 07:59:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:23.046 07:59:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:23.046 
07:59:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:23.046 07:59:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:23.046 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:23.046 07:59:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:23.046 07:59:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:23.046 07:59:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:23.046 07:59:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:23.046 07:59:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:23.046 07:59:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:23.046 07:59:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:23.046 07:59:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:23.046 07:59:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:23.046 07:59:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:23.046 07:59:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:23.046 07:59:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:23.046 07:59:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:23.046 07:59:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:23.046 07:59:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:23.046 07:59:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:23.046 07:59:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:23.046 07:59:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:23.046 07:59:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:23.046 07:59:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:23.046 07:59:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:23.046 07:59:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:23.046 07:59:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:23.046 [2024-07-13 07:59:14.728543] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:23.046 07:59:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:23.046 07:59:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:23.046 07:59:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:23.046 07:59:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:23.046 07:59:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:23.046 07:59:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:23.046 07:59:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:23.046 07:59:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:23.046 07:59:14 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:23.046 07:59:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:23.979 07:59:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:23.979 07:59:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:23.979 07:59:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:23.979 07:59:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:23.979 07:59:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:25.876 07:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:25.876 07:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:25.876 07:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:25.876 07:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:25.876 07:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:25.876 07:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:25.876 07:59:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:25.876 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:25.876 07:59:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:25.876 07:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:25.876 07:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:25.876 07:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:25.876 07:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:25.876 07:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:25.876 07:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:25.876 07:59:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:25.876 07:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:25.876 07:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:25.876 07:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:25.876 07:59:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:25.876 07:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:25.876 07:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:25.876 07:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:25.876 07:59:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:12:25.876 07:59:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:25.876 07:59:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:25.877 07:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:25.877 07:59:17 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:12:25.877 07:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:25.877 07:59:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:25.877 07:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:25.877 07:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:25.877 [2024-07-13 07:59:17.513712] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:25.877 07:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:25.877 07:59:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:25.877 07:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:25.877 07:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:25.877 07:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:25.877 07:59:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:25.877 07:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:25.877 07:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:25.877 07:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:25.877 07:59:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:25.877 07:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:25.877 07:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:25.877 07:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:25.877 07:59:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:25.877 07:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:25.877 07:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:25.877 07:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:25.877 07:59:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:25.877 07:59:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:25.877 07:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:25.877 07:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:25.877 07:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:25.877 07:59:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:25.877 07:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:25.877 07:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:25.877 [2024-07-13 07:59:17.561797] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:25.877 07:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:25.877 07:59:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:25.877 07:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:12:25.877 07:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:25.877 07:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:25.877 07:59:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:25.877 07:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:25.877 07:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:25.877 07:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:25.877 07:59:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:25.877 07:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:25.877 07:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:25.877 07:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:25.877 07:59:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:25.877 07:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:25.877 07:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:25.877 07:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:25.877 07:59:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:25.877 07:59:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:25.877 07:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:25.877 07:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:25.877 07:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:25.877 07:59:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:25.877 07:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:25.877 07:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:26.135 [2024-07-13 07:59:17.610019] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:26.135 07:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:26.135 07:59:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:26.135 07:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:26.135 07:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:26.135 07:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:26.135 07:59:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:26.135 07:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:26.135 07:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:26.135 07:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:26.135 07:59:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:26.135 07:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:26.135 07:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
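[Editor's note] The nvme connect steps earlier in this log are each followed by waitforserial, and each disconnect by waitforserial_disconnect; both poll lsblk until the subsystem's serial number appears in (or drops out of) the block-device table. A minimal sketch of that polling logic, reconstructed from the traced commands only (the exact retry counts, sleeps, and variable names in common/autotest_common.sh may differ):

    # Poll until a block device with the given serial shows up (assumption:
    # simplified from the waitforserial trace; the real helper also takes an
    # optional expected device count).
    waitforserial() {
        local serial=$1 i=0 want=${2:-1} found=0
        while (( i++ <= 15 )); do
            found=$(lsblk -l -o NAME,SERIAL | grep -c "$serial" || true)
            (( found == want )) && return 0
            sleep 2
        done
        return 1
    }

    # Poll until no device with that serial remains after nvme disconnect.
    waitforserial_disconnect() {
        local serial=$1 i=0
        while lsblk -l -o NAME,SERIAL | grep -q -w "$serial"; do
            (( i++ > 15 )) && return 1
            sleep 1
        done
        return 0
    }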
00:12:26.135 07:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:26.135 07:59:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:26.135 07:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:26.135 07:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:26.135 07:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:26.135 07:59:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:26.135 07:59:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:26.135 07:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:26.135 07:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:26.135 07:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:26.135 07:59:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:26.135 07:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:26.135 07:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:26.135 [2024-07-13 07:59:17.658157] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:26.135 07:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:26.135 07:59:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:26.135 07:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:26.135 07:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:26.135 07:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:26.135 07:59:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:26.135 07:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:26.135 07:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:26.135 07:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:26.135 07:59:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:26.135 07:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:26.135 07:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:26.135 07:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:26.135 07:59:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:26.135 07:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:26.135 07:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:26.135 07:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:26.135 07:59:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:26.135 07:59:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:26.135 07:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:26.135 07:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
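[Editor's note] The surrounding target/rpc.sh@99-107 trace is one pass of a five-iteration lifecycle loop: build a subsystem, attach the TCP listener and a namespace, then tear it all down again. A hedged reconstruction of that loop from the traced RPCs (rpc_cmd is assumed to be a plain wrapper around scripts/rpc.py against the running target):

    rpc_cmd() { scripts/rpc.py "$@"; }    # assumption: no retries or extra flags

    loops=5
    for i in $(seq 1 $loops); do
        rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
        rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1   # namespace gets ID 1
        rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
        rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
        rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    done

Unlike the rpc.sh@81-94 loop before it, no host ever connects here; the point is exercising the RPC create/delete paths themselves.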
00:12:26.135 07:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:26.135 07:59:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:26.135 07:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:26.135 07:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:26.135 [2024-07-13 07:59:17.706353] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:26.135 07:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:26.135 07:59:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:26.135 07:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:26.135 07:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:26.135 07:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:26.135 07:59:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:26.135 07:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:26.135 07:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:26.135 07:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:26.135 07:59:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:26.135 07:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:26.135 07:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:26.135 07:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:26.135 07:59:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:26.135 07:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:26.135 07:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:26.135 07:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:26.136 07:59:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:12:26.136 07:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:26.136 07:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:26.136 07:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:26.136 07:59:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:12:26.136 "tick_rate": 2700000000, 00:12:26.136 "poll_groups": [ 00:12:26.136 { 00:12:26.136 "name": "nvmf_tgt_poll_group_000", 00:12:26.136 "admin_qpairs": 2, 00:12:26.136 "io_qpairs": 84, 00:12:26.136 "current_admin_qpairs": 0, 00:12:26.136 "current_io_qpairs": 0, 00:12:26.136 "pending_bdev_io": 0, 00:12:26.136 "completed_nvme_io": 184, 00:12:26.136 "transports": [ 00:12:26.136 { 00:12:26.136 "trtype": "TCP" 00:12:26.136 } 00:12:26.136 ] 00:12:26.136 }, 00:12:26.136 { 00:12:26.136 "name": "nvmf_tgt_poll_group_001", 00:12:26.136 "admin_qpairs": 2, 00:12:26.136 "io_qpairs": 84, 00:12:26.136 "current_admin_qpairs": 0, 00:12:26.136 "current_io_qpairs": 0, 00:12:26.136 "pending_bdev_io": 0, 00:12:26.136 "completed_nvme_io": 134, 00:12:26.136 "transports": [ 00:12:26.136 { 00:12:26.136 "trtype": "TCP" 00:12:26.136 } 00:12:26.136 ] 00:12:26.136 }, 00:12:26.136 { 00:12:26.136 
"name": "nvmf_tgt_poll_group_002", 00:12:26.136 "admin_qpairs": 1, 00:12:26.136 "io_qpairs": 84, 00:12:26.136 "current_admin_qpairs": 0, 00:12:26.136 "current_io_qpairs": 0, 00:12:26.136 "pending_bdev_io": 0, 00:12:26.136 "completed_nvme_io": 233, 00:12:26.136 "transports": [ 00:12:26.136 { 00:12:26.136 "trtype": "TCP" 00:12:26.136 } 00:12:26.136 ] 00:12:26.136 }, 00:12:26.136 { 00:12:26.136 "name": "nvmf_tgt_poll_group_003", 00:12:26.136 "admin_qpairs": 2, 00:12:26.136 "io_qpairs": 84, 00:12:26.136 "current_admin_qpairs": 0, 00:12:26.136 "current_io_qpairs": 0, 00:12:26.136 "pending_bdev_io": 0, 00:12:26.136 "completed_nvme_io": 135, 00:12:26.136 "transports": [ 00:12:26.136 { 00:12:26.136 "trtype": "TCP" 00:12:26.136 } 00:12:26.136 ] 00:12:26.136 } 00:12:26.136 ] 00:12:26.136 }' 00:12:26.136 07:59:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:12:26.136 07:59:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:26.136 07:59:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:26.136 07:59:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:26.136 07:59:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:12:26.136 07:59:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:12:26.136 07:59:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:26.136 07:59:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:26.136 07:59:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:26.136 07:59:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:12:26.136 07:59:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:12:26.136 07:59:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:12:26.136 07:59:17 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:12:26.136 07:59:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:26.136 07:59:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:12:26.136 07:59:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:26.136 07:59:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:12:26.136 07:59:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:26.136 07:59:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:26.136 rmmod nvme_tcp 00:12:26.136 rmmod nvme_fabrics 00:12:26.136 rmmod nvme_keyring 00:12:26.394 07:59:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:26.394 07:59:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:12:26.394 07:59:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:12:26.394 07:59:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 1886794 ']' 00:12:26.394 07:59:17 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 1886794 00:12:26.394 07:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@948 -- # '[' -z 1886794 ']' 00:12:26.394 07:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # kill -0 1886794 00:12:26.394 07:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # uname 00:12:26.394 07:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:26.394 07:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1886794 00:12:26.394 07:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:12:26.394 07:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:26.394 07:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1886794' 00:12:26.394 killing process with pid 1886794 00:12:26.394 07:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@967 -- # kill 1886794 00:12:26.394 07:59:17 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@972 -- # wait 1886794 00:12:26.653 07:59:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:26.653 07:59:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:26.653 07:59:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:26.653 07:59:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:26.653 07:59:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:26.653 07:59:18 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:26.653 07:59:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:26.653 07:59:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:28.559 07:59:20 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:28.559 00:12:28.559 real 0m25.320s 00:12:28.559 user 1m22.071s 00:12:28.559 sys 0m4.146s 00:12:28.559 07:59:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:28.559 07:59:20 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:28.559 ************************************ 00:12:28.559 END TEST nvmf_rpc 00:12:28.559 ************************************ 00:12:28.559 07:59:20 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:28.559 07:59:20 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:28.559 07:59:20 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:28.559 07:59:20 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:28.559 07:59:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:28.559 ************************************ 00:12:28.559 START TEST nvmf_invalid 00:12:28.559 ************************************ 00:12:28.559 07:59:20 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:28.818 * Looking for test storage... 
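[Editor's note] Before nvmf_rpc tears down, it cross-checks nvmf_get_stats: the jsum helper traced at target/rpc.sh@19-20 extracts one numeric field per poll group from the captured JSON and sums the column with awk. A sketch, under the assumption that $stats holds the JSON string captured above:

    # Sum one numeric field across all poll groups,
    # e.g. '.poll_groups[].admin_qpairs'.
    jsum() {
        local filter=$1
        jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
    }

    # With the stats dumped above, admin_qpairs sums to 2+2+1+2 = 7 and
    # io_qpairs to 4*84 = 336, which is why the traced checks read
    # (( 7 > 0 )) and (( 336 > 0 )).
    (( $(jsum '.poll_groups[].admin_qpairs') > 0 ))
    (( $(jsum '.poll_groups[].io_qpairs') > 0 ))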
00:12:28.818 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:28.818 07:59:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:28.818 07:59:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:12:28.818 07:59:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:28.818 07:59:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:28.818 07:59:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:28.818 07:59:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:28.818 07:59:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:28.818 07:59:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:28.818 07:59:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:28.818 07:59:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:28.818 07:59:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:28.818 07:59:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:28.818 07:59:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:28.818 07:59:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:28.818 07:59:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:28.818 07:59:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:28.818 07:59:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:28.818 07:59:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:28.818 07:59:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:28.818 07:59:20 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:28.818 07:59:20 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:28.818 07:59:20 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:28.818 07:59:20 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:28.818 07:59:20 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:28.818 07:59:20 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:28.818 07:59:20 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:12:28.818 07:59:20 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:28.818 07:59:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:12:28.818 07:59:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:28.818 07:59:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:28.818 07:59:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:28.818 07:59:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:28.818 07:59:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:28.818 07:59:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:28.818 07:59:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:28.818 07:59:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:28.818 07:59:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:28.818 07:59:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:28.818 07:59:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:12:28.818 07:59:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:12:28.818 07:59:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:12:28.818 07:59:20 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:12:28.818 07:59:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:28.818 07:59:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:28.818 07:59:20 nvmf_tcp.nvmf_invalid 
-- nvmf/common.sh@448 -- # prepare_net_devs 00:12:28.818 07:59:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:28.818 07:59:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:28.818 07:59:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:28.818 07:59:20 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:28.818 07:59:20 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:28.818 07:59:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:28.818 07:59:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:28.818 07:59:20 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:12:28.818 07:59:20 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:30.719 07:59:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:30.719 07:59:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:12:30.719 07:59:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:30.719 07:59:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:30.719 07:59:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:30.719 07:59:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:30.719 07:59:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:30.719 07:59:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:12:30.719 07:59:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:30.719 07:59:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:12:30.719 07:59:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:12:30.719 07:59:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:12:30.719 07:59:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:12:30.719 07:59:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:12:30.719 07:59:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:12:30.719 07:59:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:30.719 07:59:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:30.719 07:59:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:30.719 07:59:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:30.719 07:59:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:30.719 07:59:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:30.719 07:59:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:30.719 07:59:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:30.719 07:59:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:30.719 07:59:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:30.719 07:59:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:30.719 07:59:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:12:30.719 07:59:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:30.719 07:59:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:30.719 07:59:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:30.719 07:59:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:30.719 07:59:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:30.719 07:59:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:30.719 07:59:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:30.719 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:30.719 07:59:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:30.719 07:59:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:30.719 07:59:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:30.719 07:59:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:30.719 07:59:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:30.719 07:59:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:30.719 07:59:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:30.719 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:30.719 07:59:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:30.719 07:59:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:30.719 07:59:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:30.719 07:59:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:30.720 07:59:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:30.720 07:59:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:30.720 07:59:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:30.720 07:59:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:30.720 07:59:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:30.720 07:59:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:30.720 07:59:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:30.720 07:59:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:30.720 07:59:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:30.720 07:59:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:30.720 07:59:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:30.720 07:59:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:30.720 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:30.720 07:59:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:30.720 07:59:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:30.720 07:59:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:30.720 07:59:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:30.720 07:59:22 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:30.720 07:59:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:30.720 07:59:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:30.720 07:59:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:30.720 07:59:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:30.720 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:30.720 07:59:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:30.720 07:59:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:30.720 07:59:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:12:30.720 07:59:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:30.720 07:59:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:30.720 07:59:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:30.720 07:59:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:30.720 07:59:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:30.720 07:59:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:30.720 07:59:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:30.720 07:59:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:30.720 07:59:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:30.720 07:59:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:30.720 07:59:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:30.720 07:59:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:30.720 07:59:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:30.720 07:59:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:30.720 07:59:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:30.720 07:59:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:30.720 07:59:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:30.720 07:59:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:30.720 07:59:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:30.720 07:59:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:30.720 07:59:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:30.720 07:59:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:30.720 07:59:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:30.720 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:30.720 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.197 ms 00:12:30.720 00:12:30.720 --- 10.0.0.2 ping statistics --- 00:12:30.720 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:30.720 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:12:30.720 07:59:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:30.720 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:30.720 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.104 ms 00:12:30.720 00:12:30.720 --- 10.0.0.1 ping statistics --- 00:12:30.720 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:30.720 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:12:30.720 07:59:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:30.720 07:59:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:12:30.720 07:59:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:30.720 07:59:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:30.720 07:59:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:30.720 07:59:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:30.720 07:59:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:30.720 07:59:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:30.720 07:59:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:30.720 07:59:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:12:30.720 07:59:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:30.720 07:59:22 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:30.720 07:59:22 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:30.720 07:59:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=1891290 00:12:30.720 07:59:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:30.720 07:59:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 1891290 00:12:30.720 07:59:22 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@829 -- # '[' -z 1891290 ']' 00:12:30.720 07:59:22 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:30.720 07:59:22 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:30.720 07:59:22 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:30.720 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:30.720 07:59:22 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:30.720 07:59:22 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:30.720 [2024-07-13 07:59:22.446652] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
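[Editor's note] The nvmf_invalid prologue above rebuilds the two-namespace topology these phy tests use: one port of the E810 NIC is moved into a private network namespace to act as the target, the other stays in the root namespace as the initiator, and a ping in each direction proves the link before the target starts. Condensed from the traced commands (paths shortened; the traced run uses the workspace build directory for nvmf_tgt):

    NS=cvl_0_0_ns_spdk
    ip netns add $NS
    ip link set cvl_0_0 netns $NS                       # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side, root namespace
    ip netns exec $NS ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec $NS ip link set cvl_0_0 up
    ip netns exec $NS ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec $NS ping -c 1 10.0.0.1                # target -> initiator
    modprobe nvme-tcp
    ip netns exec $NS ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &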
00:12:30.720 [2024-07-13 07:59:22.446743] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:30.979 EAL: No free 2048 kB hugepages reported on node 1 00:12:30.979 [2024-07-13 07:59:22.516272] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:30.979 [2024-07-13 07:59:22.610581] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:30.979 [2024-07-13 07:59:22.610644] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:30.979 [2024-07-13 07:59:22.610660] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:30.979 [2024-07-13 07:59:22.610674] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:30.979 [2024-07-13 07:59:22.610685] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:30.979 [2024-07-13 07:59:22.610764] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:30.979 [2024-07-13 07:59:22.610816] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:30.979 [2024-07-13 07:59:22.610891] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:30.979 [2024-07-13 07:59:22.610894] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:31.236 07:59:22 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:31.236 07:59:22 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@862 -- # return 0 00:12:31.236 07:59:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:31.236 07:59:22 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:31.236 07:59:22 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:31.236 07:59:22 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:31.236 07:59:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:31.236 07:59:22 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode9451 00:12:31.493 [2024-07-13 07:59:23.042684] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:12:31.493 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:12:31.493 { 00:12:31.493 "nqn": "nqn.2016-06.io.spdk:cnode9451", 00:12:31.493 "tgt_name": "foobar", 00:12:31.493 "method": "nvmf_create_subsystem", 00:12:31.493 "req_id": 1 00:12:31.493 } 00:12:31.493 Got JSON-RPC error response 00:12:31.493 response: 00:12:31.493 { 00:12:31.493 "code": -32603, 00:12:31.493 "message": "Unable to find target foobar" 00:12:31.493 }' 00:12:31.493 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:12:31.493 { 00:12:31.493 "nqn": "nqn.2016-06.io.spdk:cnode9451", 00:12:31.493 "tgt_name": "foobar", 00:12:31.493 "method": "nvmf_create_subsystem", 00:12:31.493 "req_id": 1 00:12:31.493 } 00:12:31.493 Got JSON-RPC error response 00:12:31.493 response: 00:12:31.493 { 00:12:31.493 "code": -32603, 00:12:31.493 "message": "Unable to find target foobar" 00:12:31.493 } 
== *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:12:31.493 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:12:31.493 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode28027 00:12:31.751 [2024-07-13 07:59:23.331668] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode28027: invalid serial number 'SPDKISFASTANDAWESOME' 00:12:31.751 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:12:31.751 { 00:12:31.751 "nqn": "nqn.2016-06.io.spdk:cnode28027", 00:12:31.751 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:31.751 "method": "nvmf_create_subsystem", 00:12:31.751 "req_id": 1 00:12:31.751 } 00:12:31.751 Got JSON-RPC error response 00:12:31.751 response: 00:12:31.751 { 00:12:31.751 "code": -32602, 00:12:31.751 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:31.751 }' 00:12:31.751 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:12:31.751 { 00:12:31.751 "nqn": "nqn.2016-06.io.spdk:cnode28027", 00:12:31.751 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:31.751 "method": "nvmf_create_subsystem", 00:12:31.751 "req_id": 1 00:12:31.751 } 00:12:31.751 Got JSON-RPC error response 00:12:31.751 response: 00:12:31.751 { 00:12:31.751 "code": -32602, 00:12:31.751 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:31.751 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:31.751 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:12:31.751 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode7205 00:12:32.008 [2024-07-13 07:59:23.576479] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7205: invalid model number 'SPDK_Controller' 00:12:32.008 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:12:32.008 { 00:12:32.008 "nqn": "nqn.2016-06.io.spdk:cnode7205", 00:12:32.008 "model_number": "SPDK_Controller\u001f", 00:12:32.008 "method": "nvmf_create_subsystem", 00:12:32.008 "req_id": 1 00:12:32.008 } 00:12:32.008 Got JSON-RPC error response 00:12:32.008 response: 00:12:32.008 { 00:12:32.008 "code": -32602, 00:12:32.008 "message": "Invalid MN SPDK_Controller\u001f" 00:12:32.008 }' 00:12:32.008 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:12:32.008 { 00:12:32.008 "nqn": "nqn.2016-06.io.spdk:cnode7205", 00:12:32.008 "model_number": "SPDK_Controller\u001f", 00:12:32.008 "method": "nvmf_create_subsystem", 00:12:32.008 "req_id": 1 00:12:32.008 } 00:12:32.008 Got JSON-RPC error response 00:12:32.008 response: 00:12:32.008 { 00:12:32.008 "code": -32602, 00:12:32.008 "message": "Invalid MN SPDK_Controller\u001f" 00:12:32.008 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:32.008 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:12:32.008 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:12:32.008 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' 
'87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:32.008 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:12:32.008 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:12:32.008 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:32.008 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:32.008 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:12:32.008 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:12:32.008 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:12:32.008 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:32.008 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:32.008 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:12:32.008 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:12:32.008 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:12:32.008 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:32.008 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:32.008 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:12:32.008 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:12:32.008 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:12:32.008 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:32.008 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:32.008 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:12:32.008 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:12:32.008 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:12:32.008 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:32.008 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:32.008 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:12:32.008 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:12:32.008 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:12:32.008 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:32.008 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:32.008 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:12:32.008 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:12:32.008 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:12:32.008 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:32.008 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:32.008 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:12:32.008 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:12:32.008 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:12:32.008 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:32.008 07:59:23 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:32.008 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:12:32.008 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:12:32.008 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:12:32.008 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:32.008 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:32.008 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:12:32.008 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:12:32.008 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:12:32.008 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:32.008 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:32.008 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:12:32.008 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:12:32.008 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:12:32.008 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:32.008 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:32.008 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:12:32.008 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:12:32.008 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 00:12:32.008 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:32.008 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:32.008 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:12:32.008 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:12:32.008 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:12:32.008 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:32.008 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:32.008 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:12:32.008 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:12:32.008 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:12:32.008 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:32.008 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:32.008 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:12:32.008 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:12:32.008 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:12:32.008 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:32.008 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:32.008 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:12:32.008 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:12:32.008 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:12:32.008 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:32.008 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:32.008 07:59:23 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:12:32.008 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:12:32.008 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 00:12:32.008 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:32.008 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:32.008 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:12:32.008 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:12:32.008 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:12:32.008 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:32.008 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:32.008 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:12:32.008 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:12:32.008 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:12:32.008 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:32.008 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:32.008 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:12:32.008 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:12:32.008 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:12:32.008 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:32.008 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:32.008 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:12:32.008 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:12:32.008 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:12:32.008 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:32.008 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:32.008 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:12:32.008 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:12:32.008 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:12:32.008 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:32.008 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:32.008 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ h == \- ]] 00:12:32.008 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'hlr0'\''"7Bum!@,]9?n73#R' 00:12:32.008 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'hlr0'\''"7Bum!@,]9?n73#R' nqn.2016-06.io.spdk:cnode23992 00:12:32.265 [2024-07-13 07:59:23.957728] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23992: invalid serial number 'hlr0'"7Bum!@,]9?n73#R' 00:12:32.265 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:12:32.265 { 00:12:32.265 "nqn": "nqn.2016-06.io.spdk:cnode23992", 00:12:32.265 "serial_number": "hlr0'\''\"7Bum!@,]9?n73#R", 00:12:32.265 "method": "nvmf_create_subsystem", 00:12:32.265 "req_id": 1 00:12:32.265 } 00:12:32.265 Got JSON-RPC error response 00:12:32.265 response: 00:12:32.265 { 
00:12:32.266 "code": -32602, 00:12:32.266 "message": "Invalid SN hlr0'\''\"7Bum!@,]9?n73#R" 00:12:32.266 }' 00:12:32.266 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:12:32.266 { 00:12:32.266 "nqn": "nqn.2016-06.io.spdk:cnode23992", 00:12:32.266 "serial_number": "hlr0'\"7Bum!@,]9?n73#R", 00:12:32.266 "method": "nvmf_create_subsystem", 00:12:32.266 "req_id": 1 00:12:32.266 } 00:12:32.266 Got JSON-RPC error response 00:12:32.266 response: 00:12:32.266 { 00:12:32.266 "code": -32602, 00:12:32.266 "message": "Invalid SN hlr0'\"7Bum!@,]9?n73#R" 00:12:32.266 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:32.266 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:12:32.266 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:12:32.266 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:32.266 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:12:32.266 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:12:32.266 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:32.266 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:32.266 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:12:32.266 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:12:32.266 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:12:32.266 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:32.266 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:32.266 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:12:32.266 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:12:32.266 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:12:32.266 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:32.266 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:32.266 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:12:32.266 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:12:32.266 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:12:32.266 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:32.266 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:32.266 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:12:32.266 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:12:32.266 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:12:32.266 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:32.266 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:32.523 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 
00:12:32.523 07:59:23 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:12:32.523 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:12:32.523 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:32.523 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:32.523 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:12:32.523 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:12:32.523 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:12:32.523 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:32.523 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:32.523 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:12:32.523 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:12:32.523 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:12:32.523 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:32.523 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:32.523 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:12:32.524 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:12:32.524 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:12:32.524 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:32.524 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:32.524 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:12:32.524 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:12:32.524 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:12:32.524 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:32.524 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:32.524 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:12:32.524 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:12:32.524 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:12:32.524 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:32.524 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:32.524 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:12:32.524 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:12:32.524 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:12:32.524 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:32.524 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:32.524 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:12:32.524 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:12:32.524 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:12:32.524 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:32.524 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:32.524 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:12:32.524 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 
00:12:32.524 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:12:32.524 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:32.524 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:32.524 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:12:32.524 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:12:32.524 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:12:32.524 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:32.524 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:32.524 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:12:32.524 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:12:32.524 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:12:32.524 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:32.524 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:32.524 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:12:32.524 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:12:32.524 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:12:32.524 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:32.524 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:32.524 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:12:32.524 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:12:32.524 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:12:32.524 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:32.524 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:32.524 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:12:32.524 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:12:32.524 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:12:32.524 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:32.524 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:32.524 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:12:32.524 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:12:32.524 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:12:32.524 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:32.524 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:32.524 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:12:32.524 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:12:32.524 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:12:32.524 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:32.524 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:32.524 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:12:32.524 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:12:32.524 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 
00:12:32.524 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:32.524 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:32.524 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:12:32.524 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:12:32.524 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:12:32.524 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:32.524 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:32.524 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:12:32.524 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:12:32.524 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:12:32.524 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:32.524 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:32.524 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:12:32.524 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:12:32.524 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:12:32.524 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:32.524 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:32.524 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:12:32.524 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:12:32.524 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:12:32.524 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:32.524 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:32.524 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:12:32.524 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:12:32.524 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:12:32.524 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:32.524 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:32.524 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:12:32.524 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:12:32.524 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:12:32.524 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:32.524 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:32.524 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:12:32.524 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:12:32.524 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:12:32.524 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:32.524 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:32.524 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:12:32.524 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:12:32.524 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:12:32.524 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:32.524 
07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:32.524 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:12:32.524 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:12:32.524 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:12:32.524 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:32.524 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:32.524 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:12:32.524 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:12:32.524 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:12:32.524 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:32.524 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:32.524 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:12:32.524 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:12:32.524 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:12:32.524 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:32.524 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:32.524 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:12:32.524 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:12:32.524 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:12:32.524 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:32.524 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:32.524 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:12:32.524 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:12:32.524 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 00:12:32.524 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:32.524 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:32.524 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:12:32.524 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:12:32.524 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:12:32.524 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:32.524 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:32.524 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:12:32.524 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:12:32.524 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:12:32.524 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:32.524 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:32.525 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:12:32.525 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:12:32.525 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
00:12:32.525 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:32.525 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:32.525 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:12:32.525 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:12:32.525 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:12:32.525 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:32.525 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:32.525 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:12:32.525 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:12:32.525 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:12:32.525 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:32.525 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:32.525 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:12:32.525 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:12:32.525 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:12:32.525 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:32.525 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:32.525 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:12:32.525 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:12:32.525 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:12:32.525 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:32.525 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:32.525 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ u == \- ]] 00:12:32.525 07:59:24 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'uL,+"3y\c'\''o<p,Ve1_,a|c8:30HFP_'\''mL?V_.{*08' 00:12:35.135 07:59:26 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:35.135 07:59:26 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:37.668 07:59:28 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:37.668 00:12:37.668 real 0m8.595s 00:12:37.668 user 0m20.413s 00:12:37.668 sys 0m2.371s 00:12:37.668 07:59:28 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:37.668 07:59:28 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:37.668 ************************************ 00:12:37.668 END TEST nvmf_invalid 00:12:37.668 ************************************ 00:12:37.668 07:59:28 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:37.668 07:59:28 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:12:37.668 07:59:28 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:37.668 07:59:28 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:37.668 07:59:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:37.668 ************************************ 00:12:37.668 START TEST nvmf_abort 00:12:37.668 ************************************ 00:12:37.668 07:59:28 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:12:37.668 * Looking for test storage...
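
The nvmf_invalid run traced above is a negative-test pass over the target's JSON-RPC parameter validation: each case feeds nvmf_create_subsystem a deliberately bad value (an unknown target name, a serial or model number carrying the non-printable byte 0x1f, or an over-length random string from gen_random_s, 21 bytes for a serial and 41 for a model number) and glob-matches the expected error message. A condensed sketch of that pattern, assuming a running nvmf target and SPDK's scripts/rpc.py on PATH (the NQN below is illustrative):

    # Expect "Unable to find target" for a bogus target name.
    out=$(rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode1 2>&1) || true
    [[ $out == *"Unable to find target"* ]] || { echo "unexpected: $out"; exit 1; }

    # gen_random_s, as traced above, assembles a string one character at a
    # time from the printable range 32..127 via printf %x / echo -e.
    gen_random_s() {
        local length=$1 ll c string=
        for (( ll = 0; ll < length; ll++ )); do
            printf -v c '\\x%x' $((RANDOM % 96 + 32))
            string+=$(echo -e "$c")
        done
        echo "$string"
    }
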
00:12:37.668 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:37.668 07:59:28 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:37.668 07:59:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:12:37.668 07:59:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:37.668 07:59:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:37.668 07:59:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:37.668 07:59:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:37.668 07:59:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:37.668 07:59:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:37.668 07:59:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:37.668 07:59:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:37.668 07:59:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:37.668 07:59:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:37.668 07:59:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:37.668 07:59:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:37.668 07:59:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:37.668 07:59:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:37.669 07:59:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:37.669 07:59:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:37.669 07:59:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:37.669 07:59:28 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:37.669 07:59:28 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:37.669 07:59:28 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:37.669 07:59:28 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.669 07:59:28 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:12:37.669 07:59:28 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.669 07:59:28 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:12:37.669 07:59:28 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.669 07:59:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:12:37.669 07:59:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:37.669 07:59:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:37.669 07:59:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:37.669 07:59:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:37.669 07:59:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:37.669 07:59:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:37.669 07:59:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:37.669 07:59:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:37.669 07:59:28 nvmf_tcp.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:37.669 07:59:28 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:12:37.669 07:59:28 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:12:37.669 07:59:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:37.669 07:59:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:37.669 07:59:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:37.669 07:59:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:37.669 07:59:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:37.669 07:59:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:37.669 07:59:28 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:37.669 07:59:28 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:37.669 07:59:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:37.669 07:59:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:37.669 07:59:28 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:12:37.669 07:59:28 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:39.571 07:59:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local 
intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:39.571 07:59:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:12:39.571 07:59:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:39.571 07:59:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:39.571 07:59:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:39.571 07:59:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:39.571 07:59:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:39.571 07:59:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:12:39.571 07:59:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:39.571 07:59:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:12:39.571 07:59:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:12:39.571 07:59:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:12:39.571 07:59:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:12:39.571 07:59:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:12:39.571 07:59:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:12:39.571 07:59:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:39.571 07:59:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:39.571 07:59:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:39.571 07:59:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:39.571 07:59:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:39.571 07:59:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:39.571 07:59:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:39.571 07:59:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:39.571 07:59:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:39.571 07:59:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:39.571 07:59:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:39.571 07:59:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:39.571 07:59:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:39.571 07:59:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:39.571 07:59:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:39.571 07:59:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:39.571 07:59:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:39.571 07:59:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:39.571 07:59:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:39.571 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:39.571 07:59:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:39.571 07:59:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:39.571 07:59:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:39.571 07:59:30 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:39.571 07:59:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:39.571 07:59:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:39.571 07:59:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:39.571 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:39.571 07:59:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:39.571 07:59:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:39.571 07:59:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:39.571 07:59:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:39.571 07:59:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:39.571 07:59:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:39.571 07:59:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:39.571 07:59:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:39.571 07:59:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:39.571 07:59:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:39.571 07:59:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:39.571 07:59:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:39.571 07:59:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:39.571 07:59:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:39.571 07:59:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:39.571 07:59:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:39.571 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:39.571 07:59:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:39.571 07:59:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:39.571 07:59:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:39.571 07:59:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:39.571 07:59:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:39.571 07:59:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:39.571 07:59:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:39.571 07:59:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:39.571 07:59:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:39.571 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:39.571 07:59:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:39.571 07:59:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:39.571 07:59:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:12:39.571 07:59:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:39.571 07:59:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:39.571 07:59:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:39.571 07:59:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- 
# NVMF_INITIATOR_IP=10.0.0.1 00:12:39.571 07:59:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:39.571 07:59:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:39.572 07:59:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:39.572 07:59:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:39.572 07:59:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:39.572 07:59:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:39.572 07:59:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:39.572 07:59:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:39.572 07:59:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:39.572 07:59:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:39.572 07:59:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:39.572 07:59:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:39.572 07:59:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:39.572 07:59:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:39.572 07:59:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:39.572 07:59:30 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:39.572 07:59:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:39.572 07:59:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:39.572 07:59:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:39.572 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:39.572 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.181 ms 00:12:39.572 00:12:39.572 --- 10.0.0.2 ping statistics --- 00:12:39.572 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:39.572 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:12:39.572 07:59:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:39.572 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:39.572 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.106 ms 00:12:39.572 00:12:39.572 --- 10.0.0.1 ping statistics --- 00:12:39.572 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:39.572 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:12:39.572 07:59:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:39.572 07:59:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:12:39.572 07:59:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:39.572 07:59:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:39.572 07:59:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:39.572 07:59:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:39.572 07:59:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:39.572 07:59:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:39.572 07:59:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:39.572 07:59:31 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:12:39.572 07:59:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:39.572 07:59:31 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:39.572 07:59:31 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:39.572 07:59:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=1893850 00:12:39.572 07:59:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:39.572 07:59:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 1893850 00:12:39.572 07:59:31 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@829 -- # '[' -z 1893850 ']' 00:12:39.572 07:59:31 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:39.572 07:59:31 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:39.572 07:59:31 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:39.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:39.572 07:59:31 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:39.572 07:59:31 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:39.572 [2024-07-13 07:59:31.128501] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:12:39.572 [2024-07-13 07:59:31.128574] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:39.572 EAL: No free 2048 kB hugepages reported on node 1 00:12:39.572 [2024-07-13 07:59:31.200912] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:39.572 [2024-07-13 07:59:31.299860] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:39.572 [2024-07-13 07:59:31.299924] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:39.572 [2024-07-13 07:59:31.299940] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:39.572 [2024-07-13 07:59:31.299954] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:39.572 [2024-07-13 07:59:31.299966] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:39.572 [2024-07-13 07:59:31.300056] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:39.572 [2024-07-13 07:59:31.300112] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:39.572 [2024-07-13 07:59:31.300116] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:39.831 07:59:31 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:39.831 07:59:31 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@862 -- # return 0 00:12:39.831 07:59:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:39.831 07:59:31 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:39.831 07:59:31 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:39.831 07:59:31 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:39.831 07:59:31 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:12:39.831 07:59:31 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:39.831 07:59:31 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:39.831 [2024-07-13 07:59:31.440816] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:39.831 07:59:31 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:39.831 07:59:31 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:12:39.831 07:59:31 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:39.831 07:59:31 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:39.831 Malloc0 00:12:39.831 07:59:31 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:39.831 07:59:31 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:39.831 07:59:31 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:39.831 07:59:31 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:39.831 Delay0 00:12:39.831 07:59:31 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:39.831 07:59:31 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:39.831 07:59:31 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:39.831 07:59:31 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:39.831 07:59:31 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:39.831 07:59:31 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:12:39.831 07:59:31 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:39.831 07:59:31 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:39.831 07:59:31 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:39.831 07:59:31 
nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:12:39.831 07:59:31 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:39.831 07:59:31 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:39.831 [2024-07-13 07:59:31.506884] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:39.831 07:59:31 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:39.831 07:59:31 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:39.831 07:59:31 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:39.831 07:59:31 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:39.831 07:59:31 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:39.831 07:59:31 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:12:39.831 EAL: No free 2048 kB hugepages reported on node 1 00:12:40.089 [2024-07-13 07:59:31.655039] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:12:42.618 Initializing NVMe Controllers 00:12:42.618 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:12:42.618 controller IO queue size 128 less than required 00:12:42.618 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:12:42.618 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:12:42.618 Initialization complete. Launching workers. 
00:12:42.618 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 31466 00:12:42.618 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 31527, failed to submit 62 00:12:42.618 success 31470, unsuccess 57, failed 0 00:12:42.618 07:59:33 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:42.618 07:59:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:42.618 07:59:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:42.618 07:59:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:42.618 07:59:33 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:12:42.618 07:59:33 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:12:42.618 07:59:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:42.618 07:59:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:12:42.618 07:59:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:42.618 07:59:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:12:42.618 07:59:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:42.618 07:59:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:42.618 rmmod nvme_tcp 00:12:42.618 rmmod nvme_fabrics 00:12:42.618 rmmod nvme_keyring 00:12:42.618 07:59:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:42.618 07:59:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:12:42.618 07:59:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:12:42.618 07:59:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 1893850 ']' 00:12:42.618 07:59:33 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 1893850 00:12:42.618 07:59:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@948 -- # '[' -z 1893850 ']' 00:12:42.618 07:59:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # kill -0 1893850 00:12:42.618 07:59:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # uname 00:12:42.618 07:59:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:42.618 07:59:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1893850 00:12:42.618 07:59:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:12:42.618 07:59:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:12:42.618 07:59:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1893850' 00:12:42.618 killing process with pid 1893850 00:12:42.618 07:59:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@967 -- # kill 1893850 00:12:42.618 07:59:33 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@972 -- # wait 1893850 00:12:42.618 07:59:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:42.618 07:59:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:42.618 07:59:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:42.618 07:59:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:42.618 07:59:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:42.618 07:59:34 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:42.618 07:59:34 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:42.618 07:59:34 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:44.519 07:59:36 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:44.519 00:12:44.519 real 0m7.240s 00:12:44.519 user 0m10.312s 00:12:44.519 sys 0m2.696s 00:12:44.519 07:59:36 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:44.519 07:59:36 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:12:44.519 ************************************ 00:12:44.519 END TEST nvmf_abort 00:12:44.519 ************************************ 00:12:44.519 07:59:36 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:44.520 07:59:36 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:12:44.520 07:59:36 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:44.520 07:59:36 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:44.520 07:59:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:44.520 ************************************ 00:12:44.520 START TEST nvmf_ns_hotplug_stress 00:12:44.520 ************************************ 00:12:44.520 07:59:36 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:12:44.520 * Looking for test storage... 00:12:44.520 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:44.520 07:59:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:44.520 07:59:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:12:44.520 07:59:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:44.520 07:59:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:44.520 07:59:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:44.520 07:59:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:44.520 07:59:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:44.520 07:59:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:44.520 07:59:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:44.520 07:59:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:44.520 07:59:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:44.520 07:59:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:44.520 07:59:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:44.520 07:59:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:44.520 07:59:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:44.520 07:59:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:44.520 07:59:36 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:44.520 07:59:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:44.520 07:59:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:44.520 07:59:36 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:44.520 07:59:36 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:44.520 07:59:36 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:44.520 07:59:36 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:44.520 07:59:36 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:44.520 07:59:36 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:44.520 07:59:36 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:12:44.520 07:59:36 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:44.520 07:59:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:12:44.520 07:59:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:44.520 07:59:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:44.520 07:59:36 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:44.520 07:59:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:44.520 07:59:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:44.520 07:59:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:44.520 07:59:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:44.520 07:59:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:44.520 07:59:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:44.520 07:59:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:12:44.520 07:59:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:44.520 07:59:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:44.520 07:59:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:44.520 07:59:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:44.520 07:59:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:44.520 07:59:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:44.520 07:59:36 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:44.520 07:59:36 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:44.520 07:59:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:44.520 07:59:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:44.520 07:59:36 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:12:44.520 07:59:36 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:12:47.053 07:59:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:47.053 07:59:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:12:47.053 07:59:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:47.053 07:59:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:47.053 07:59:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:47.053 07:59:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:47.053 07:59:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:47.053 07:59:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:12:47.053 07:59:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:47.053 07:59:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:12:47.053 07:59:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:12:47.053 07:59:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:12:47.053 07:59:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:12:47.053 07:59:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:12:47.053 07:59:38 nvmf_tcp.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@298 -- # local -ga mlx 00:12:47.053 07:59:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:47.053 07:59:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:47.053 07:59:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:47.053 07:59:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:47.053 07:59:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:47.053 07:59:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:47.053 07:59:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:47.053 07:59:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:47.053 07:59:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:47.053 07:59:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:47.053 07:59:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:47.053 07:59:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:47.053 07:59:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:47.053 07:59:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:47.053 07:59:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:47.053 07:59:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:47.053 07:59:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:47.053 07:59:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:47.053 07:59:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:47.053 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:47.053 07:59:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:47.053 07:59:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:47.053 07:59:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:47.053 07:59:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:47.054 07:59:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:47.054 07:59:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:47.054 07:59:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:47.054 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:47.054 07:59:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:47.054 07:59:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:47.054 07:59:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:47.054 07:59:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:47.054 07:59:38 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:47.054 07:59:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:47.054 07:59:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:47.054 07:59:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:47.054 07:59:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:47.054 07:59:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:47.054 07:59:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:47.054 07:59:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:47.054 07:59:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:47.054 07:59:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:47.054 07:59:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:47.054 07:59:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:47.054 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:47.054 07:59:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:47.054 07:59:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:47.054 07:59:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:47.054 07:59:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:47.054 07:59:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:47.054 07:59:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:47.054 07:59:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:47.054 07:59:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:47.054 07:59:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:47.054 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:47.054 07:59:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:47.054 07:59:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:47.054 07:59:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:12:47.054 07:59:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:47.054 07:59:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:47.054 07:59:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:47.054 07:59:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:47.054 07:59:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:47.054 07:59:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:47.054 07:59:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:47.054 07:59:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:47.054 07:59:38 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:47.054 07:59:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:47.054 07:59:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:47.054 07:59:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:47.054 07:59:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:47.054 07:59:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:47.054 07:59:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:47.054 07:59:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:47.054 07:59:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:47.054 07:59:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:47.054 07:59:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:47.054 07:59:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:47.054 07:59:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:47.054 07:59:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:47.054 07:59:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:47.054 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:47.054 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.234 ms 00:12:47.054 00:12:47.054 --- 10.0.0.2 ping statistics --- 00:12:47.054 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:47.054 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:12:47.054 07:59:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:47.054 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:47.054 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.169 ms 00:12:47.054 00:12:47.054 --- 10.0.0.1 ping statistics --- 00:12:47.054 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:47.054 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:12:47.054 07:59:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:47.054 07:59:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:12:47.054 07:59:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:47.054 07:59:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:47.054 07:59:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:47.054 07:59:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:47.054 07:59:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:47.054 07:59:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:47.054 07:59:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:47.054 07:59:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:12:47.054 07:59:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:47.054 07:59:38 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:47.054 07:59:38 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:12:47.054 07:59:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=1896138 00:12:47.054 07:59:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:47.054 07:59:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 1896138 00:12:47.054 07:59:38 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@829 -- # '[' -z 1896138 ']' 00:12:47.054 07:59:38 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:47.054 07:59:38 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:47.054 07:59:38 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:47.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:47.054 07:59:38 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:47.054 07:59:38 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:12:47.054 [2024-07-13 07:59:38.445837] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
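By this point nvmf/common.sh has carved the two ice ports into a small test network and nvmfappstart has launched the target inside the namespace; the EAL output that follows comes from that process. Condensed into a sketch (all commands verbatim from the xtrace; cvl_0_0/cvl_0_1 are the renamed E810 ports found during the PCI discovery above):

  # One port becomes the target NIC inside a private namespace,
  # the other stays in the root namespace as the initiator NIC.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # Bidirectional reachability check (the pings above), then start the target in
  # the namespace with all tracepoint groups enabled on cores 1-3 (-m 0xE).
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE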
00:12:47.054 [2024-07-13 07:59:38.445941] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:47.055 EAL: No free 2048 kB hugepages reported on node 1 00:12:47.055 [2024-07-13 07:59:38.525458] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:47.055 [2024-07-13 07:59:38.623227] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:47.055 [2024-07-13 07:59:38.623295] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:47.055 [2024-07-13 07:59:38.623311] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:47.055 [2024-07-13 07:59:38.623325] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:47.055 [2024-07-13 07:59:38.623337] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:47.055 [2024-07-13 07:59:38.626891] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:47.055 [2024-07-13 07:59:38.626958] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:47.055 [2024-07-13 07:59:38.626962] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:47.055 07:59:38 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:47.055 07:59:38 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # return 0 00:12:47.055 07:59:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:47.055 07:59:38 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:47.055 07:59:38 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:12:47.055 07:59:38 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:47.055 07:59:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:12:47.055 07:59:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:47.313 [2024-07-13 07:59:38.991105] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:47.313 07:59:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:47.571 07:59:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:47.829 [2024-07-13 07:59:39.490645] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:47.829 07:59:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:48.087 07:59:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b 
Malloc0 00:12:48.345 Malloc0 00:12:48.345 07:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:48.604 Delay0 00:12:48.604 07:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:48.863 07:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:12:49.127 NULL1 00:12:49.127 07:59:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:12:49.383 07:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1896439 00:12:49.383 07:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:12:49.383 07:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1896439 00:12:49.383 07:59:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:49.383 EAL: No free 2048 kB hugepages reported on node 1 00:12:50.752 Read completed with error (sct=0, sc=11) 00:12:50.752 07:59:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:50.752 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:50.752 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:50.752 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:50.752 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:50.752 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:50.752 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:50.752 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:51.009 07:59:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:12:51.009 07:59:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:12:51.266 true 00:12:51.266 07:59:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1896439 00:12:51.266 07:59:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:51.827 07:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:52.084 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:52.339 07:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:12:52.340 07:59:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:12:52.597 true 00:12:52.597 07:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1896439 00:12:52.597 07:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:52.854 07:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:52.854 07:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:12:52.854 07:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:12:53.110 true 00:12:53.110 07:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1896439 00:12:53.110 07:59:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:53.366 07:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:53.621 07:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:12:53.621 07:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:12:53.878 true 00:12:53.878 07:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1896439 00:12:53.878 07:59:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:55.249 07:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:55.249 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:55.249 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:55.249 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:55.249 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:55.249 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:55.249 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:55.249 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:55.249 07:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:12:55.249 07:59:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:12:55.506 true 00:12:55.506 07:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1896439 00:12:55.506 07:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:56.436 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:56.436 07:59:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:56.694 07:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:12:56.694 07:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:12:56.951 true 00:12:56.951 07:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1896439 00:12:56.951 07:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:57.208 07:59:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:57.491 07:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:12:57.491 07:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:12:57.748 true 00:12:57.748 07:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1896439 00:12:57.748 07:59:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:58.681 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:58.681 07:59:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:58.681 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:58.681 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:58.938 07:59:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:12:58.938 07:59:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:12:59.195 true 00:12:59.195 07:59:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1896439 00:12:59.195 07:59:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:59.452 07:59:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:59.709 07:59:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:12:59.709 07:59:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:12:59.967 
true 00:12:59.967 07:59:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1896439 00:12:59.967 07:59:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:00.900 07:59:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:00.900 07:59:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:13:00.900 07:59:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:13:01.158 true 00:13:01.158 07:59:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1896439 00:13:01.158 07:59:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:01.414 07:59:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:01.672 07:59:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:13:01.672 07:59:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:13:01.934 true 00:13:01.934 07:59:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1896439 00:13:01.934 07:59:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:02.191 07:59:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:02.448 07:59:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:13:02.448 07:59:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:13:02.705 true 00:13:02.705 07:59:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1896439 00:13:02.705 07:59:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:04.076 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:04.076 07:59:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:04.076 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:04.076 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:04.076 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:04.076 07:59:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:13:04.076 07:59:55 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:13:04.333 true 00:13:04.333 07:59:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1896439 00:13:04.333 07:59:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:04.590 07:59:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:04.848 07:59:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:13:04.848 07:59:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:13:05.105 true 00:13:05.105 07:59:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1896439 00:13:05.105 07:59:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:06.037 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:06.037 07:59:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:06.037 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:06.037 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:06.294 07:59:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:13:06.294 07:59:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:13:06.551 true 00:13:06.551 07:59:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1896439 00:13:06.551 07:59:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:06.807 07:59:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:07.065 07:59:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:13:07.065 07:59:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:13:07.323 true 00:13:07.323 07:59:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1896439 00:13:07.323 07:59:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:08.256 07:59:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:08.513 08:00:00 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:13:08.513 08:00:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:13:08.769 true 00:13:08.769 08:00:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1896439 00:13:08.769 08:00:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:09.027 08:00:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:09.284 08:00:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:13:09.285 08:00:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:13:09.541 true 00:13:09.541 08:00:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1896439 00:13:09.542 08:00:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:09.800 08:00:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:10.057 08:00:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:13:10.057 08:00:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:13:10.312 true 00:13:10.312 08:00:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1896439 00:13:10.312 08:00:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:11.242 08:00:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:11.242 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:11.242 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:11.499 08:00:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:13:11.499 08:00:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:13:11.756 true 00:13:11.756 08:00:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1896439 00:13:11.756 08:00:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:12.013 08:00:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:12.271 
08:00:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:13:12.271 08:00:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:13:12.528 true 00:13:12.528 08:00:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1896439 00:13:12.528 08:00:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:13.456 08:00:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:13.456 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:13.714 08:00:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:13:13.714 08:00:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:13:13.714 true 00:13:13.972 08:00:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1896439 00:13:13.972 08:00:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:13.972 08:00:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:14.231 08:00:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:13:14.231 08:00:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:13:14.489 true 00:13:14.489 08:00:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1896439 00:13:14.489 08:00:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:15.419 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:15.676 08:00:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:15.932 08:00:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:13:15.932 08:00:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:13:16.188 true 00:13:16.188 08:00:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1896439 00:13:16.188 08:00:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:16.444 08:00:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
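Every iteration in this stretch of the log repeats the same hotplug cycle from ns_hotplug_stress.sh while spdk_nvme_perf (launched earlier with -t 30 -q 128 -w randread) keeps I/O running against the subsystem. A minimal sketch of the loop, reconstructed from the @44-@50 trace markers (the exact statement order in the script may differ slightly; rpc_py is the script's alias for scripts/rpc.py):

  null_size=1000
  while kill -0 "$PERF_PID"; do    # cycle until the 30-second perf run exits
      $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
      null_size=$((null_size + 1))
      $rpc_py bdev_null_resize NULL1 "$null_size"    # grow NULL1 one unit per pass
      $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  done

The point of the exercise is that namespace attach/detach and bdev resize race against live host I/O; the "Message suppressed 999 times: Read completed with error (sct=0, sc=11)" lines are the host-side view of a namespace disappearing mid-I/O.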
00:13:16.700 08:00:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:13:16.700 08:00:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:13:16.957 true 00:13:16.957 08:00:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1896439 00:13:16.957 08:00:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:17.214 08:00:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:17.471 08:00:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:13:17.471 08:00:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:13:17.728 true 00:13:17.728 08:00:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1896439 00:13:17.728 08:00:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:18.656 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:18.656 08:00:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:18.913 08:00:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:13:18.913 08:00:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:13:19.169 true 00:13:19.169 08:00:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1896439 00:13:19.169 08:00:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:19.426 08:00:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:19.683 08:00:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:13:19.683 08:00:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:13:19.683 Initializing NVMe Controllers 00:13:19.683 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:19.683 Controller IO queue size 128, less than required. 00:13:19.683 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:19.683 Controller IO queue size 128, less than required. 00:13:19.683 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:13:19.683 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:13:19.683 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:13:19.683 Initialization complete. Launching workers.
00:13:19.683 ========================================================
00:13:19.683                                                                     Latency(us)
00:13:19.683 Device Information                                             :     IOPS    MiB/s   Average        min        max
00:13:19.683 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:  1123.40     0.55  56772.55    2610.49 1011836.10
00:13:19.683 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 10426.27     5.09  12276.52    1829.27  452338.13
00:13:19.683 ========================================================
00:13:19.683 Total                                                          : 11549.67     5.64  16604.50    1829.27 1011836.10
00:13:19.683
00:13:19.940 true 00:13:19.940 08:00:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1896439 00:13:19.940 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1896439) - No such process 00:13:19.940 08:00:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1896439 00:13:19.940 08:00:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:20.196 08:00:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:20.453 08:00:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:13:20.453 08:00:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:13:20.453 08:00:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:13:20.453 08:00:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:20.453 08:00:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:13:20.710 null0 00:13:20.710 08:00:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:20.710 08:00:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:20.710 08:00:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:13:20.710 null1 00:13:20.967 08:00:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:20.967 08:00:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:20.967 08:00:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:13:20.967 null2 00:13:21.224 08:00:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:21.224 08:00:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:21.224 08:00:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:13:21.224 null3 00:13:21.224 08:00:12
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:21.224 08:00:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:21.224 08:00:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:13:21.481 null4 00:13:21.481 08:00:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:21.481 08:00:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:21.481 08:00:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:13:21.738 null5 00:13:21.738 08:00:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:21.738 08:00:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:21.739 08:00:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:13:21.997 null6 00:13:21.997 08:00:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:21.997 08:00:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:21.997 08:00:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:13:22.256 null7 00:13:22.256 08:00:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:22.256 08:00:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:22.256 08:00:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:13:22.256 08:00:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:22.256 08:00:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:22.256 08:00:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:22.256 08:00:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:13:22.256 08:00:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:22.256 08:00:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:13:22.256 08:00:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:22.256 08:00:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:22.256 08:00:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:22.256 08:00:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
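[Annotation] With the single-namespace phase finished, the script builds eight 100 MiB null bdevs with 4096-byte blocks (@58-@60) and fans out one add_remove worker per bdev (@62-@64), reaping them all with the wait at @66. The shape of that fan-out, reconstructed from the trace records above and below (rpc.py again abbreviates the full scripts/rpc.py path):

    nthreads=8                                    # @58
    pids=()
    for ((i = 0; i < nthreads; i++)); do          # @59
        rpc.py bdev_null_create "null$i" 100 4096 # @60: name, size in MiB, block size in bytes
    done
    for ((i = 0; i < nthreads; i++)); do          # @62
        add_remove $((i + 1)) "null$i" &          # @63: nsid 1..8 paired with null0..null7
        pids+=($!)                                # @64: remember each worker
    done
    wait "${pids[@]}"                             # @66: reap all eight workers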
00:13:22.256 08:00:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:13:22.256 08:00:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:22.256 08:00:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:22.256 08:00:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:13:22.256 08:00:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:22.256 08:00:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:22.256 08:00:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:22.256 08:00:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:22.256 08:00:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:13:22.256 08:00:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:22.256 08:00:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:13:22.256 08:00:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:22.256 08:00:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:22.256 08:00:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:22.256 08:00:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:22.256 08:00:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:22.256 08:00:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:13:22.256 08:00:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:22.256 08:00:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:13:22.256 08:00:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:22.256 08:00:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:22.256 08:00:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:22.256 08:00:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:22.256 08:00:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:13:22.256 08:00:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:13:22.256 08:00:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:22.256 08:00:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:22.256 08:00:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:13:22.256 08:00:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:22.256 08:00:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:22.256 08:00:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:22.256 08:00:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:22.256 08:00:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:13:22.256 08:00:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:22.256 08:00:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:13:22.256 08:00:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:22.256 08:00:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:22.256 08:00:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:22.256 08:00:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:22.256 08:00:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:22.256 08:00:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:13:22.256 08:00:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:22.256 08:00:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:13:22.256 08:00:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:22.256 08:00:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:22.256 08:00:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:22.256 08:00:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:22.256 08:00:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
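[Annotation] Every worker runs the same ten-round attach/detach loop; its body can be read off the @14-@18 markers that repeat through the interleaved trace (add_remove is the script's own helper name):

    add_remove() {
        local nsid=$1 bdev=$2                     # @14
        for ((i = 0; i < 10; i++)); do            # @16
            rpc.py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"  # @17: attach
            rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"          # @18: detach
        done
    }

Eight of these running concurrently against one subsystem is what shuffles the add/remove ordering in the records that follow.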
00:13:22.256 08:00:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:13:22.256 08:00:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:22.256 08:00:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:13:22.256 08:00:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:22.256 08:00:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:22.256 08:00:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1901086 1901087 1901089 1901091 1901093 1901095 1901097 1901099 00:13:22.256 08:00:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:22.256 08:00:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:22.514 08:00:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:22.514 08:00:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:22.514 08:00:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:22.514 08:00:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:22.514 08:00:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:22.514 08:00:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:22.514 08:00:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:22.514 08:00:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:22.773 08:00:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:22.773 08:00:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:22.773 08:00:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:22.773 08:00:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:22.773 08:00:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:22.773 08:00:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:22.773 08:00:14 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:22.773 08:00:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:22.773 08:00:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:22.773 08:00:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:22.773 08:00:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:22.773 08:00:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:22.773 08:00:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:22.773 08:00:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:22.773 08:00:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:22.773 08:00:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:22.773 08:00:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:22.773 08:00:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:22.773 08:00:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:22.773 08:00:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:22.773 08:00:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:22.773 08:00:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:22.773 08:00:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:22.773 08:00:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:23.030 08:00:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:23.030 08:00:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:23.030 08:00:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:23.287 08:00:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:23.287 08:00:14 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:23.287 08:00:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:23.287 08:00:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:23.287 08:00:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:23.544 08:00:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:23.544 08:00:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:23.544 08:00:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:23.544 08:00:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:23.544 08:00:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:23.544 08:00:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:23.544 08:00:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:23.544 08:00:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:23.544 08:00:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:23.544 08:00:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:23.544 08:00:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:23.544 08:00:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:23.544 08:00:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:23.544 08:00:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:23.544 08:00:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:23.544 08:00:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:23.544 08:00:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:23.544 08:00:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:23.544 08:00:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:23.544 08:00:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i 
< 10 )) 00:13:23.544 08:00:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:23.545 08:00:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:23.545 08:00:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:23.545 08:00:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:23.802 08:00:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:23.802 08:00:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:23.802 08:00:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:23.802 08:00:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:23.802 08:00:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:23.802 08:00:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:23.802 08:00:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:23.802 08:00:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:24.068 08:00:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:24.068 08:00:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:24.068 08:00:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:24.068 08:00:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:24.068 08:00:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:24.068 08:00:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:24.068 08:00:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:24.068 08:00:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:24.068 08:00:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:24.068 08:00:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:24.068 08:00:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:24.068 08:00:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:24.068 08:00:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:24.068 08:00:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:24.068 08:00:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:24.068 08:00:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:24.068 08:00:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:24.068 08:00:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:24.068 08:00:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:24.068 08:00:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:24.068 08:00:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:24.068 08:00:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:24.068 08:00:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:24.068 08:00:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:24.329 08:00:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:24.329 08:00:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:24.329 08:00:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:24.329 08:00:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:24.330 08:00:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:24.330 08:00:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:24.330 08:00:15 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:24.330 08:00:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:24.587 08:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:24.587 08:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:24.587 08:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:24.587 08:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:24.587 08:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:24.587 08:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:24.587 08:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:24.587 08:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:24.587 08:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:24.587 08:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:24.587 08:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:24.587 08:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:24.587 08:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:24.587 08:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:24.587 08:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:24.587 08:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:24.587 08:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:24.587 08:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:24.587 08:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:24.587 08:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:24.587 08:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:24.587 08:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:24.587 08:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:24.587 08:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:24.845 08:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:24.845 08:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:24.845 08:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:24.845 08:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:24.845 08:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:24.845 08:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:24.845 08:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:24.845 08:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:25.102 08:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:25.103 08:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:25.103 08:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:25.103 08:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:25.103 08:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:25.103 08:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:25.103 08:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:25.103 08:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:25.103 08:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:25.103 08:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:25.103 08:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:25.103 08:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:25.103 08:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:25.103 08:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:25.103 08:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:25.103 08:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:25.103 08:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:25.103 08:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:25.103 08:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:25.103 08:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:25.103 08:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:25.103 08:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:25.103 08:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:25.103 08:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:25.360 08:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:25.360 08:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:25.360 08:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:25.360 08:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:25.360 08:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:25.360 08:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:25.360 08:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:25.360 08:00:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:25.683 08:00:17 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:25.683 08:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:25.683 08:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:25.683 08:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:25.683 08:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:25.683 08:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:25.683 08:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:25.683 08:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:25.683 08:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:25.683 08:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:25.683 08:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:25.683 08:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:25.683 08:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:25.683 08:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:25.683 08:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:25.683 08:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:25.683 08:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:25.683 08:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:25.683 08:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:25.683 08:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:25.683 08:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:25.683 08:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:25.683 08:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:25.683 08:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:25.942 08:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:25.942 08:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:25.942 08:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:25.942 08:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:25.942 08:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:25.942 08:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:25.942 08:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:25.942 08:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:26.199 08:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:26.199 08:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:26.199 08:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:26.199 08:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:26.199 08:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:26.199 08:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:26.199 08:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:26.199 08:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:26.199 08:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:26.199 08:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:26.199 08:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:26.199 08:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:26.199 08:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:26.199 08:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:26.199 08:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:26.199 08:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:26.199 08:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:26.199 08:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:26.199 08:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:26.199 08:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:26.199 08:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:26.199 08:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:26.199 08:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:26.199 08:00:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:26.457 08:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:26.457 08:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:26.457 08:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:26.457 08:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:26.457 08:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:26.457 08:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:26.457 08:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:26.457 08:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:26.715 08:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:26.715 08:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:26.715 08:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:26.715 08:00:18 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:26.715 08:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:26.715 08:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:26.715 08:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:26.715 08:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:26.715 08:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:26.715 08:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:26.715 08:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:26.715 08:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:26.715 08:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:26.715 08:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:26.715 08:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:26.715 08:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:26.715 08:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:26.715 08:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:26.715 08:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:26.715 08:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:26.715 08:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:26.715 08:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:26.715 08:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:26.715 08:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:26.974 08:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:26.974 08:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:26.974 08:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:26.974 08:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:26.974 08:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:26.974 08:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:26.974 08:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:26.974 08:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:27.232 08:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:27.232 08:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:27.232 08:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:27.232 08:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:27.232 08:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:27.232 08:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:27.232 08:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:27.232 08:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:27.232 08:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:27.232 08:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:27.232 08:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:27.232 08:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:27.232 08:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:27.232 08:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:27.232 08:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:27.232 08:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:27.232 08:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:27.232 08:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:27.232 08:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:27.232 08:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:27.232 08:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:27.232 08:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:27.232 08:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:27.232 08:00:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:27.516 08:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:27.516 08:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:27.516 08:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:27.516 08:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:27.517 08:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:27.517 08:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:27.517 08:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:27.517 08:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:27.773 08:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:27.773 08:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:27.773 08:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:27.773 08:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:27.773 08:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:27.773 08:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:27.773 08:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:27.773 08:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:27.773 08:00:19 
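The @16/@17/@18 markers threading through the trace above are target/ns_hotplug_stress.sh: while host I/O runs against nqn.2016-06.io.spdk:cnode1, the script hot-adds the eight null bdevs as namespaces in randomized order, then detaches them again, and repeats. A minimal bash sketch of the visible pattern; the per-round shuffle and loop shape are assumptions inferred from the trace (the (( i < 10 )) at @16 bounds the rounds), not the exact script text:

    rpc=./scripts/rpc.py                      # the trace uses the full Jenkins workspace path
    subnqn=nqn.2016-06.io.spdk:cnode1
    for n in $(shuf -i 1-8); do               # hot-add nsid 1..8 in random order
        $rpc nvmf_subsystem_add_ns -n "$n" "$subnqn" "null$((n - 1))"    # @17
    done
    for n in $(shuf -i 1-8); do               # then hot-remove them, again shuffled
        $rpc nvmf_subsystem_remove_ns "$subnqn" "$n"                     # @18
    done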
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:27.773 08:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:27.773 08:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:27.773 08:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:27.773 08:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:27.773 08:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:27.773 08:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:27.773 08:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:27.773 08:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:13:27.773 08:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:13:27.773 08:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:27.773 08:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:13:27.773 08:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:27.773 08:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:13:27.773 08:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:27.773 08:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:27.773 rmmod nvme_tcp 00:13:27.773 rmmod nvme_fabrics 00:13:27.773 rmmod nvme_keyring 00:13:27.773 08:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:27.773 08:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:13:27.773 08:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:13:27.773 08:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 1896138 ']' 00:13:27.773 08:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 1896138 00:13:27.773 08:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@948 -- # '[' -z 1896138 ']' 00:13:27.773 08:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # kill -0 1896138 00:13:27.773 08:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # uname 00:13:27.773 08:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:27.773 08:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1896138 00:13:28.032 08:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:28.032 08:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:28.032 08:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1896138' 00:13:28.032 killing process with pid 1896138 00:13:28.032 08:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # kill 1896138 00:13:28.032 08:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # wait 1896138 00:13:28.032 08:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:28.032 08:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp 
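The teardown that follows is nvmftestfini (common.sh@117-125 and @488-490): flush outstanding I/O, retry unloading the kernel initiator modules until they go idle (the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines are modprobe's verbose output), then kill the nvmf_tgt reactor process. A hedged sketch of that shape, with the retry bound taken from the {1..20} in the trace and the break condition assumed:

    sync                                      # @117: settle outstanding I/O first
    set +e                                    # @120: unload may fail while connections drain
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && break      # pulls nvme_fabrics/nvme_keyring out with it
    done
    modprobe -v -r nvme-fabrics
    set -e
    kill "$nvmfpid"                           # killprocess: pid 1896138 in this run
    wait "$nvmfpid"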
== \t\c\p ]] 00:13:28.032 08:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:28.032 08:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:28.032 08:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:28.032 08:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:28.032 08:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:28.032 08:00:19 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:30.557 08:00:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:30.557 00:13:30.557 real 0m45.617s 00:13:30.557 user 3m28.802s 00:13:30.557 sys 0m15.894s 00:13:30.557 08:00:21 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:30.557 08:00:21 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:30.557 ************************************ 00:13:30.557 END TEST nvmf_ns_hotplug_stress 00:13:30.557 ************************************ 00:13:30.557 08:00:21 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:30.557 08:00:21 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:30.557 08:00:21 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:30.557 08:00:21 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:30.557 08:00:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:30.557 ************************************ 00:13:30.557 START TEST nvmf_connect_stress 00:13:30.557 ************************************ 00:13:30.557 08:00:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:30.557 * Looking for test storage... 
00:13:30.557 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:30.557 08:00:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:30.557 08:00:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:13:30.557 08:00:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:30.557 08:00:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:30.557 08:00:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:30.557 08:00:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:30.557 08:00:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:30.557 08:00:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:30.557 08:00:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:30.557 08:00:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:30.557 08:00:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:30.557 08:00:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:30.557 08:00:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:30.557 08:00:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:30.557 08:00:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:30.557 08:00:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:30.557 08:00:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:30.557 08:00:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:30.557 08:00:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:30.557 08:00:21 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:30.557 08:00:21 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:30.557 08:00:21 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:30.557 08:00:21 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:30.557 08:00:21 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:30.557 08:00:21 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:30.557 08:00:21 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:13:30.557 08:00:21 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:30.557 08:00:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:13:30.557 08:00:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:30.557 08:00:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:30.557 08:00:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:30.557 08:00:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:30.557 08:00:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:30.557 08:00:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:30.558 08:00:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:30.558 08:00:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:30.558 08:00:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:13:30.558 08:00:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:30.558 08:00:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:30.558 08:00:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:30.558 08:00:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:30.558 08:00:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:30.558 08:00:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:30.558 08:00:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:13:30.558 08:00:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:30.558 08:00:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:30.558 08:00:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:30.558 08:00:21 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:13:30.558 08:00:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:32.454 08:00:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:32.454 08:00:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:13:32.454 08:00:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:32.454 08:00:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:32.454 08:00:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:32.454 08:00:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:32.454 08:00:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:32.454 08:00:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:13:32.454 08:00:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:32.454 08:00:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:13:32.454 08:00:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:13:32.454 08:00:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:13:32.454 08:00:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:13:32.454 08:00:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:13:32.454 08:00:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:13:32.454 08:00:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:32.454 08:00:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:32.454 08:00:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:32.454 08:00:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:32.454 08:00:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:32.454 08:00:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:32.454 08:00:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:32.454 08:00:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:32.454 08:00:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:32.454 08:00:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:32.454 08:00:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:32.454 08:00:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:32.454 08:00:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:32.454 08:00:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:13:32.454 08:00:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:32.454 08:00:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:32.454 08:00:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:32.454 08:00:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:32.454 08:00:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:32.454 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:32.454 08:00:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:32.454 08:00:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:32.454 08:00:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:32.454 08:00:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:32.454 08:00:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:32.454 08:00:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:32.454 08:00:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:32.454 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:32.454 08:00:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:32.454 08:00:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:32.454 08:00:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:32.454 08:00:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:32.454 08:00:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:32.454 08:00:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:32.454 08:00:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:32.454 08:00:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:32.454 08:00:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:32.454 08:00:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:32.454 08:00:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:32.454 08:00:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:32.454 08:00:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:32.454 08:00:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:32.454 08:00:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:32.454 08:00:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:32.454 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:32.454 08:00:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:32.454 08:00:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:32.454 08:00:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:32.454 08:00:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:32.454 08:00:23 
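Device discovery here (common.sh@289-401) walks a table of NVMe-oF-capable NICs (Intel E810/X722, Mellanox ConnectX) and maps each matching PCI function to its kernel net device through sysfs; both ports of the E810 (device id 0x159b, driver ice) resolve to cvl_0_0 and cvl_0_1. The script reads a precomputed pci_bus_cache, but functionally it reduces to roughly this sketch, an approximation rather than the script's own code:

    for pci in $(lspci -Dmm -d 8086:159b | awk '{print $1}'); do   # E810 functions, per the trace
        for dev in "/sys/bus/pci/devices/$pci/net/"*; do
            echo "Found net devices under $pci: ${dev##*/}"        # cvl_0_0 / cvl_0_1 here
        done
    done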
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:32.454 08:00:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:32.454 08:00:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:32.455 08:00:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:32.455 08:00:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:32.455 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:32.455 08:00:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:32.455 08:00:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:32.455 08:00:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:13:32.455 08:00:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:32.455 08:00:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:32.455 08:00:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:32.455 08:00:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:32.455 08:00:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:32.455 08:00:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:32.455 08:00:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:32.455 08:00:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:32.455 08:00:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:32.455 08:00:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:32.455 08:00:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:32.455 08:00:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:32.455 08:00:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:32.455 08:00:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:32.455 08:00:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:32.455 08:00:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:32.455 08:00:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:32.455 08:00:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:32.455 08:00:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:32.455 08:00:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:32.455 08:00:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:32.455 08:00:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:32.455 08:00:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:32.455 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:32.455 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.169 ms 00:13:32.455 00:13:32.455 --- 10.0.0.2 ping statistics --- 00:13:32.455 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:32.455 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:13:32.455 08:00:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:32.455 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:32.455 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.171 ms 00:13:32.455 00:13:32.455 --- 10.0.0.1 ping statistics --- 00:13:32.455 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:32.455 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:13:32.455 08:00:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:32.455 08:00:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:13:32.455 08:00:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:32.455 08:00:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:32.455 08:00:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:32.455 08:00:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:32.455 08:00:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:32.455 08:00:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:32.455 08:00:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:32.455 08:00:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:13:32.455 08:00:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:32.455 08:00:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:32.455 08:00:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:32.455 08:00:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=1903842 00:13:32.455 08:00:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:32.455 08:00:23 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 1903842 00:13:32.455 08:00:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@829 -- # '[' -z 1903842 ']' 00:13:32.455 08:00:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:32.455 08:00:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:32.455 08:00:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:32.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:32.455 08:00:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:32.455 08:00:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:32.455 [2024-07-13 08:00:23.989308] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
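With two back-to-back ports of one physical NIC, nvmf_tcp_init (common.sh@229-268) builds the topology by pushing the target-side port into a private network namespace, so initiator and target traffic genuinely crosses the wire instead of short-circuiting through the local stack. The commands appear verbatim in the trace; collected here in order:

    ip -4 addr flush cvl_0_0                                   # @244-245: start from clean ports
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk                               # @248
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                  # @251: target port leaves the root ns
    ip addr add 10.0.0.1/24 dev cvl_0_1                        # @254: initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # @255: target side
    ip link set cvl_0_1 up                                     # @258-261
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # @264: open the NVMe/TCP port
    ping -c 1 10.0.0.2                                         # @267: root ns -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1           # @268: target ns -> initiator

The target binary is then launched inside that namespace (@480): ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0xE. The 0xE core mask (binary 1110) selects cores 1-3, matching the three reactors reported at startup.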
00:13:32.455 [2024-07-13 08:00:23.989395] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:32.455 EAL: No free 2048 kB hugepages reported on node 1 00:13:32.455 [2024-07-13 08:00:24.059034] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:32.455 [2024-07-13 08:00:24.148809] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:32.455 [2024-07-13 08:00:24.148882] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:32.455 [2024-07-13 08:00:24.148900] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:32.455 [2024-07-13 08:00:24.148914] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:32.455 [2024-07-13 08:00:24.148925] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:32.455 [2024-07-13 08:00:24.149012] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:32.455 [2024-07-13 08:00:24.149142] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:32.455 [2024-07-13 08:00:24.149146] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:32.713 08:00:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:32.713 08:00:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@862 -- # return 0 00:13:32.713 08:00:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:32.713 08:00:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:32.713 08:00:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:32.713 08:00:24 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:32.713 08:00:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:32.713 08:00:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:32.713 08:00:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:32.713 [2024-07-13 08:00:24.290439] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:32.713 08:00:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:32.713 08:00:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:32.713 08:00:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:32.713 08:00:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:32.713 08:00:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:32.713 08:00:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:32.713 08:00:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:32.713 08:00:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:32.713 [2024-07-13 08:00:24.319015] tcp.c: 
967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:32.713 08:00:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:32.713 08:00:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:32.713 08:00:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:32.713 08:00:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:32.713 NULL1 00:13:32.713 08:00:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:32.713 08:00:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=1903874 00:13:32.713 08:00:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:13:32.713 08:00:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:32.713 08:00:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:32.713 08:00:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:13:32.713 08:00:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:32.713 08:00:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:32.713 08:00:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:32.713 08:00:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:32.713 08:00:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:32.713 08:00:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:32.713 08:00:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:32.713 08:00:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:32.713 08:00:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:32.713 08:00:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:32.713 08:00:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:32.713 08:00:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:32.713 08:00:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:32.713 08:00:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:32.713 08:00:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:32.713 08:00:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:32.713 08:00:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:32.713 08:00:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:32.713 08:00:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:32.713 08:00:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:32.713 08:00:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 
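connect_stress.sh@15-18 then provisions the target over JSON-RPC: a TCP transport, one subsystem capped at 10 namespaces, a listener on the namespaced address, and a null backing bdev. The script issues these through the harness's rpc_cmd wrapper; shown here as direct rpc.py calls with the values from the trace:

    rpc=./scripts/rpc.py    # full Jenkins workspace path in the trace
    $rpc nvmf_create_transport -t tcp -o -u 8192                                            # @15
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10    # @16
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420  # @17
    $rpc bdev_null_create NULL1 1000 512                                                    # @18: 1000 MiB, 512 B blocks

With the target listening, @20-21 launch the binary that gives the test its name: connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10, i.e. continuous controller connect/disconnect against cnode1 for the 10-second -t runtime.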
00:13:32.713 08:00:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:32.713 EAL: No free 2048 kB hugepages reported on node 1 00:13:32.713 08:00:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:32.713 08:00:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:32.713 08:00:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:32.713 08:00:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:32.713 08:00:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:32.713 08:00:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:32.713 08:00:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:32.713 08:00:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:32.713 08:00:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:32.713 08:00:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:32.714 08:00:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:32.714 08:00:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:32.714 08:00:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:32.714 08:00:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:32.714 08:00:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:32.714 08:00:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:32.714 08:00:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:32.714 08:00:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:32.714 08:00:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1903874 00:13:32.714 08:00:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:32.714 08:00:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:32.714 08:00:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:32.971 08:00:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:32.971 08:00:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1903874 00:13:32.971 08:00:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:32.971 08:00:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:32.971 08:00:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:33.535 08:00:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:33.535 08:00:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1903874 00:13:33.535 08:00:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:33.535 08:00:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:33.535 08:00:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:33.792 08:00:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:33.792 08:00:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1903874 
00:13:33.792 08:00:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:33.792 08:00:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:33.792 08:00:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:34.048 08:00:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:34.048 08:00:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1903874 00:13:34.048 08:00:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:34.049 08:00:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:34.049 08:00:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:34.305 08:00:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:34.305 08:00:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1903874 00:13:34.305 08:00:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:34.305 08:00:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:34.305 08:00:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:34.869 08:00:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:34.869 08:00:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1903874 00:13:34.869 08:00:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:34.869 08:00:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:34.869 08:00:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:35.127 08:00:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:35.127 08:00:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1903874 00:13:35.127 08:00:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:35.127 08:00:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:35.127 08:00:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:35.383 08:00:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:35.383 08:00:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1903874 00:13:35.383 08:00:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:35.383 08:00:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:35.383 08:00:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:35.640 08:00:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:35.640 08:00:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1903874 00:13:35.640 08:00:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:35.640 08:00:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:35.640 08:00:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:35.896 08:00:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:35.896 08:00:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1903874 00:13:35.896 08:00:27 nvmf_tcp.nvmf_connect_stress -- 
target/connect_stress.sh@35 -- # rpc_cmd 00:13:35.897 08:00:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:35.897 08:00:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:36.469 08:00:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:36.469 08:00:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1903874 00:13:36.469 08:00:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:36.469 08:00:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:36.469 08:00:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:36.725 08:00:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:36.725 08:00:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1903874 00:13:36.725 08:00:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:36.725 08:00:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:36.725 08:00:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:36.982 08:00:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:36.982 08:00:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1903874 00:13:36.982 08:00:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:36.982 08:00:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:36.982 08:00:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:37.238 08:00:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:37.238 08:00:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1903874 00:13:37.238 08:00:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:37.238 08:00:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:37.238 08:00:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:37.494 08:00:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:37.494 08:00:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1903874 00:13:37.494 08:00:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:37.494 08:00:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:37.494 08:00:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:38.058 08:00:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:38.058 08:00:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1903874 00:13:38.058 08:00:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:38.058 08:00:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:38.058 08:00:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:38.315 08:00:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:38.315 08:00:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1903874 00:13:38.315 08:00:29 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:38.315 
08:00:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:38.315 08:00:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:38.571 08:00:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:38.571 08:00:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1903874 00:13:38.571 08:00:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:38.571 08:00:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:38.571 08:00:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:38.828 08:00:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:38.828 08:00:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1903874 00:13:38.828 08:00:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:38.828 08:00:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:38.828 08:00:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:39.102 08:00:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:39.102 08:00:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1903874 00:13:39.102 08:00:30 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:39.102 08:00:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:39.102 08:00:30 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:39.680 08:00:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:39.680 08:00:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1903874 00:13:39.680 08:00:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:39.680 08:00:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:39.680 08:00:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:39.937 08:00:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:39.937 08:00:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1903874 00:13:39.937 08:00:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:39.937 08:00:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:39.937 08:00:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:40.193 08:00:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:40.193 08:00:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1903874 00:13:40.193 08:00:31 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:40.193 08:00:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:40.194 08:00:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:40.450 08:00:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:40.450 08:00:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1903874 00:13:40.450 08:00:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:40.450 08:00:32 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:13:40.450 08:00:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:40.716 08:00:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:40.716 08:00:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1903874 00:13:40.716 08:00:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:40.716 08:00:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:40.716 08:00:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:41.286 08:00:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:41.286 08:00:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1903874 00:13:41.286 08:00:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:41.286 08:00:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:41.286 08:00:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:41.542 08:00:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:41.542 08:00:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1903874 00:13:41.542 08:00:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:41.542 08:00:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:41.543 08:00:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:41.800 08:00:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:41.800 08:00:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1903874 00:13:41.800 08:00:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:41.800 08:00:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:41.800 08:00:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:42.057 08:00:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:42.057 08:00:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1903874 00:13:42.057 08:00:33 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:42.057 08:00:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:42.057 08:00:33 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:42.313 08:00:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:42.313 08:00:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1903874 00:13:42.313 08:00:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:42.313 08:00:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:42.313 08:00:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:42.874 08:00:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:42.874 08:00:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1903874 00:13:42.875 08:00:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:42.875 08:00:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 
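The long run of alternating kill -0 1903874 / rpc_cmd pairs is connect_stress.sh@34-35: for as long as the stress binary stays alive, the script replays a batch of twenty queued RPC calls (assembled into rpc.txt by the @27-28 cat loop) so the target's RPC path is exercised concurrently with the connection churn. A hedged sketch; the method queued per line is not visible in the trace:

    rpcs=$testdir/rpc.txt
    rm -f "$rpcs"                                   # @25
    for i in $(seq 1 20); do                        # @27-28: queue 20 one-line RPC calls
        echo "some_rpc_method" >> "$rpcs"           # method name assumed, not in the trace
    done
    while kill -0 "$PERF_PID"; do                   # @34: stress tool still running?
        rpc_cmd < "$rpcs"                           # @35: replay the whole batch
    done
    wait "$PERF_PID"                                # @38: reap after kill -0 finally fails
    rm -f "$rpcs"                                   # @39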
00:13:42.875 08:00:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:42.875 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:43.132 08:00:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:43.132 08:00:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1903874 00:13:43.132 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1903874) - No such process 00:13:43.132 08:00:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 1903874 00:13:43.132 08:00:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:43.132 08:00:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:43.132 08:00:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:13:43.132 08:00:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:43.132 08:00:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:13:43.132 08:00:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:43.132 08:00:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:13:43.132 08:00:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:43.132 08:00:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:43.132 rmmod nvme_tcp 00:13:43.132 rmmod nvme_fabrics 00:13:43.132 rmmod nvme_keyring 00:13:43.132 08:00:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:43.132 08:00:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:13:43.132 08:00:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:13:43.132 08:00:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 1903842 ']' 00:13:43.132 08:00:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 1903842 00:13:43.132 08:00:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@948 -- # '[' -z 1903842 ']' 00:13:43.132 08:00:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # kill -0 1903842 00:13:43.132 08:00:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # uname 00:13:43.132 08:00:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:43.132 08:00:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1903842 00:13:43.132 08:00:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:43.132 08:00:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:43.132 08:00:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1903842' 00:13:43.132 killing process with pid 1903842 00:13:43.132 08:00:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@967 -- # kill 1903842 00:13:43.132 08:00:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@972 -- # wait 1903842 00:13:43.389 08:00:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:43.389 08:00:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:43.389 08:00:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # 
nvmf_tcp_fini 00:13:43.389 08:00:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:43.390 08:00:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:43.390 08:00:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:43.390 08:00:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:43.390 08:00:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:45.916 08:00:37 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:45.916 00:13:45.916 real 0m15.185s 00:13:45.916 user 0m38.188s 00:13:45.916 sys 0m5.890s 00:13:45.916 08:00:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:45.917 08:00:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:45.917 ************************************ 00:13:45.917 END TEST nvmf_connect_stress 00:13:45.917 ************************************ 00:13:45.917 08:00:37 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:45.917 08:00:37 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:45.917 08:00:37 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:45.917 08:00:37 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:45.917 08:00:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:45.917 ************************************ 00:13:45.917 START TEST nvmf_fused_ordering 00:13:45.917 ************************************ 00:13:45.917 08:00:37 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:45.917 * Looking for test storage... 
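The real/user/sys summary and the END/START banners above come from the suite's run_test wrapper, which times each test script, prints the banners, and propagates the script's exit status. A minimal bash sketch of that pattern, hedged as illustrative only (the actual helper in common/autotest_common.sh also checks its argument count and toggles xtrace, as the @1099/@1105 trace lines show):

    run_test() {    # usage: run_test <name> <command> [args...]
        local name=$1; shift
        echo '************************************'
        echo "START TEST $name"
        echo '************************************'
        time "$@"                  # bash keyword: emits the real/user/sys block seen above
        local rc=$?
        echo '************************************'
        echo "END TEST $name"
        echo '************************************'
        return $rc
    }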
00:13:45.917 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:45.917 08:00:37 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:45.917 08:00:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:13:45.917 08:00:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:45.917 08:00:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:45.917 08:00:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:45.917 08:00:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:45.917 08:00:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:45.917 08:00:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:45.917 08:00:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:45.917 08:00:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:45.917 08:00:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:45.917 08:00:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:45.917 08:00:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:45.917 08:00:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:45.917 08:00:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:45.917 08:00:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:45.917 08:00:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:45.917 08:00:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:45.917 08:00:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:45.917 08:00:37 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:45.917 08:00:37 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:45.917 08:00:37 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:45.917 08:00:37 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:45.917 08:00:37 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:[same toolchain PATH as printed at paths/export.sh@2 above, now with /opt/go/1.21.1/bin prepended; the repeated go/protoc/golangci prefixes are condensed here] 00:13:45.917 08:00:37 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[previous value with /opt/protoc/21.7/bin prepended; condensed] 00:13:45.917 08:00:37 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:13:45.917 08:00:37 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[the exported value, identical to the PATH just set; condensed] 00:13:45.917 08:00:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:13:45.917 08:00:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:45.917 08:00:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:45.917 08:00:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:45.917 08:00:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:45.917 08:00:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:45.917 08:00:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:45.917 08:00:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:45.917 08:00:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:45.917 08:00:37 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:13:45.917 08:00:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:45.917 08:00:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:45.917 08:00:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:45.917 08:00:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:45.917 08:00:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:45.917 08:00:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:45.917 08:00:37 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14>
/dev/null' 00:13:45.917 08:00:37 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:45.917 08:00:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:45.917 08:00:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:45.917 08:00:37 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:13:45.917 08:00:37 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:47.819 08:00:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:47.819 08:00:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:13:47.819 08:00:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:47.819 08:00:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:47.819 08:00:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:47.819 08:00:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:47.819 08:00:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:47.819 08:00:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:13:47.819 08:00:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:47.819 08:00:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:13:47.819 08:00:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:13:47.819 08:00:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:13:47.819 08:00:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:13:47.819 08:00:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:13:47.819 08:00:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:13:47.819 08:00:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:47.819 08:00:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:47.819 08:00:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:47.820 08:00:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:47.820 08:00:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:47.820 08:00:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:47.820 08:00:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:47.820 08:00:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:47.820 08:00:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:47.820 08:00:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:47.820 08:00:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:47.820 08:00:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:47.820 08:00:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:47.820 08:00:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:13:47.820 08:00:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:47.820 08:00:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:47.820 08:00:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:47.820 08:00:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:47.820 08:00:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:47.820 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:47.820 08:00:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:47.820 08:00:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:47.820 08:00:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:47.820 08:00:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:47.820 08:00:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:47.820 08:00:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:47.820 08:00:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:47.820 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:47.820 08:00:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:47.820 08:00:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:47.820 08:00:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:47.820 08:00:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:47.820 08:00:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:47.820 08:00:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:47.820 08:00:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:47.820 08:00:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:47.820 08:00:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:47.820 08:00:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:47.820 08:00:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:47.820 08:00:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:47.820 08:00:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:47.820 08:00:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:47.820 08:00:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:47.820 08:00:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:47.820 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:47.820 08:00:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:47.820 08:00:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:47.820 08:00:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:47.820 08:00:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:47.820 08:00:39 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:47.820 08:00:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:47.820 08:00:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:47.820 08:00:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:47.820 08:00:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:47.820 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:47.820 08:00:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:47.820 08:00:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:47.820 08:00:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:13:47.820 08:00:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:47.820 08:00:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:47.820 08:00:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:47.820 08:00:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:47.820 08:00:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:47.820 08:00:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:47.820 08:00:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:47.820 08:00:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:47.820 08:00:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:47.820 08:00:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:47.820 08:00:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:47.820 08:00:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:47.820 08:00:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:47.820 08:00:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:47.820 08:00:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:47.820 08:00:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:47.820 08:00:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:47.820 08:00:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:47.820 08:00:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:47.820 08:00:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:47.820 08:00:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:47.820 08:00:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:47.820 08:00:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:47.820 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:47.820 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.161 ms 00:13:47.820 00:13:47.820 --- 10.0.0.2 ping statistics --- 00:13:47.820 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:47.820 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:13:47.820 08:00:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:47.820 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:47.820 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.107 ms 00:13:47.820 00:13:47.820 --- 10.0.0.1 ping statistics --- 00:13:47.820 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:47.820 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:13:47.820 08:00:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:47.820 08:00:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:13:47.820 08:00:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:47.820 08:00:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:47.820 08:00:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:47.820 08:00:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:47.820 08:00:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:47.820 08:00:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:47.820 08:00:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:47.820 08:00:39 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:13:47.820 08:00:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:47.820 08:00:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:47.820 08:00:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:47.820 08:00:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=1907015 00:13:47.820 08:00:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 1907015 00:13:47.820 08:00:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@829 -- # '[' -z 1907015 ']' 00:13:47.820 08:00:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:47.820 08:00:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:47.820 08:00:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:47.820 08:00:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:47.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:47.820 08:00:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:47.820 08:00:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:47.820 [2024-07-13 08:00:39.291954] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
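While the target application boots inside the namespace, a recap of the wiring it depends on: the nvmftestinit trace above maps the two NIC ports found at PCI functions 0000:0a:00.0/0000:0a:00.1 to cvl_0_0 and cvl_0_1 via sysfs, moves the target port into a private network namespace with 10.0.0.2, leaves the initiator port in the root namespace with 10.0.0.1, opens the NVMe/TCP port, and pings once in each direction. Collected into a standalone sketch using the names from this run (assumes root privileges and that the two ports reach each other on the wire):

    #!/usr/bin/env bash
    # Recreate the two-namespace NVMe/TCP test topology from nvmf_tcp_init.
    TGT_IF=cvl_0_0; INI_IF=cvl_0_1; NS=cvl_0_0_ns_spdk
    ip -4 addr flush "$TGT_IF"; ip -4 addr flush "$INI_IF"
    ip netns add "$NS"
    ip link set "$TGT_IF" netns "$NS"            # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev "$INI_IF"        # initiator side stays in the root ns
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
    ip link set "$INI_IF" up
    ip netns exec "$NS" ip link set "$TGT_IF" up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
    ping -c 1 10.0.0.2                           # root ns -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1       # target ns -> initiator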
00:13:47.820 [2024-07-13 08:00:39.292028] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:47.820 EAL: No free 2048 kB hugepages reported on node 1 00:13:47.820 [2024-07-13 08:00:39.360451] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:47.820 [2024-07-13 08:00:39.450982] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:47.820 [2024-07-13 08:00:39.451047] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:47.820 [2024-07-13 08:00:39.451063] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:47.820 [2024-07-13 08:00:39.451077] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:47.820 [2024-07-13 08:00:39.451088] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:47.820 [2024-07-13 08:00:39.451118] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:48.078 08:00:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:48.078 08:00:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # return 0 00:13:48.078 08:00:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:48.078 08:00:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:48.078 08:00:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:48.078 08:00:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:48.078 08:00:39 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:48.078 08:00:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:48.078 08:00:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:48.078 [2024-07-13 08:00:39.596239] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:48.078 08:00:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:48.078 08:00:39 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:48.078 08:00:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:48.078 08:00:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:48.078 08:00:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:48.078 08:00:39 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:48.078 08:00:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:48.078 08:00:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:48.078 [2024-07-13 08:00:39.612417] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:48.078 08:00:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:48.078 08:00:39 
nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:48.079 08:00:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:48.079 08:00:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:48.079 NULL1 00:13:48.079 08:00:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:48.079 08:00:39 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:13:48.079 08:00:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:48.079 08:00:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:48.079 08:00:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:48.079 08:00:39 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:48.079 08:00:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:48.079 08:00:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:48.079 08:00:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:48.079 08:00:39 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:13:48.079 [2024-07-13 08:00:39.657128] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:13:48.079 [2024-07-13 08:00:39.657179] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1907117 ] 00:13:48.079 EAL: No free 2048 kB hugepages reported on node 1 00:13:48.653 Attached to nqn.2016-06.io.spdk:cnode1 00:13:48.653 Namespace ID: 1 size: 1GB 00:13:48.653 fused_ordering(0) 00:13:48.653 fused_ordering(1) 00:13:48.653 fused_ordering(2) 00:13:48.653 fused_ordering(3) 00:13:48.653 fused_ordering(4) 00:13:48.653 fused_ordering(5) 00:13:48.653 fused_ordering(6) 00:13:48.653 fused_ordering(7) 00:13:48.653 fused_ordering(8) 00:13:48.653 fused_ordering(9) 00:13:48.653 fused_ordering(10) 00:13:48.653 fused_ordering(11) 00:13:48.653 fused_ordering(12) 00:13:48.653 fused_ordering(13) 00:13:48.653 fused_ordering(14) 00:13:48.653 fused_ordering(15) 00:13:48.653 fused_ordering(16) 00:13:48.653 fused_ordering(17) 00:13:48.653 fused_ordering(18) 00:13:48.653 fused_ordering(19) 00:13:48.653 fused_ordering(20) 00:13:48.653 fused_ordering(21) 00:13:48.653 fused_ordering(22) 00:13:48.653 fused_ordering(23) 00:13:48.653 fused_ordering(24) 00:13:48.653 fused_ordering(25) 00:13:48.653 fused_ordering(26) 00:13:48.653 fused_ordering(27) 00:13:48.653 fused_ordering(28) 00:13:48.653 fused_ordering(29) 00:13:48.653 fused_ordering(30) 00:13:48.653 fused_ordering(31) 00:13:48.653 fused_ordering(32) 00:13:48.653 fused_ordering(33) 00:13:48.653 fused_ordering(34) 00:13:48.653 fused_ordering(35) 00:13:48.653 fused_ordering(36) 00:13:48.653 fused_ordering(37) 00:13:48.653 fused_ordering(38) 00:13:48.653 fused_ordering(39) 00:13:48.653 fused_ordering(40) 00:13:48.653 fused_ordering(41) 00:13:48.653 fused_ordering(42) 00:13:48.653 fused_ordering(43) 00:13:48.653 
fused_ordering(44) 00:13:48.653 [fused_ordering(45) through fused_ordering(1012) condensed: the tool prints one "fused_ordering(N)" counter line per iteration, strictly in order from 0 to 1023, with timestamps advancing from 00:13:48.653 through 00:13:51.279 over the run]
00:13:51.280 fused_ordering(1013) 00:13:51.280 fused_ordering(1014) 00:13:51.280 fused_ordering(1015) 00:13:51.280 fused_ordering(1016) 00:13:51.280 fused_ordering(1017) 00:13:51.280 fused_ordering(1018) 00:13:51.280 fused_ordering(1019) 00:13:51.280 fused_ordering(1020) 00:13:51.280 fused_ordering(1021) 00:13:51.280 fused_ordering(1022) 00:13:51.280 fused_ordering(1023) 00:13:51.280 08:00:42 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:13:51.280 08:00:42 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:13:51.280 08:00:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:51.280 08:00:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:13:51.280 08:00:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:51.280 08:00:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:13:51.280 08:00:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:51.280 08:00:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:51.280 rmmod nvme_tcp 00:13:51.280 rmmod nvme_fabrics 00:13:51.280 rmmod nvme_keyring 00:13:51.280 08:00:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:51.280 08:00:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:13:51.280 08:00:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:13:51.280 08:00:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 1907015 ']' 00:13:51.280 08:00:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 1907015 00:13:51.280 08:00:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # '[' -z 1907015 ']' 00:13:51.280 08:00:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # kill -0 1907015 00:13:51.280 08:00:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # uname 00:13:51.280 08:00:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:51.280 08:00:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1907015 00:13:51.280 08:00:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:51.280 08:00:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:51.280 08:00:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1907015' 00:13:51.280 killing process with pid 1907015 00:13:51.280 08:00:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # kill 1907015 00:13:51.280 08:00:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # wait 1907015 00:13:51.537 08:00:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:51.537 08:00:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:51.537 08:00:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:51.537 08:00:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:51.537 08:00:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:51.537 08:00:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:51.537 08:00:43 nvmf_tcp.nvmf_fused_ordering -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:51.537 08:00:43 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:53.435 08:00:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:53.435 00:13:53.435 real 0m8.001s 00:13:53.435 user 0m5.864s 00:13:53.435 sys 0m3.442s 00:13:53.435 08:00:45 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:53.435 08:00:45 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:53.435 ************************************ 00:13:53.435 END TEST nvmf_fused_ordering 00:13:53.435 ************************************ 00:13:53.435 08:00:45 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:53.435 08:00:45 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:13:53.435 08:00:45 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:53.435 08:00:45 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:53.435 08:00:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:53.435 ************************************ 00:13:53.435 START TEST nvmf_delete_subsystem 00:13:53.435 ************************************ 00:13:53.435 08:00:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:13:53.692 * Looking for test storage... 00:13:53.692 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:53.692 08:00:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:53.692 08:00:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:13:53.692 08:00:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:53.692 08:00:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:53.692 08:00:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:53.692 08:00:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:53.692 08:00:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:53.692 08:00:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:53.692 08:00:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:53.692 08:00:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:53.692 08:00:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:53.692 08:00:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:53.692 08:00:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:53.692 08:00:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:53.692 08:00:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:53.692 08:00:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:53.692 08:00:45 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:53.692 08:00:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:53.692 08:00:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:53.692 08:00:45 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:53.692 08:00:45 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:53.692 08:00:45 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:53.692 08:00:45 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:53.692 08:00:45 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:53.692 08:00:45 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:53.692 08:00:45 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:13:53.692 08:00:45 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:53.692 08:00:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:13:53.692 08:00:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:53.692 08:00:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:53.692 08:00:45 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:53.692 08:00:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:53.692 08:00:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:53.692 08:00:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:53.692 08:00:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:53.692 08:00:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:53.693 08:00:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:13:53.693 08:00:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:53.693 08:00:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:53.693 08:00:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:53.693 08:00:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:53.693 08:00:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:53.693 08:00:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:53.693 08:00:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:53.693 08:00:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:53.693 08:00:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:53.693 08:00:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:53.693 08:00:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:13:53.693 08:00:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:55.590 08:00:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:55.590 08:00:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:13:55.590 08:00:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:55.590 08:00:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:55.590 08:00:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:55.590 08:00:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:55.590 08:00:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:55.590 08:00:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:13:55.590 08:00:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:55.590 08:00:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:13:55.590 08:00:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:13:55.590 08:00:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:13:55.590 08:00:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:13:55.590 08:00:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:13:55.590 08:00:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:13:55.591 08:00:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:55.591 08:00:47 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:55.591 08:00:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:55.591 08:00:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:55.591 08:00:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:55.591 08:00:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:55.591 08:00:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:55.591 08:00:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:55.591 08:00:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:55.591 08:00:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:55.591 08:00:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:55.591 08:00:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:55.591 08:00:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:55.591 08:00:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:55.591 08:00:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:55.591 08:00:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:55.591 08:00:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:55.591 08:00:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:55.591 08:00:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:55.591 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:55.591 08:00:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:55.591 08:00:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:55.591 08:00:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:55.591 08:00:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:55.591 08:00:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:55.591 08:00:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:55.591 08:00:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:55.591 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:55.591 08:00:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:55.591 08:00:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:55.591 08:00:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:55.591 08:00:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:55.591 08:00:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:55.591 08:00:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:55.591 08:00:47 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:55.591 08:00:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:55.591 08:00:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:55.591 08:00:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:55.591 08:00:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:55.591 08:00:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:55.591 08:00:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:55.591 08:00:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:55.591 08:00:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:55.591 08:00:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:55.591 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:55.591 08:00:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:55.591 08:00:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:55.591 08:00:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:55.591 08:00:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:55.591 08:00:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:55.591 08:00:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:55.591 08:00:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:55.591 08:00:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:55.591 08:00:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:55.591 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:55.591 08:00:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:55.591 08:00:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:55.591 08:00:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:13:55.591 08:00:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:55.591 08:00:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:55.591 08:00:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:55.591 08:00:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:55.591 08:00:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:55.591 08:00:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:55.591 08:00:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:55.591 08:00:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:55.591 08:00:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:55.591 08:00:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:55.591 08:00:47 
nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:55.591 08:00:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:55.591 08:00:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:55.591 08:00:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:55.591 08:00:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:55.591 08:00:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:55.849 08:00:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:55.849 08:00:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:55.849 08:00:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:55.849 08:00:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:55.849 08:00:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:55.849 08:00:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:55.849 08:00:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:55.849 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:55.849 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.172 ms 00:13:55.849 00:13:55.849 --- 10.0.0.2 ping statistics --- 00:13:55.849 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:55.849 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:13:55.849 08:00:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:55.849 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:55.849 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.089 ms 00:13:55.849 00:13:55.849 --- 10.0.0.1 ping statistics --- 00:13:55.849 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:55.849 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:13:55.849 08:00:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:55.849 08:00:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:13:55.849 08:00:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:55.849 08:00:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:55.849 08:00:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:55.849 08:00:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:55.849 08:00:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:55.849 08:00:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:55.849 08:00:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:55.849 08:00:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:13:55.849 08:00:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:55.849 08:00:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:55.849 08:00:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:55.849 08:00:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=1909364 00:13:55.849 08:00:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:13:55.849 08:00:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 1909364 00:13:55.849 08:00:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@829 -- # '[' -z 1909364 ']' 00:13:55.849 08:00:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:55.849 08:00:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:55.849 08:00:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:55.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:55.849 08:00:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:55.849 08:00:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:55.849 [2024-07-13 08:00:47.465738] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:13:55.849 [2024-07-13 08:00:47.465809] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:55.849 EAL: No free 2048 kB hugepages reported on node 1 00:13:55.849 [2024-07-13 08:00:47.535621] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:56.106 [2024-07-13 08:00:47.629770] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:13:56.106 [2024-07-13 08:00:47.629833] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:56.106 [2024-07-13 08:00:47.629861] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:56.106 [2024-07-13 08:00:47.629883] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:56.106 [2024-07-13 08:00:47.629895] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:56.106 [2024-07-13 08:00:47.633890] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:56.106 [2024-07-13 08:00:47.633902] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:56.106 08:00:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:56.106 08:00:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # return 0 00:13:56.106 08:00:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:56.106 08:00:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:56.106 08:00:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:56.106 08:00:47 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:56.106 08:00:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:56.106 08:00:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:56.106 08:00:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:56.107 [2024-07-13 08:00:47.774395] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:56.107 08:00:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:56.107 08:00:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:56.107 08:00:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:56.107 08:00:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:56.107 08:00:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:56.107 08:00:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:56.107 08:00:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:56.107 08:00:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:56.107 [2024-07-13 08:00:47.790570] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:56.107 08:00:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:56.107 08:00:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:56.107 08:00:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:56.107 08:00:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:56.107 NULL1 00:13:56.107 08:00:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:13:56.107 08:00:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:56.107 08:00:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:56.107 08:00:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:56.107 Delay0 00:13:56.107 08:00:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:56.107 08:00:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:56.107 08:00:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:56.107 08:00:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:13:56.107 08:00:47 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:56.107 08:00:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1909506 00:13:56.107 08:00:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:13:56.107 08:00:47 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:13:56.364 EAL: No free 2048 kB hugepages reported on node 1 00:13:56.364 [2024-07-13 08:00:47.865316] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
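The setup traced above has two scripted halves. The first is the namespace plumbing done by nvmf_tcp_init in nvmf/common.sh (the ip netns / ip link calls at 08:00:47), which splits one host into a target side and an initiator side so the NVMe/TCP traffic crosses the physical e810 ports. A minimal standalone sketch of that plumbing, assuming the two ports are already renamed cvl_0_0 and cvl_0_1 as in this run and that it is executed as root:

# Move the target port into its own network namespace so target and
# initiator can talk over real NICs on a single machine.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port enters the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator IP stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP on the initiator side
ping -c 1 10.0.0.2                                   # initiator -> target reachability check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator reachability check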
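The second half is the target configuration in the rpc_cmd calls just traced (delete_subsystem.sh@15-@24): a TCP transport, subsystem cnode1 with a listener on 10.0.0.2:4420, and a null bdev wrapped in a delay bdev as the namespace. As direct rpc.py invocations it would look roughly like this (a sketch; rpc.py from the same SPDK tree is assumed, talking to the nvmf_tgt started above on its default RPC socket):

# Rebuild the test fixture with raw RPCs; all values are taken from this run.
rpc.py nvmf_create_transport -t tcp -o -u 8192       # TCP transport, 8192-byte in-capsule data
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc.py bdev_null_create NULL1 1000 512               # 1000 MB null bdev, 512-byte blocks
# Delay0 injects ~1 s (1000000 us) of artificial latency so I/O stays queued:
rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0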
00:13:58.258 08:00:49 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 08:00:49 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:58.515 Write completed with error (sct=0, sc=8) 00:13:58.515 starting I/O failed: -6 [several hundred interleaved "Read/Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" entries elided: queued I/O against Delay0 fails once the subsystem is deleted] 00:13:58.516 [2024-07-13 08:00:49.996693] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fbf10000c00 is same with the state(5) to be set [further completion-error entries elided] 00:13:59.447 [2024-07-13 08:00:50.965433] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1348a30 is same with the state(5) to be set [further completion-error entries elided] 00:13:59.447 [2024-07-13 08:00:50.998580] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x133b450 is same with the state(5) to be set [further completion-error entries elided] 00:13:59.447 [2024-07-13 08:00:50.998882] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x133ae30 is same with the state(5) to be set [further completion-error entries elided] 00:13:59.447 [2024-07-13 08:00:50.999092] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fbf1000d600 is same with the state(5) to be set [remaining completion-error entries elided]
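The wall of failed completions summarized above is the point of this test: the subsystem is deleted (delete_subsystem.sh@32) while spdk_nvme_perf still holds 128-deep queues against the 1-second Delay0 bdev, so queued I/O must fail fast instead of hanging. A sketch of the same sequence, reusing the perf invocation from this run (the script's NOT helper simply asserts that wait returns nonzero, and its poll loop also gives up after ~15 s); the run's own summary follows below:

# Delete the subsystem under load, then confirm perf exits with an error.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
    -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
perf_pid=$!
sleep 2                                              # let perf connect and build queue depth
rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
while kill -0 "$perf_pid" 2>/dev/null; do sleep 0.5; done    # poll until perf is gone
if wait "$perf_pid"; then echo "unexpected: perf succeeded"; else echo "perf failed as expected"; fi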
00:13:59.447 [2024-07-13 08:00:51.000064] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fbf1000cfe0 is same with the state(5) to be set 00:13:59.447 Initializing NVMe Controllers 00:13:59.447 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:59.447 Controller IO queue size 128, less than required. 00:13:59.447 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:59.447 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:13:59.447 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:13:59.447 Initialization complete. Launching workers. 00:13:59.447 ======================================================== 00:13:59.447 Latency(us) 00:13:59.448 Device Information : IOPS MiB/s Average min max 00:13:59.448 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 187.62 0.09 901879.69 657.33 1012156.51 00:13:59.448 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 170.74 0.08 892425.18 611.92 1014005.23 00:13:59.448 ======================================================== 00:13:59.448 Total : 358.36 0.17 897375.05 611.92 1014005.23 00:13:59.448 00:13:59.448 [2024-07-13 08:00:51.000631] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1348a30 (9): Bad file descriptor 00:13:59.448 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:13:59.448 08:00:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:59.448 08:00:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:13:59.448 08:00:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1909506 00:13:59.448 08:00:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:14:00.011 08:00:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:14:00.011 08:00:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1909506 00:14:00.011 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1909506) - No such process 00:14:00.011 08:00:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1909506 00:14:00.011 08:00:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0 00:14:00.011 08:00:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 1909506 00:14:00.011 08:00:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait 00:14:00.011 08:00:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:00.012 08:00:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:14:00.012 08:00:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:00.012 08:00:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 1909506 00:14:00.012 08:00:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:14:00.012 08:00:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:00.012 08:00:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:00.012 08:00:51 
nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:00.012 08:00:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:00.012 08:00:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:00.012 08:00:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:00.012 08:00:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:00.012 08:00:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:00.012 08:00:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:00.012 08:00:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:00.012 [2024-07-13 08:00:51.523667] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:00.012 08:00:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:00.012 08:00:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:00.012 08:00:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:00.012 08:00:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:00.012 08:00:51 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:00.012 08:00:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1909911 00:14:00.012 08:00:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:14:00.012 08:00:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:14:00.012 08:00:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1909911 00:14:00.012 08:00:51 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:00.012 EAL: No free 2048 kB hugepages reported on node 1 00:14:00.012 [2024-07-13 08:00:51.587897] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
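Both perf runs in this test follow the same pattern from delete_subsystem.sh: spdk_nvme_perf is launched in the background, its PID is recorded in perf_pid, and the script polls it with kill -0 while issuing RPCs against the live subsystem. A minimal sketch of that bounded liveness poll, assuming the background PID is in $perf_pid (variable name taken from the trace above; the exact control flow in the script may differ):

    # Poll the backgrounded perf process, giving up after ~10s,
    # mirroring delete_subsystem.sh lines 57-60 in the trace above.
    delay=0
    while kill -0 "$perf_pid" 2>/dev/null; do
        (( delay++ > 20 )) && break   # bail out after 21 half-second waits
        sleep 0.5
    done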
00:14:00.574 08:00:52 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:00.574 08:00:52 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1909911 00:14:00.574 08:00:52 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:00.830 08:00:52 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:00.830 08:00:52 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1909911 00:14:00.830 08:00:52 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:01.391 08:00:53 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:01.391 08:00:53 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1909911 00:14:01.391 08:00:53 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:01.955 08:00:53 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:01.955 08:00:53 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1909911 00:14:01.955 08:00:53 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:02.519 08:00:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:02.519 08:00:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1909911 00:14:02.519 08:00:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:03.083 08:00:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:03.083 08:00:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1909911 00:14:03.083 08:00:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:03.083 Initializing NVMe Controllers 00:14:03.083 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:03.083 Controller IO queue size 128, less than required. 00:14:03.083 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:03.083 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:14:03.083 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:14:03.083 Initialization complete. Launching workers. 
00:14:03.083 ======================================================== 00:14:03.083 Latency(us) 00:14:03.083 Device Information : IOPS MiB/s Average min max 00:14:03.083 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003608.87 1000208.05 1042246.23 00:14:03.083 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005194.43 1000285.07 1013258.60 00:14:03.083 ======================================================== 00:14:03.083 Total : 256.00 0.12 1004401.65 1000208.05 1042246.23 00:14:03.083 00:14:03.339 08:00:55 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:03.339 08:00:55 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1909911 00:14:03.339 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1909911) - No such process 00:14:03.339 08:00:55 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1909911 00:14:03.339 08:00:55 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:14:03.339 08:00:55 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:14:03.339 08:00:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:03.339 08:00:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:14:03.339 08:00:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:03.339 08:00:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:14:03.339 08:00:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:03.339 08:00:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:03.339 rmmod nvme_tcp 00:14:03.596 rmmod nvme_fabrics 00:14:03.596 rmmod nvme_keyring 00:14:03.596 08:00:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:03.596 08:00:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:14:03.596 08:00:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:14:03.596 08:00:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 1909364 ']' 00:14:03.596 08:00:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 1909364 00:14:03.596 08:00:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # '[' -z 1909364 ']' 00:14:03.596 08:00:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # kill -0 1909364 00:14:03.596 08:00:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # uname 00:14:03.596 08:00:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:03.596 08:00:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1909364 00:14:03.596 08:00:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:03.596 08:00:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:03.596 08:00:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1909364' 00:14:03.596 killing process with pid 1909364 00:14:03.596 08:00:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # kill 1909364 00:14:03.596 08:00:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # wait 
1909364 00:14:03.854 08:00:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:03.854 08:00:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:03.854 08:00:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:03.854 08:00:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:03.854 08:00:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:03.854 08:00:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:03.854 08:00:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:03.854 08:00:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:05.752 08:00:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:05.752 00:14:05.752 real 0m12.291s 00:14:05.752 user 0m27.669s 00:14:05.752 sys 0m3.012s 00:14:05.752 08:00:57 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:05.752 08:00:57 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:05.752 ************************************ 00:14:05.752 END TEST nvmf_delete_subsystem 00:14:05.752 ************************************ 00:14:05.752 08:00:57 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:05.752 08:00:57 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:14:05.752 08:00:57 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:05.752 08:00:57 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:05.752 08:00:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:05.752 ************************************ 00:14:05.752 START TEST nvmf_ns_masking 00:14:05.752 ************************************ 00:14:05.752 08:00:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:14:06.009 * Looking for test storage... 
00:14:06.009 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:06.009 08:00:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:06.009 08:00:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:14:06.009 08:00:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:06.009 08:00:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:06.009 08:00:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:06.009 08:00:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:06.009 08:00:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:06.009 08:00:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:06.009 08:00:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:06.009 08:00:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:06.009 08:00:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:06.009 08:00:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:06.009 08:00:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:06.009 08:00:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:06.009 08:00:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:06.009 08:00:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:06.009 08:00:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:06.009 08:00:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:06.009 08:00:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:06.009 08:00:57 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:06.009 08:00:57 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:06.009 08:00:57 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:06.009 08:00:57 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=[long repeated toolchain PATH elided] 00:14:06.009 08:00:57 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # PATH=[long repeated toolchain PATH elided] 00:14:06.009 08:00:57 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=[long repeated toolchain PATH elided] 00:14:06.009 08:00:57 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:14:06.009 08:00:57 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo [long repeated toolchain PATH elided] 00:14:06.009 08:00:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:14:06.009 08:00:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:06.009 08:00:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:06.009 08:00:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:06.009 08:00:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:06.009 08:00:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:06.009 08:00:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:06.009 08:00:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:06.009 08:00:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:06.009 08:00:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:06.009 08:00:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:14:06.009 08:00:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:14:06.009 08:00:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:14:06.009 08:00:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=83c7bbad-7f30-487c-b287-4062388c79d8 00:14:06.009 08:00:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:14:06.009 08:00:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=069c6388-3ef3-48c7-a104-3e8358c1f480 00:14:06.009 08:00:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@16 -- #
SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:14:06.009 08:00:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:14:06.009 08:00:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:14:06.009 08:00:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:14:06.009 08:00:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=3e9704ae-fbec-4140-b089-920ff96cc7b3 00:14:06.009 08:00:57 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:14:06.009 08:00:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:06.009 08:00:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:06.009 08:00:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:06.009 08:00:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:06.009 08:00:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:06.009 08:00:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:06.009 08:00:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:06.009 08:00:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:06.009 08:00:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:06.009 08:00:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:06.009 08:00:57 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:14:06.009 08:00:57 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:07.922 08:00:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:07.922 08:00:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:14:07.922 08:00:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:07.922 08:00:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:07.923 08:00:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:07.923 08:00:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:07.923 08:00:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:07.923 08:00:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:14:07.923 08:00:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:07.923 08:00:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:14:07.923 08:00:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:14:07.923 08:00:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:14:07.923 08:00:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:14:07.923 08:00:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:14:07.923 08:00:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:14:07.923 08:00:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:07.923 08:00:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:07.923 08:00:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:07.923 08:00:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:07.923 08:00:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:07.923 08:00:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:07.923 08:00:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:07.923 08:00:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:07.923 08:00:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:07.923 08:00:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:07.923 08:00:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:07.923 08:00:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:07.923 08:00:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:07.923 08:00:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:07.923 08:00:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:07.923 08:00:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:07.923 08:00:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:07.923 08:00:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:07.923 08:00:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:07.923 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:07.923 08:00:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:07.923 08:00:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:07.923 08:00:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:07.923 08:00:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:07.923 08:00:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:07.923 08:00:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:07.923 08:00:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:07.923 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:07.923 08:00:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:07.923 08:00:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:07.923 08:00:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:07.923 08:00:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:07.923 08:00:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:07.923 08:00:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:07.923 08:00:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:07.923 08:00:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:07.923 08:00:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:07.923 08:00:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:07.923 08:00:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:07.923 
08:00:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:07.923 08:00:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:07.923 08:00:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:07.923 08:00:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:07.923 08:00:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:07.923 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:07.923 08:00:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:07.923 08:00:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:07.923 08:00:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:07.923 08:00:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:07.923 08:00:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:07.923 08:00:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:07.923 08:00:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:07.923 08:00:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:07.923 08:00:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:07.923 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:07.923 08:00:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:07.923 08:00:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:07.923 08:00:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:14:07.923 08:00:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:07.923 08:00:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:07.923 08:00:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:07.923 08:00:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:07.923 08:00:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:07.923 08:00:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:07.923 08:00:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:07.923 08:00:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:07.923 08:00:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:07.923 08:00:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:07.923 08:00:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:07.923 08:00:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:07.923 08:00:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:07.923 08:00:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:07.923 08:00:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:07.923 08:00:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:08.180 08:00:59 nvmf_tcp.nvmf_ns_masking 
-- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:08.180 08:00:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:08.180 08:00:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:08.180 08:00:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:08.180 08:00:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:08.180 08:00:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:08.180 08:00:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:08.180 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:08.180 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.256 ms 00:14:08.180 00:14:08.180 --- 10.0.0.2 ping statistics --- 00:14:08.180 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:08.180 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:14:08.180 08:00:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:08.180 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:08.180 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:14:08.180 00:14:08.180 --- 10.0.0.1 ping statistics --- 00:14:08.180 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:08.180 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:14:08.180 08:00:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:08.180 08:00:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:14:08.180 08:00:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:08.180 08:00:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:08.180 08:00:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:08.180 08:00:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:08.180 08:00:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:08.180 08:00:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:08.180 08:00:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:08.180 08:00:59 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:14:08.180 08:00:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:08.180 08:00:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:08.180 08:00:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:08.180 08:00:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=1912256 00:14:08.180 08:00:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:08.180 08:00:59 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 1912256 00:14:08.180 08:00:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 1912256 ']' 00:14:08.180 08:00:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:08.180 08:00:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:08.180 08:00:59 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:08.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:08.180 08:00:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:08.180 08:00:59 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:08.180 [2024-07-13 08:00:59.812402] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:14:08.180 [2024-07-13 08:00:59.812495] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:08.180 EAL: No free 2048 kB hugepages reported on node 1 00:14:08.180 [2024-07-13 08:00:59.879937] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:08.437 [2024-07-13 08:00:59.967789] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:08.437 [2024-07-13 08:00:59.967853] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:08.437 [2024-07-13 08:00:59.967896] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:08.437 [2024-07-13 08:00:59.967919] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:08.437 [2024-07-13 08:00:59.967943] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:08.437 [2024-07-13 08:00:59.967971] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:08.437 08:01:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:08.437 08:01:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:14:08.437 08:01:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:08.437 08:01:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:08.437 08:01:00 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:08.437 08:01:00 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:08.437 08:01:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:08.693 [2024-07-13 08:01:00.376060] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:08.693 08:01:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:14:08.693 08:01:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:14:08.693 08:01:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:08.977 Malloc1 00:14:08.977 08:01:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:09.540 Malloc2 00:14:09.540 08:01:00 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 
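Everything the ns_masking target needs is provisioned over rpc.py, as the trace above and the next lines show: create the TCP transport, back two 64 MiB malloc bdevs, create the subsystem, then attach a namespace and a listener. A condensed replay of those RPCs, with $rpc standing in for the full rpc.py path from the trace:

    # Condensed target provisioning sequence from the trace above.
    rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc1    # 64 MiB bdev, 512-byte blocks
    $rpc bdev_malloc_create 64 512 -b Malloc2
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420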
00:14:09.540 08:01:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:14:09.797 08:01:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:10.054 [2024-07-13 08:01:01.761774] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:10.054 08:01:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:14:10.054 08:01:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 3e9704ae-fbec-4140-b089-920ff96cc7b3 -a 10.0.0.2 -s 4420 -i 4 00:14:10.311 08:01:01 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:14:10.311 08:01:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:14:10.311 08:01:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:10.311 08:01:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:10.311 08:01:01 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:14:12.834 08:01:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:12.834 08:01:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:12.834 08:01:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:12.834 08:01:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:12.834 08:01:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:12.834 08:01:03 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:14:12.834 08:01:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:12.834 08:01:03 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:12.834 08:01:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:12.834 08:01:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:12.834 08:01:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:14:12.834 08:01:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:12.834 08:01:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:12.834 [ 0]:0x1 00:14:12.834 08:01:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:12.834 08:01:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:12.834 08:01:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=258f5e063e3e4a5cae6228ae5a754ba9 00:14:12.834 08:01:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 258f5e063e3e4a5cae6228ae5a754ba9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:12.834 08:01:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 
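The ns_is_visible helper traced above decides visibility from the host side: it lists namespaces on the connected controller, then compares the NGUID that id-ns reports; a masked namespace still appears in list-ns but reads back an all-zero NGUID. A compressed sketch of that logic (controller name /dev/nvme0 taken from the trace; the helper's exact internals in ns_masking.sh may differ):

    # Sketch of the host-side visibility check driven in the trace above.
    ns_is_visible() {
        local nsid=$1   # e.g. 0x1
        nvme list-ns /dev/nvme0 | grep -q -- "$nsid" || return 1
        local nguid
        nguid=$(nvme id-ns /dev/nvme0 -n "$nsid" -o json | jq -r .nguid)
        # A masked namespace reports an all-zero NGUID.
        [[ $nguid != "00000000000000000000000000000000" ]]
    }

The NOT wrapper seen in the trace simply inverts this check, so a hidden namespace is the expected (passing) outcome.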
00:14:12.834 08:01:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:14:12.834 08:01:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:12.834 08:01:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:12.834 [ 0]:0x1 00:14:12.834 08:01:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:12.834 08:01:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:12.834 08:01:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=258f5e063e3e4a5cae6228ae5a754ba9 00:14:12.834 08:01:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 258f5e063e3e4a5cae6228ae5a754ba9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:12.835 08:01:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:14:12.835 08:01:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:12.835 08:01:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:12.835 [ 1]:0x2 00:14:12.835 08:01:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:12.835 08:01:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:12.835 08:01:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=07640d1a92594123b4272d7d40d5e5b6 00:14:12.835 08:01:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 07640d1a92594123b4272d7d40d5e5b6 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:12.835 08:01:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:14:12.835 08:01:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:12.835 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:12.835 08:01:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:13.092 08:01:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:14:13.656 08:01:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:14:13.656 08:01:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 3e9704ae-fbec-4140-b089-920ff96cc7b3 -a 10.0.0.2 -s 4420 -i 4 00:14:13.656 08:01:05 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:14:13.656 08:01:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:14:13.656 08:01:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:13.656 08:01:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:14:13.656 08:01:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:14:13.656 08:01:05 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:14:15.551 08:01:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:15.551 08:01:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:15.551 08:01:07 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:15.551 08:01:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:15.551 08:01:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:15.551 08:01:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:14:15.551 08:01:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:15.551 08:01:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:15.807 08:01:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:15.807 08:01:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:15.807 08:01:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:14:15.807 08:01:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:14:15.807 08:01:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:14:15.807 08:01:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:14:15.807 08:01:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:15.807 08:01:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:14:15.807 08:01:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:15.807 08:01:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:14:15.807 08:01:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:15.807 08:01:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:15.807 08:01:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:15.807 08:01:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:15.807 08:01:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:15.807 08:01:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:15.807 08:01:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:14:15.807 08:01:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:15.807 08:01:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:15.807 08:01:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:15.807 08:01:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:14:15.807 08:01:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:15.807 08:01:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:15.807 [ 0]:0x2 00:14:15.807 08:01:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:15.807 08:01:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:15.807 08:01:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=07640d1a92594123b4272d7d40d5e5b6 00:14:15.807 08:01:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 
07640d1a92594123b4272d7d40d5e5b6 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:15.807 08:01:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:16.064 08:01:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:14:16.064 08:01:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:16.064 08:01:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:16.064 [ 0]:0x1 00:14:16.064 08:01:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:16.064 08:01:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:16.320 08:01:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=258f5e063e3e4a5cae6228ae5a754ba9 00:14:16.321 08:01:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 258f5e063e3e4a5cae6228ae5a754ba9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:16.321 08:01:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:14:16.321 08:01:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:16.321 08:01:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:16.321 [ 1]:0x2 00:14:16.321 08:01:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:16.321 08:01:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:16.321 08:01:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=07640d1a92594123b4272d7d40d5e5b6 00:14:16.321 08:01:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 07640d1a92594123b4272d7d40d5e5b6 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:16.321 08:01:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:16.587 08:01:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:14:16.587 08:01:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:14:16.587 08:01:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:14:16.587 08:01:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:14:16.587 08:01:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:16.587 08:01:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:14:16.587 08:01:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:16.587 08:01:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:14:16.587 08:01:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:16.587 08:01:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:16.587 08:01:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:16.587 08:01:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:16.587 08:01:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=00000000000000000000000000000000 00:14:16.587 08:01:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:16.587 08:01:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:14:16.587 08:01:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:16.587 08:01:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:16.587 08:01:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:16.587 08:01:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:14:16.587 08:01:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:16.587 08:01:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:16.587 [ 0]:0x2 00:14:16.587 08:01:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:16.587 08:01:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:16.587 08:01:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=07640d1a92594123b4272d7d40d5e5b6 00:14:16.587 08:01:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 07640d1a92594123b4272d7d40d5e5b6 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:16.587 08:01:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:14:16.587 08:01:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:16.587 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:16.587 08:01:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:16.850 08:01:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:14:16.850 08:01:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 3e9704ae-fbec-4140-b089-920ff96cc7b3 -a 10.0.0.2 -s 4420 -i 4 00:14:17.107 08:01:08 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:17.107 08:01:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:14:17.107 08:01:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:17.107 08:01:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:14:17.107 08:01:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:14:17.107 08:01:08 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:14:19.629 08:01:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:19.629 08:01:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:19.629 08:01:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:19.629 08:01:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:14:19.629 08:01:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:19.629 08:01:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 
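By this point the masking matrix has been exercised end to end: a namespace re-added with --no-auto-visible stays hidden from the host (all-zero NGUID) until nvmf_ns_add_host grants that host NQN access, and nvmf_ns_remove_host hides it again without the host reconnecting. The condensed target-side sequence, with $rpc shortened as before:

    # Masking control flow exercised in the trace above.
    $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
    $rpc nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1      # ns 1 visible to host1
    $rpc nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1   # hidden again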
00:14:19.629 08:01:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:19.629 08:01:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:19.629 08:01:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:19.629 08:01:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:19.629 08:01:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:14:19.629 08:01:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:19.629 08:01:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:19.629 [ 0]:0x1 00:14:19.629 08:01:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:19.629 08:01:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:19.629 08:01:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=258f5e063e3e4a5cae6228ae5a754ba9 00:14:19.629 08:01:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 258f5e063e3e4a5cae6228ae5a754ba9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:19.629 08:01:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:14:19.629 08:01:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:19.629 08:01:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:19.629 [ 1]:0x2 00:14:19.629 08:01:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:19.629 08:01:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:19.629 08:01:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=07640d1a92594123b4272d7d40d5e5b6 00:14:19.629 08:01:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 07640d1a92594123b4272d7d40d5e5b6 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:19.629 08:01:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:19.629 08:01:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:14:19.629 08:01:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:14:19.629 08:01:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:14:19.629 08:01:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:14:19.629 08:01:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:19.629 08:01:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:14:19.629 08:01:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:19.629 08:01:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:14:19.629 08:01:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:19.629 08:01:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:19.629 08:01:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:19.629 08:01:11 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@44 -- # jq -r .nguid 00:14:19.629 08:01:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:19.629 08:01:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:19.629 08:01:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:14:19.629 08:01:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:19.629 08:01:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:19.629 08:01:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:19.629 08:01:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:14:19.629 08:01:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:19.629 08:01:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:19.629 [ 0]:0x2 00:14:19.629 08:01:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:19.629 08:01:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:19.629 08:01:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=07640d1a92594123b4272d7d40d5e5b6 00:14:19.629 08:01:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 07640d1a92594123b4272d7d40d5e5b6 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:19.629 08:01:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:19.630 08:01:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:14:19.630 08:01:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:19.630 08:01:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:19.630 08:01:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:19.630 08:01:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:19.630 08:01:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:19.630 08:01:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:19.630 08:01:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:19.630 08:01:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:19.630 08:01:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:19.630 08:01:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:19.887 [2024-07-13 08:01:11.491481] nvmf_rpc.c:1791:nvmf_rpc_ns_visible_paused: 
*ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:14:19.887 request: 00:14:19.887 { 00:14:19.887 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:19.887 "nsid": 2, 00:14:19.887 "host": "nqn.2016-06.io.spdk:host1", 00:14:19.887 "method": "nvmf_ns_remove_host", 00:14:19.887 "req_id": 1 00:14:19.887 } 00:14:19.887 Got JSON-RPC error response 00:14:19.887 response: 00:14:19.887 { 00:14:19.887 "code": -32602, 00:14:19.887 "message": "Invalid parameters" 00:14:19.887 } 00:14:19.887 08:01:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:14:19.887 08:01:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:19.887 08:01:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:19.887 08:01:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:19.887 08:01:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:14:19.887 08:01:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:14:19.887 08:01:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:14:19.887 08:01:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:14:19.887 08:01:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:19.887 08:01:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:14:19.887 08:01:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:19.887 08:01:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:14:19.887 08:01:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:19.887 08:01:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:19.887 08:01:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:19.887 08:01:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:19.887 08:01:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:19.887 08:01:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:19.887 08:01:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:14:19.887 08:01:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:19.887 08:01:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:19.887 08:01:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:19.887 08:01:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:14:19.887 08:01:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:19.887 08:01:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:19.887 [ 0]:0x2 00:14:19.887 08:01:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:19.887 08:01:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:20.144 08:01:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=07640d1a92594123b4272d7d40d5e5b6 00:14:20.144 08:01:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 
07640d1a92594123b4272d7d40d5e5b6 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:20.144 08:01:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:14:20.144 08:01:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:20.144 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:20.144 08:01:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=1913874 00:14:20.144 08:01:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:14:20.144 08:01:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:14:20.144 08:01:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 1913874 /var/tmp/host.sock 00:14:20.144 08:01:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 1913874 ']' 00:14:20.144 08:01:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:14:20.144 08:01:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:20.144 08:01:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:14:20.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:14:20.144 08:01:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:20.144 08:01:11 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:20.144 [2024-07-13 08:01:11.838841] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
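The hostpid block just traced starts a second SPDK application that acts purely as the NVMe-oF initiator side; a condensed sketch of the pattern, using the same binaries, socket path, and arguments the trace shows:

    # Run a separate spdk_tgt on its own RPC socket (-r) and core mask (-m)
    # so it cannot collide with the target instance under test.
    ./build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 &
    hostpid=$!
    # Every "hostrpc" call is then an ordinary rpc.py invocation pointed at
    # that socket with -s, e.g. attaching to the target as host1:
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
        -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0

Because masking is keyed to the host NQN passed with -q, the same subsystem can expose namespace 1 to host1 and namespace 2 to host2, which is what the bdev_get_bdevs UUID comparisons below go on to verify.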
00:14:20.144 [2024-07-13 08:01:11.838946] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1913874 ] 00:14:20.144 EAL: No free 2048 kB hugepages reported on node 1 00:14:20.402 [2024-07-13 08:01:11.902616] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:20.402 [2024-07-13 08:01:11.995710] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:20.659 08:01:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:20.659 08:01:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:14:20.659 08:01:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:20.916 08:01:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:21.173 08:01:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 83c7bbad-7f30-487c-b287-4062388c79d8 00:14:21.173 08:01:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:14:21.173 08:01:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 83C7BBAD7F30487CB2874062388C79D8 -i 00:14:21.430 08:01:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 069c6388-3ef3-48c7-a104-3e8358c1f480 00:14:21.430 08:01:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:14:21.430 08:01:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 069C63883EF348C7A1043E8358C1F480 -i 00:14:21.687 08:01:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:21.945 08:01:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:14:22.201 08:01:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:22.201 08:01:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:22.766 nvme0n1 00:14:22.766 08:01:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:14:22.766 08:01:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b 
nvme1 00:14:23.022 nvme1n2 00:14:23.022 08:01:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:14:23.022 08:01:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:14:23.022 08:01:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:14:23.022 08:01:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:14:23.022 08:01:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:14:23.279 08:01:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:14:23.279 08:01:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:14:23.279 08:01:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:14:23.279 08:01:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:14:23.537 08:01:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 83c7bbad-7f30-487c-b287-4062388c79d8 == \8\3\c\7\b\b\a\d\-\7\f\3\0\-\4\8\7\c\-\b\2\8\7\-\4\0\6\2\3\8\8\c\7\9\d\8 ]] 00:14:23.537 08:01:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:14:23.537 08:01:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:14:23.537 08:01:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:14:23.794 08:01:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 069c6388-3ef3-48c7-a104-3e8358c1f480 == \0\6\9\c\6\3\8\8\-\3\e\f\3\-\4\8\c\7\-\a\1\0\4\-\3\e\8\3\5\8\c\1\f\4\8\0 ]] 00:14:23.794 08:01:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 1913874 00:14:23.794 08:01:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 1913874 ']' 00:14:23.794 08:01:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 1913874 00:14:23.794 08:01:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:14:23.794 08:01:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:23.794 08:01:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1913874 00:14:23.794 08:01:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:23.794 08:01:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:23.794 08:01:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1913874' 00:14:23.794 killing process with pid 1913874 00:14:23.794 08:01:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 1913874 00:14:23.794 08:01:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 1913874 00:14:24.365 08:01:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:24.622 08:01:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:14:24.622 08:01:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:14:24.623 08:01:16 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:24.623 08:01:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:14:24.623 08:01:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:24.623 08:01:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:14:24.623 08:01:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:24.623 08:01:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:24.623 rmmod nvme_tcp 00:14:24.623 rmmod nvme_fabrics 00:14:24.623 rmmod nvme_keyring 00:14:24.623 08:01:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:24.623 08:01:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:14:24.623 08:01:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:14:24.623 08:01:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 1912256 ']' 00:14:24.623 08:01:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 1912256 00:14:24.623 08:01:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 1912256 ']' 00:14:24.623 08:01:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 1912256 00:14:24.623 08:01:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:14:24.623 08:01:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:24.623 08:01:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1912256 00:14:24.623 08:01:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:24.623 08:01:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:24.623 08:01:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1912256' 00:14:24.623 killing process with pid 1912256 00:14:24.623 08:01:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 1912256 00:14:24.623 08:01:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 1912256 00:14:24.879 08:01:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:24.879 08:01:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:24.879 08:01:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:24.879 08:01:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:24.879 08:01:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:24.879 08:01:16 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:24.879 08:01:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:24.879 08:01:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:27.409 08:01:18 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:27.409 00:14:27.409 real 0m21.110s 00:14:27.409 user 0m27.499s 00:14:27.409 sys 0m4.131s 00:14:27.409 08:01:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:27.409 08:01:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:27.409 ************************************ 00:14:27.409 END TEST nvmf_ns_masking 00:14:27.409 ************************************ 00:14:27.409 08:01:18 nvmf_tcp -- 
common/autotest_common.sh@1142 -- # return 0 00:14:27.409 08:01:18 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:14:27.409 08:01:18 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:27.409 08:01:18 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:27.409 08:01:18 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:27.409 08:01:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:27.409 ************************************ 00:14:27.409 START TEST nvmf_nvme_cli 00:14:27.409 ************************************ 00:14:27.409 08:01:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:27.409 * Looking for test storage... 00:14:27.409 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:27.409 08:01:18 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:27.409 08:01:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:14:27.409 08:01:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:27.409 08:01:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:27.409 08:01:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:27.409 08:01:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:27.409 08:01:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:27.409 08:01:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:27.409 08:01:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:27.409 08:01:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:27.409 08:01:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:27.409 08:01:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:27.409 08:01:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:27.409 08:01:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:27.409 08:01:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:27.409 08:01:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:27.409 08:01:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:27.409 08:01:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:27.409 08:01:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:27.409 08:01:18 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:27.409 08:01:18 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:27.409 08:01:18 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:27.409 08:01:18 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:27.409 08:01:18 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:27.409 08:01:18 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:27.409 08:01:18 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:14:27.409 08:01:18 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:27.409 08:01:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:14:27.409 08:01:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:27.409 08:01:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:27.409 08:01:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:27.409 08:01:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:27.409 08:01:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:27.409 08:01:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:27.409 08:01:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:27.409 08:01:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:27.409 08:01:18 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:27.409 08:01:18 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:27.409 08:01:18 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:14:27.409 08:01:18 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@16 -- # nvmftestinit 00:14:27.409 08:01:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:27.409 08:01:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:27.409 08:01:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:27.409 08:01:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:27.409 08:01:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:27.409 08:01:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:27.409 08:01:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:27.409 08:01:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:27.409 08:01:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:27.409 08:01:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:27.409 08:01:18 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:14:27.409 08:01:18 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:29.308 08:01:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:29.308 08:01:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:14:29.308 08:01:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:29.308 08:01:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:29.308 08:01:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:29.308 08:01:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:29.308 08:01:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:29.308 08:01:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:14:29.308 08:01:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:29.308 08:01:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:14:29.308 08:01:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:14:29.308 08:01:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:14:29.308 08:01:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:14:29.308 08:01:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:14:29.308 08:01:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:14:29.308 08:01:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:29.308 08:01:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:29.308 08:01:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:29.308 08:01:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:29.308 08:01:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:29.308 08:01:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:29.308 08:01:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:29.308 08:01:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:29.308 08:01:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:29.308 08:01:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:29.308 08:01:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:29.308 08:01:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:29.308 08:01:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:29.308 08:01:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:29.308 08:01:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:29.308 08:01:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:29.308 08:01:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:29.308 08:01:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:29.308 08:01:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:29.308 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:29.308 08:01:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:29.308 08:01:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:29.308 08:01:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:29.308 08:01:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:29.308 08:01:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:29.308 08:01:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:29.308 08:01:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:29.308 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:29.308 08:01:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:29.308 08:01:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:29.308 08:01:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:29.308 08:01:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:29.308 08:01:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:29.308 08:01:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:29.308 08:01:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:29.308 08:01:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:29.308 08:01:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:29.308 08:01:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:29.308 08:01:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:29.308 08:01:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:29.308 08:01:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:29.308 08:01:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:29.308 08:01:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:29.308 08:01:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:29.308 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:29.308 08:01:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:14:29.308 08:01:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:29.308 08:01:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:29.308 08:01:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:29.308 08:01:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:29.308 08:01:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:29.308 08:01:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:29.309 08:01:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:29.309 08:01:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:29.309 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:29.309 08:01:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:29.309 08:01:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:29.309 08:01:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:14:29.309 08:01:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:29.309 08:01:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:29.309 08:01:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:29.309 08:01:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:29.309 08:01:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:29.309 08:01:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:29.309 08:01:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:29.309 08:01:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:29.309 08:01:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:29.309 08:01:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:29.309 08:01:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:29.309 08:01:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:29.309 08:01:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:29.309 08:01:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:29.309 08:01:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:29.309 08:01:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:29.309 08:01:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:29.309 08:01:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:29.309 08:01:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:29.309 08:01:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:29.309 08:01:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:29.309 08:01:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:29.309 08:01:20 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:29.309 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:29.309 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.127 ms 00:14:29.309 00:14:29.309 --- 10.0.0.2 ping statistics --- 00:14:29.309 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:29.309 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:14:29.309 08:01:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:29.309 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:29.309 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:14:29.309 00:14:29.309 --- 10.0.0.1 ping statistics --- 00:14:29.309 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:29.309 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:14:29.309 08:01:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:29.309 08:01:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:14:29.309 08:01:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:29.309 08:01:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:29.309 08:01:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:29.309 08:01:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:29.309 08:01:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:29.309 08:01:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:29.309 08:01:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:29.309 08:01:20 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:14:29.309 08:01:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:29.309 08:01:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:29.309 08:01:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:29.309 08:01:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=1916369 00:14:29.309 08:01:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:29.309 08:01:20 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 1916369 00:14:29.309 08:01:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@829 -- # '[' -z 1916369 ']' 00:14:29.309 08:01:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:29.309 08:01:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:29.309 08:01:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:29.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:29.309 08:01:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:29.309 08:01:20 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:29.309 [2024-07-13 08:01:20.991068] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
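The interface plumbing traced above (nvmf/common.sh's nvmf_tcp_init) is what makes this a "phy" run; a condensed sketch of the commands the trace shows, with the cvl_0_0/cvl_0_1 names coming from this machine's two E810 ports:

    # Put one NIC port in a private network namespace for the target and leave
    # its peer port in the default namespace as the initiator, so NVMe/TCP
    # traffic really crosses between the two physical ports.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2    # the two pings above prove the path before nvmf_tgt starts

nvmf_tgt itself is then launched inside the namespace (the ip netns exec cvl_0_0_ns_spdk prefix on the nvmf_tgt command above), so every target-side command in the rest of the trace carries that NVMF_TARGET_NS_CMD prefix.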
00:14:29.309 [2024-07-13 08:01:20.991142] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:29.309 EAL: No free 2048 kB hugepages reported on node 1 00:14:29.566 [2024-07-13 08:01:21.061627] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:29.566 [2024-07-13 08:01:21.157758] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:29.566 [2024-07-13 08:01:21.157819] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:29.566 [2024-07-13 08:01:21.157843] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:29.566 [2024-07-13 08:01:21.157856] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:29.566 [2024-07-13 08:01:21.157878] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:29.566 [2024-07-13 08:01:21.161890] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:29.566 [2024-07-13 08:01:21.161926] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:29.566 [2024-07-13 08:01:21.161982] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:29.566 [2024-07-13 08:01:21.161986] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:29.566 08:01:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:29.566 08:01:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # return 0 00:14:29.566 08:01:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:29.566 08:01:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:29.566 08:01:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:29.824 08:01:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:29.824 08:01:21 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:29.824 08:01:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.824 08:01:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:29.824 [2024-07-13 08:01:21.306542] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:29.824 08:01:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.824 08:01:21 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:29.824 08:01:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.824 08:01:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:29.824 Malloc0 00:14:29.824 08:01:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.824 08:01:21 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:29.824 08:01:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.824 08:01:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:29.824 Malloc1 00:14:29.824 08:01:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.824 08:01:21 
nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:14:29.824 08:01:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.824 08:01:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:29.824 08:01:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.824 08:01:21 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:29.824 08:01:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.824 08:01:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:29.824 08:01:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.824 08:01:21 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:29.824 08:01:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.824 08:01:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:29.824 08:01:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.824 08:01:21 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:29.824 08:01:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.824 08:01:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:29.824 [2024-07-13 08:01:21.392295] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:29.824 08:01:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.824 08:01:21 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:29.824 08:01:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:29.824 08:01:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:29.824 08:01:21 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:29.824 08:01:21 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:14:29.824 00:14:29.824 Discovery Log Number of Records 2, Generation counter 2 00:14:29.824 =====Discovery Log Entry 0====== 00:14:29.824 trtype: tcp 00:14:29.824 adrfam: ipv4 00:14:29.824 subtype: current discovery subsystem 00:14:29.824 treq: not required 00:14:29.824 portid: 0 00:14:29.824 trsvcid: 4420 00:14:29.824 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:14:29.824 traddr: 10.0.0.2 00:14:29.824 eflags: explicit discovery connections, duplicate discovery information 00:14:29.824 sectype: none 00:14:29.824 =====Discovery Log Entry 1====== 00:14:29.824 trtype: tcp 00:14:29.824 adrfam: ipv4 00:14:29.824 subtype: nvme subsystem 00:14:29.824 treq: not required 00:14:29.824 portid: 0 00:14:29.825 trsvcid: 4420 00:14:29.825 subnqn: nqn.2016-06.io.spdk:cnode1 00:14:29.825 traddr: 10.0.0.2 00:14:29.825 eflags: none 00:14:29.825 sectype: none 00:14:29.825 08:01:21 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:14:29.825 08:01:21 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@31 -- # get_nvme_devs 00:14:29.825 08:01:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:14:29.825 08:01:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:29.825 08:01:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:14:29.825 08:01:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:14:29.825 08:01:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:29.825 08:01:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:14:29.825 08:01:21 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:29.825 08:01:21 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:14:29.825 08:01:21 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:30.756 08:01:22 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:30.756 08:01:22 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:14:30.756 08:01:22 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:30.756 08:01:22 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:14:30.756 08:01:22 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:14:30.756 08:01:22 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:14:32.661 08:01:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:32.661 08:01:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:32.661 08:01:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:32.661 08:01:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:14:32.661 08:01:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:32.661 08:01:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:14:32.661 08:01:24 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:14:32.661 08:01:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:14:32.661 08:01:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:32.661 08:01:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:14:32.661 08:01:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:14:32.661 08:01:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:32.661 08:01:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:14:32.661 08:01:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:32.661 08:01:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:32.661 08:01:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:14:32.661 08:01:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:32.661 08:01:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:32.661 08:01:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:14:32.661 08:01:24 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:32.661 08:01:24 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:14:32.661 /dev/nvme0n1 ]] 00:14:32.661 08:01:24 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:14:32.661 08:01:24 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:14:32.661 08:01:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:14:32.661 08:01:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:32.661 08:01:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:14:32.661 08:01:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:14:32.661 08:01:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:32.661 08:01:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:14:32.661 08:01:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:32.661 08:01:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:32.661 08:01:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:14:32.661 08:01:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:32.661 08:01:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:32.661 08:01:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:14:32.661 08:01:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:32.661 08:01:24 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:14:32.661 08:01:24 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:32.661 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:32.661 08:01:24 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:32.661 08:01:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:14:32.661 08:01:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:32.661 08:01:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:32.661 08:01:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:32.661 08:01:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:32.661 08:01:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:14:32.661 08:01:24 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:14:32.661 08:01:24 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:32.661 08:01:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:32.661 08:01:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:32.661 08:01:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:32.661 08:01:24 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:32.661 08:01:24 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:14:32.661 08:01:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:32.661 08:01:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:14:32.661 08:01:24 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:32.661 08:01:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:14:32.661 08:01:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:32.661 08:01:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:32.661 rmmod nvme_tcp 00:14:32.661 rmmod nvme_fabrics 00:14:32.661 rmmod nvme_keyring 00:14:32.918 08:01:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:32.918 08:01:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:14:32.918 08:01:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:14:32.918 08:01:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 1916369 ']' 00:14:32.918 08:01:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 1916369 00:14:32.918 08:01:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@948 -- # '[' -z 1916369 ']' 00:14:32.918 08:01:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # kill -0 1916369 00:14:32.918 08:01:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # uname 00:14:32.918 08:01:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:32.918 08:01:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1916369 00:14:32.918 08:01:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:32.918 08:01:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:32.918 08:01:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1916369' 00:14:32.918 killing process with pid 1916369 00:14:32.918 08:01:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@967 -- # kill 1916369 00:14:32.918 08:01:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # wait 1916369 00:14:33.176 08:01:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:33.176 08:01:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:33.176 08:01:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:33.176 08:01:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:33.176 08:01:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:33.176 08:01:24 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:33.176 08:01:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:33.176 08:01:24 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:35.073 08:01:26 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:35.073 00:14:35.073 real 0m8.132s 00:14:35.073 user 0m14.710s 00:14:35.073 sys 0m2.222s 00:14:35.073 08:01:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:35.073 08:01:26 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:35.073 ************************************ 00:14:35.073 END TEST nvmf_nvme_cli 00:14:35.073 ************************************ 00:14:35.073 08:01:26 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:35.073 08:01:26 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:14:35.073 08:01:26 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:35.073 08:01:26 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:35.073 08:01:26 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:35.073 08:01:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:35.331 ************************************ 00:14:35.331 START TEST nvmf_vfio_user 00:14:35.331 ************************************ 00:14:35.331 08:01:26 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:35.331 * Looking for test storage... 00:14:35.331 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:35.331 08:01:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:35.331 08:01:26 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:14:35.331 08:01:26 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:35.331 08:01:26 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:35.331 08:01:26 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:35.331 08:01:26 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:35.331 08:01:26 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:35.331 08:01:26 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:35.331 08:01:26 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:35.331 08:01:26 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:35.331 08:01:26 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:35.331 08:01:26 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:35.331 08:01:26 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:35.331 08:01:26 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:35.331 08:01:26 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:35.331 08:01:26 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:35.331 08:01:26 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:35.331 08:01:26 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:35.331 08:01:26 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:35.331 08:01:26 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:35.331 08:01:26 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:35.331 08:01:26 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:35.331 08:01:26 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:35.331 08:01:26 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:35.331 08:01:26 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:35.331 08:01:26 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:14:35.331 08:01:26 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:35.331 08:01:26 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:14:35.331 08:01:26 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:35.331 08:01:26 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:35.331 08:01:26 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:35.331 08:01:26 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:35.331 08:01:26 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:35.331 08:01:26 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:35.331 08:01:26 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:35.331 08:01:26 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:35.331 08:01:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:35.331 08:01:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:35.331 08:01:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:14:35.331 
08:01:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:35.331 08:01:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:14:35.331 08:01:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:14:35.331 08:01:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:14:35.331 08:01:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:14:35.331 08:01:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:14:35.331 08:01:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:14:35.331 08:01:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1917161 00:14:35.331 08:01:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:14:35.331 08:01:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1917161' 00:14:35.331 Process pid: 1917161 00:14:35.331 08:01:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:35.331 08:01:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1917161 00:14:35.331 08:01:26 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 1917161 ']' 00:14:35.331 08:01:26 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:35.331 08:01:26 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:35.331 08:01:26 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:35.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:35.331 08:01:26 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:35.331 08:01:26 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:35.331 [2024-07-13 08:01:26.935582] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:14:35.331 [2024-07-13 08:01:26.935671] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:35.331 EAL: No free 2048 kB hugepages reported on node 1 00:14:35.331 [2024-07-13 08:01:27.000683] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:35.589 [2024-07-13 08:01:27.093793] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:35.589 [2024-07-13 08:01:27.093850] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:35.589 [2024-07-13 08:01:27.093885] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:35.589 [2024-07-13 08:01:27.093901] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:35.589 [2024-07-13 08:01:27.093913] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
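Condensed from the trace above, the target bring-up amounts to: pick the RPC client, wipe any stale vfio-user sockets, start nvmf_tgt pinned to four cores with all tracepoint groups enabled, install a cleanup trap, and block until the app answers RPCs. A sketch under those assumptions (the rpc_get_methods polling loop is an approximation of the harness's waitforlisten helper):

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    export TEST_TRANSPORT=VFIOUSER
    rm -rf /var/run/vfio-user

    # -i 0: shared-memory id, -e 0xFFFF: all tracepoint groups, -m: cores 0-3
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF -m '[0,1,2,3]' &
    nvmfpid=$!
    trap 'kill "$nvmfpid"; exit 1' SIGINT SIGTERM EXIT  # harness uses killprocess

    # Poll until the UNIX-domain RPC socket (/var/tmp/spdk.sock) accepts requests
    until "$rpc_py" -t 1 rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done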
00:14:35.589 [2024-07-13 08:01:27.097890] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:35.589 [2024-07-13 08:01:27.097937] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:35.589 [2024-07-13 08:01:27.098015] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:35.589 [2024-07-13 08:01:27.098019] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:35.589 08:01:27 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:35.589 08:01:27 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:14:35.589 08:01:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:36.521 08:01:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:14:36.785 08:01:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:36.785 08:01:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:36.785 08:01:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:36.785 08:01:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:36.785 08:01:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:37.042 Malloc1 00:14:37.042 08:01:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:37.301 08:01:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:37.557 08:01:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:14:37.814 08:01:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:37.814 08:01:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:37.814 08:01:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:38.070 Malloc2 00:14:38.070 08:01:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:14:38.327 08:01:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:14:38.584 08:01:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:14:38.841 08:01:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:14:38.841 08:01:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:14:38.841 08:01:30 
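The RPC sequence just traced builds two single-namespace subsystems, each listening on its own vfio-user socket directory. Collapsed into a loop (all commands and flags exactly as traced; $rpc_py and NUM_DEVICES=2 as set earlier in the script):

    $rpc_py nvmf_create_transport -t VFIOUSER
    for i in $(seq 1 "$NUM_DEVICES"); do
        dir=/var/run/vfio-user/domain/vfio-user$i/$i
        mkdir -p "$dir"
        $rpc_py bdev_malloc_create 64 512 -b "Malloc$i"    # 64 MiB bdev, 512 B blocks
        $rpc_py nvmf_create_subsystem "nqn.2019-07.io.spdk:cnode$i" -a -s "SPDK$i"
        $rpc_py nvmf_subsystem_add_ns "nqn.2019-07.io.spdk:cnode$i" "Malloc$i"
        $rpc_py nvmf_subsystem_add_listener "nqn.2019-07.io.spdk:cnode$i" \
            -t VFIOUSER -a "$dir" -s 0                     # socket dir as traddr
    done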
nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:38.841 08:01:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:38.841 08:01:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:14:38.841 08:01:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:38.841 [2024-07-13 08:01:30.523724] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:14:38.841 [2024-07-13 08:01:30.523764] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1917585 ] 00:14:38.841 EAL: No free 2048 kB hugepages reported on node 1 00:14:38.841 [2024-07-13 08:01:30.558222] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:14:38.841 [2024-07-13 08:01:30.564336] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:38.841 [2024-07-13 08:01:30.564364] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f350e7a4000 00:14:38.841 [2024-07-13 08:01:30.565321] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:38.841 [2024-07-13 08:01:30.566314] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:38.841 [2024-07-13 08:01:30.567319] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:38.841 [2024-07-13 08:01:30.568326] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:38.841 [2024-07-13 08:01:30.569330] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:38.841 [2024-07-13 08:01:30.570333] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:38.841 [2024-07-13 08:01:30.571340] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:38.841 [2024-07-13 08:01:30.572354] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:38.841 [2024-07-13 08:01:30.573345] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:38.841 [2024-07-13 08:01:30.573369] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f350d558000 00:14:39.100 [2024-07-13 08:01:30.574557] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:39.100 [2024-07-13 08:01:30.588648] vfio_user_pci.c: 
386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:14:39.100 [2024-07-13 08:01:30.588688] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:14:39.100 [2024-07-13 08:01:30.597489] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:39.100 [2024-07-13 08:01:30.597546] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:14:39.100 [2024-07-13 08:01:30.597654] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:14:39.100 [2024-07-13 08:01:30.597687] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:14:39.100 [2024-07-13 08:01:30.597698] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:14:39.100 [2024-07-13 08:01:30.598480] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:14:39.100 [2024-07-13 08:01:30.598510] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:14:39.100 [2024-07-13 08:01:30.598522] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:14:39.100 [2024-07-13 08:01:30.599483] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:39.100 [2024-07-13 08:01:30.599501] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:14:39.100 [2024-07-13 08:01:30.599514] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:14:39.100 [2024-07-13 08:01:30.600491] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:14:39.100 [2024-07-13 08:01:30.600509] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:39.100 [2024-07-13 08:01:30.601492] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:14:39.100 [2024-07-13 08:01:30.601512] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:14:39.100 [2024-07-13 08:01:30.601521] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:14:39.100 [2024-07-13 08:01:30.601532] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:39.100 [2024-07-13 08:01:30.601646] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:14:39.100 [2024-07-13 08:01:30.601655] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:39.100 [2024-07-13 08:01:30.601664] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:14:39.100 [2024-07-13 08:01:30.602499] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:14:39.100 [2024-07-13 08:01:30.603502] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:14:39.100 [2024-07-13 08:01:30.604507] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:39.100 [2024-07-13 08:01:30.605500] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:39.100 [2024-07-13 08:01:30.605591] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:39.100 [2024-07-13 08:01:30.606519] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:14:39.100 [2024-07-13 08:01:30.606537] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:39.100 [2024-07-13 08:01:30.606546] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:14:39.100 [2024-07-13 08:01:30.606570] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:14:39.100 [2024-07-13 08:01:30.606583] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:14:39.100 [2024-07-13 08:01:30.606610] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:39.100 [2024-07-13 08:01:30.606620] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:39.100 [2024-07-13 08:01:30.606641] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:39.100 [2024-07-13 08:01:30.606686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:14:39.100 [2024-07-13 08:01:30.606704] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:14:39.100 [2024-07-13 08:01:30.606716] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:14:39.100 [2024-07-13 08:01:30.606724] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:14:39.100 [2024-07-13 08:01:30.606731] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:14:39.100 [2024-07-13 08:01:30.606739] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:14:39.100 [2024-07-13 08:01:30.606747] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:14:39.101 [2024-07-13 08:01:30.606755] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:14:39.101 [2024-07-13 08:01:30.606768] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:14:39.101 [2024-07-13 08:01:30.606789] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:14:39.101 [2024-07-13 08:01:30.606804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:14:39.101 [2024-07-13 08:01:30.606826] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:39.101 [2024-07-13 08:01:30.606839] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:39.101 [2024-07-13 08:01:30.606873] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:39.101 [2024-07-13 08:01:30.606886] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:39.101 [2024-07-13 08:01:30.606895] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:14:39.101 [2024-07-13 08:01:30.606916] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:39.101 [2024-07-13 08:01:30.606930] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:14:39.101 [2024-07-13 08:01:30.606942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:14:39.101 [2024-07-13 08:01:30.606954] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:14:39.101 [2024-07-13 08:01:30.606962] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:39.101 [2024-07-13 08:01:30.606973] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:14:39.101 [2024-07-13 08:01:30.606984] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:14:39.101 [2024-07-13 08:01:30.606997] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:39.101 [2024-07-13 08:01:30.607011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:14:39.101 [2024-07-13 08:01:30.607073] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:14:39.101 [2024-07-13 08:01:30.607088] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:14:39.101 [2024-07-13 08:01:30.607101] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:14:39.101 [2024-07-13 08:01:30.607109] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:14:39.101 [2024-07-13 08:01:30.607119] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:14:39.101 [2024-07-13 08:01:30.607133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:14:39.101 [2024-07-13 08:01:30.607177] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:14:39.101 [2024-07-13 08:01:30.607199] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:14:39.101 [2024-07-13 08:01:30.607215] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:14:39.101 [2024-07-13 08:01:30.607230] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:39.101 [2024-07-13 08:01:30.607238] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:39.101 [2024-07-13 08:01:30.607247] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:39.101 [2024-07-13 08:01:30.607272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:14:39.101 [2024-07-13 08:01:30.607294] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:39.101 [2024-07-13 08:01:30.607308] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:39.101 [2024-07-13 08:01:30.607320] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:39.101 [2024-07-13 08:01:30.607327] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:39.101 [2024-07-13 08:01:30.607337] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:39.101 [2024-07-13 08:01:30.607350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:14:39.101 [2024-07-13 08:01:30.607364] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:39.101 [2024-07-13 08:01:30.607375] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 
00:14:39.101 [2024-07-13 08:01:30.607388] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:14:39.101 [2024-07-13 08:01:30.607399] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:14:39.101 [2024-07-13 08:01:30.607407] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:39.101 [2024-07-13 08:01:30.607416] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:14:39.101 [2024-07-13 08:01:30.607425] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:14:39.101 [2024-07-13 08:01:30.607432] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:14:39.101 [2024-07-13 08:01:30.607441] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:14:39.101 [2024-07-13 08:01:30.607468] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:39.101 [2024-07-13 08:01:30.607486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:14:39.101 [2024-07-13 08:01:30.607504] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:39.101 [2024-07-13 08:01:30.607515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:39.101 [2024-07-13 08:01:30.607531] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:39.101 [2024-07-13 08:01:30.607548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:14:39.101 [2024-07-13 08:01:30.607563] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:39.101 [2024-07-13 08:01:30.607578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:39.101 [2024-07-13 08:01:30.607600] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:39.101 [2024-07-13 08:01:30.607609] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:39.101 [2024-07-13 08:01:30.607616] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:39.101 [2024-07-13 08:01:30.607621] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:39.101 [2024-07-13 08:01:30.607630] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:14:39.101 [2024-07-13 08:01:30.607641] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:14:39.101 
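The register writes traced earlier in this identify run (ASQ at offset 0x28, ACQ at 0x30, AQA at 0x24, then CC at 0x14) are the standard NVMe controller-enable handshake, here carried over the vfio-user transport. The CC value 0x460001 decodes, per the NVMe register bit layout, to EN=1, IOSQES=6 and IOCQES=4, i.e. 2^6 = 64-byte submission entries and 2^4 = 16-byte completion entries, matching the queue entry sizes reported in the identify data below. A quick decode:

    # CC bit layout per NVMe spec: EN = bit 0, IOSQES = bits 19:16,
    # IOCQES = bits 23:20 (value taken from the set_reg_4 trace above)
    cc=0x460001
    printf 'EN=%d IOSQES=%d IOCQES=%d\n' \
        $(( cc & 1 )) $(( (cc >> 16) & 0xf )) $(( (cc >> 20) & 0xf ))
    # -> EN=1 IOSQES=6 IOCQES=4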
[2024-07-13 08:01:30.607649] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:14:39.101 [2024-07-13 08:01:30.607657] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:39.101 [2024-07-13 08:01:30.607668] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:39.101 [2024-07-13 08:01:30.607675] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:39.101 [2024-07-13 08:01:30.607683] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:39.101 [2024-07-13 08:01:30.607695] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:39.101 [2024-07-13 08:01:30.607702] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:39.101 [2024-07-13 08:01:30.607711] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:14:39.101 [2024-07-13 08:01:30.607722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:14:39.101 [2024-07-13 08:01:30.607740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:14:39.101 [2024-07-13 08:01:30.607757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:39.101 [2024-07-13 08:01:30.607768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:39.101 ===================================================== 00:14:39.101 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:39.101 ===================================================== 00:14:39.101 Controller Capabilities/Features 00:14:39.101 ================================ 00:14:39.101 Vendor ID: 4e58 00:14:39.101 Subsystem Vendor ID: 4e58 00:14:39.101 Serial Number: SPDK1 00:14:39.101 Model Number: SPDK bdev Controller 00:14:39.101 Firmware Version: 24.09 00:14:39.101 Recommended Arb Burst: 6 00:14:39.101 IEEE OUI Identifier: 8d 6b 50 00:14:39.101 Multi-path I/O 00:14:39.101 May have multiple subsystem ports: Yes 00:14:39.101 May have multiple controllers: Yes 00:14:39.101 Associated with SR-IOV VF: No 00:14:39.101 Max Data Transfer Size: 131072 00:14:39.101 Max Number of Namespaces: 32 00:14:39.101 Max Number of I/O Queues: 127 00:14:39.101 NVMe Specification Version (VS): 1.3 00:14:39.101 NVMe Specification Version (Identify): 1.3 00:14:39.101 Maximum Queue Entries: 256 00:14:39.101 Contiguous Queues Required: Yes 00:14:39.101 Arbitration Mechanisms Supported 00:14:39.101 Weighted Round Robin: Not Supported 00:14:39.101 Vendor Specific: Not Supported 00:14:39.101 Reset Timeout: 15000 ms 00:14:39.101 Doorbell Stride: 4 bytes 00:14:39.101 NVM Subsystem Reset: Not Supported 00:14:39.101 Command Sets Supported 00:14:39.101 NVM Command Set: Supported 00:14:39.101 Boot Partition: Not Supported 00:14:39.101 Memory Page Size Minimum: 4096 bytes 00:14:39.102 Memory Page Size Maximum: 4096 bytes 00:14:39.102 Persistent Memory Region: Not Supported 
00:14:39.102 Optional Asynchronous Events Supported 00:14:39.102 Namespace Attribute Notices: Supported 00:14:39.102 Firmware Activation Notices: Not Supported 00:14:39.102 ANA Change Notices: Not Supported 00:14:39.102 PLE Aggregate Log Change Notices: Not Supported 00:14:39.102 LBA Status Info Alert Notices: Not Supported 00:14:39.102 EGE Aggregate Log Change Notices: Not Supported 00:14:39.102 Normal NVM Subsystem Shutdown event: Not Supported 00:14:39.102 Zone Descriptor Change Notices: Not Supported 00:14:39.102 Discovery Log Change Notices: Not Supported 00:14:39.102 Controller Attributes 00:14:39.102 128-bit Host Identifier: Supported 00:14:39.102 Non-Operational Permissive Mode: Not Supported 00:14:39.102 NVM Sets: Not Supported 00:14:39.102 Read Recovery Levels: Not Supported 00:14:39.102 Endurance Groups: Not Supported 00:14:39.102 Predictable Latency Mode: Not Supported 00:14:39.102 Traffic Based Keep ALive: Not Supported 00:14:39.102 Namespace Granularity: Not Supported 00:14:39.102 SQ Associations: Not Supported 00:14:39.102 UUID List: Not Supported 00:14:39.102 Multi-Domain Subsystem: Not Supported 00:14:39.102 Fixed Capacity Management: Not Supported 00:14:39.102 Variable Capacity Management: Not Supported 00:14:39.102 Delete Endurance Group: Not Supported 00:14:39.102 Delete NVM Set: Not Supported 00:14:39.102 Extended LBA Formats Supported: Not Supported 00:14:39.102 Flexible Data Placement Supported: Not Supported 00:14:39.102 00:14:39.102 Controller Memory Buffer Support 00:14:39.102 ================================ 00:14:39.102 Supported: No 00:14:39.102 00:14:39.102 Persistent Memory Region Support 00:14:39.102 ================================ 00:14:39.102 Supported: No 00:14:39.102 00:14:39.102 Admin Command Set Attributes 00:14:39.102 ============================ 00:14:39.102 Security Send/Receive: Not Supported 00:14:39.102 Format NVM: Not Supported 00:14:39.102 Firmware Activate/Download: Not Supported 00:14:39.102 Namespace Management: Not Supported 00:14:39.102 Device Self-Test: Not Supported 00:14:39.102 Directives: Not Supported 00:14:39.102 NVMe-MI: Not Supported 00:14:39.102 Virtualization Management: Not Supported 00:14:39.102 Doorbell Buffer Config: Not Supported 00:14:39.102 Get LBA Status Capability: Not Supported 00:14:39.102 Command & Feature Lockdown Capability: Not Supported 00:14:39.102 Abort Command Limit: 4 00:14:39.102 Async Event Request Limit: 4 00:14:39.102 Number of Firmware Slots: N/A 00:14:39.102 Firmware Slot 1 Read-Only: N/A 00:14:39.102 Firmware Activation Without Reset: N/A 00:14:39.102 Multiple Update Detection Support: N/A 00:14:39.102 Firmware Update Granularity: No Information Provided 00:14:39.102 Per-Namespace SMART Log: No 00:14:39.102 Asymmetric Namespace Access Log Page: Not Supported 00:14:39.102 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:14:39.102 Command Effects Log Page: Supported 00:14:39.102 Get Log Page Extended Data: Supported 00:14:39.102 Telemetry Log Pages: Not Supported 00:14:39.102 Persistent Event Log Pages: Not Supported 00:14:39.102 Supported Log Pages Log Page: May Support 00:14:39.102 Commands Supported & Effects Log Page: Not Supported 00:14:39.102 Feature Identifiers & Effects Log Page:May Support 00:14:39.102 NVMe-MI Commands & Effects Log Page: May Support 00:14:39.102 Data Area 4 for Telemetry Log: Not Supported 00:14:39.102 Error Log Page Entries Supported: 128 00:14:39.102 Keep Alive: Supported 00:14:39.102 Keep Alive Granularity: 10000 ms 00:14:39.102 00:14:39.102 NVM Command Set Attributes 
00:14:39.102 ========================== 00:14:39.102 Submission Queue Entry Size 00:14:39.102 Max: 64 00:14:39.102 Min: 64 00:14:39.102 Completion Queue Entry Size 00:14:39.102 Max: 16 00:14:39.102 Min: 16 00:14:39.102 Number of Namespaces: 32 00:14:39.102 Compare Command: Supported 00:14:39.102 Write Uncorrectable Command: Not Supported 00:14:39.102 Dataset Management Command: Supported 00:14:39.102 Write Zeroes Command: Supported 00:14:39.102 Set Features Save Field: Not Supported 00:14:39.102 Reservations: Not Supported 00:14:39.102 Timestamp: Not Supported 00:14:39.102 Copy: Supported 00:14:39.102 Volatile Write Cache: Present 00:14:39.102 Atomic Write Unit (Normal): 1 00:14:39.102 Atomic Write Unit (PFail): 1 00:14:39.102 Atomic Compare & Write Unit: 1 00:14:39.102 Fused Compare & Write: Supported 00:14:39.102 Scatter-Gather List 00:14:39.102 SGL Command Set: Supported (Dword aligned) 00:14:39.102 SGL Keyed: Not Supported 00:14:39.102 SGL Bit Bucket Descriptor: Not Supported 00:14:39.102 SGL Metadata Pointer: Not Supported 00:14:39.102 Oversized SGL: Not Supported 00:14:39.102 SGL Metadata Address: Not Supported 00:14:39.102 SGL Offset: Not Supported 00:14:39.102 Transport SGL Data Block: Not Supported 00:14:39.102 Replay Protected Memory Block: Not Supported 00:14:39.102 00:14:39.102 Firmware Slot Information 00:14:39.102 ========================= 00:14:39.102 Active slot: 1 00:14:39.102 Slot 1 Firmware Revision: 24.09 00:14:39.102 00:14:39.102 00:14:39.102 Commands Supported and Effects 00:14:39.102 ============================== 00:14:39.102 Admin Commands 00:14:39.102 -------------- 00:14:39.102 Get Log Page (02h): Supported 00:14:39.102 Identify (06h): Supported 00:14:39.102 Abort (08h): Supported 00:14:39.102 Set Features (09h): Supported 00:14:39.102 Get Features (0Ah): Supported 00:14:39.102 Asynchronous Event Request (0Ch): Supported 00:14:39.102 Keep Alive (18h): Supported 00:14:39.102 I/O Commands 00:14:39.102 ------------ 00:14:39.102 Flush (00h): Supported LBA-Change 00:14:39.102 Write (01h): Supported LBA-Change 00:14:39.102 Read (02h): Supported 00:14:39.102 Compare (05h): Supported 00:14:39.102 Write Zeroes (08h): Supported LBA-Change 00:14:39.102 Dataset Management (09h): Supported LBA-Change 00:14:39.102 Copy (19h): Supported LBA-Change 00:14:39.102 00:14:39.102 Error Log 00:14:39.102 ========= 00:14:39.102 00:14:39.102 Arbitration 00:14:39.102 =========== 00:14:39.102 Arbitration Burst: 1 00:14:39.102 00:14:39.102 Power Management 00:14:39.102 ================ 00:14:39.102 Number of Power States: 1 00:14:39.102 Current Power State: Power State #0 00:14:39.102 Power State #0: 00:14:39.102 Max Power: 0.00 W 00:14:39.102 Non-Operational State: Operational 00:14:39.102 Entry Latency: Not Reported 00:14:39.102 Exit Latency: Not Reported 00:14:39.102 Relative Read Throughput: 0 00:14:39.102 Relative Read Latency: 0 00:14:39.102 Relative Write Throughput: 0 00:14:39.102 Relative Write Latency: 0 00:14:39.102 Idle Power: Not Reported 00:14:39.102 Active Power: Not Reported 00:14:39.102 Non-Operational Permissive Mode: Not Supported 00:14:39.102 00:14:39.102 Health Information 00:14:39.102 ================== 00:14:39.102 Critical Warnings: 00:14:39.102 Available Spare Space: OK 00:14:39.102 Temperature: OK 00:14:39.102 Device Reliability: OK 00:14:39.102 Read Only: No 00:14:39.102 Volatile Memory Backup: OK 00:14:39.102 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:39.102 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:39.102 Available Spare: 0% 00:14:39.102 
Available Spare Threshold: 0% 00:14:39.102 Life Percentage Used: 0% 00:14:39.102 Data Units Read: 0 00:14:39.102 Data Units Written: 0 00:14:39.103 Host Read Commands: 0 00:14:39.103 Host Write Commands: 0 00:14:39.103 Controller Busy Time: 0 minutes 00:14:39.103 Power Cycles: 0 00:14:39.103 Power On Hours: 0 hours 00:14:39.103 Unsafe Shutdowns: 0 00:14:39.103 Unrecoverable Media Errors: 0 00:14:39.103 Lifetime Error Log Entries: 0 00:14:39.103 Warning Temperature Time: 0 minutes 00:14:39.103 Critical Temperature Time: 0 minutes 00:14:39.103 [2024-07-13 08:01:30.607929] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:14:39.102 [2024-07-13 08:01:30.607946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:14:39.102 [2024-07-13 08:01:30.607997] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:14:39.102 [2024-07-13 08:01:30.608017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:39.102 [2024-07-13 08:01:30.608029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:39.102 [2024-07-13 08:01:30.608039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:39.102 [2024-07-13 08:01:30.608049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:39.102 [2024-07-13 08:01:30.608530] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:39.102 [2024-07-13 08:01:30.608554] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:14:39.102 [2024-07-13 08:01:30.609528] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:39.102 [2024-07-13 08:01:30.609604] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:14:39.102 [2024-07-13 08:01:30.609619] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:14:39.102 [2024-07-13 08:01:30.610542] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:14:39.102 [2024-07-13 08:01:30.610566] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:14:39.102 [2024-07-13 08:01:30.610620] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:14:39.102 [2024-07-13 08:01:30.612583] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:39.102 00:14:39.103 Number of Queues 00:14:39.103 ================ 00:14:39.103 Number of I/O Submission Queues: 127 00:14:39.103 Number of I/O Completion Queues: 127 00:14:39.103 00:14:39.103 Active Namespaces 00:14:39.103 ================= 00:14:39.103 Namespace ID:1 00:14:39.103 Error Recovery Timeout: Unlimited 00:14:39.103 Command 
Set Identifier: NVM (00h) 00:14:39.103 Deallocate: Supported 00:14:39.103 Deallocated/Unwritten Error: Not Supported 00:14:39.103 Deallocated Read Value: Unknown 00:14:39.103 Deallocate in Write Zeroes: Not Supported 00:14:39.103 Deallocated Guard Field: 0xFFFF 00:14:39.103 Flush: Supported 00:14:39.103 Reservation: Supported 00:14:39.103 Namespace Sharing Capabilities: Multiple Controllers 00:14:39.103 Size (in LBAs): 131072 (0GiB) 00:14:39.103 Capacity (in LBAs): 131072 (0GiB) 00:14:39.103 Utilization (in LBAs): 131072 (0GiB) 00:14:39.103 NGUID: DD421394E8D94743ADFE86FB1BB410A7 00:14:39.103 UUID: dd421394-e8d9-4743-adfe-86fb1bb410a7 00:14:39.103 Thin Provisioning: Not Supported 00:14:39.103 Per-NS Atomic Units: Yes 00:14:39.103 Atomic Boundary Size (Normal): 0 00:14:39.103 Atomic Boundary Size (PFail): 0 00:14:39.103 Atomic Boundary Offset: 0 00:14:39.103 Maximum Single Source Range Length: 65535 00:14:39.103 Maximum Copy Length: 65535 00:14:39.103 Maximum Source Range Count: 1 00:14:39.103 NGUID/EUI64 Never Reused: No 00:14:39.103 Namespace Write Protected: No 00:14:39.103 Number of LBA Formats: 1 00:14:39.103 Current LBA Format: LBA Format #00 00:14:39.103 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:39.103 00:14:39.103 08:01:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:14:39.103 EAL: No free 2048 kB hugepages reported on node 1 00:14:39.360 [2024-07-13 08:01:30.846713] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:44.613 Initializing NVMe Controllers 00:14:44.614 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:44.614 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:14:44.614 Initialization complete. Launching workers. 00:14:44.614 ======================================================== 00:14:44.614 Latency(us) 00:14:44.614 Device Information : IOPS MiB/s Average min max 00:14:44.614 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 33900.72 132.42 3775.01 1160.00 9309.74 00:14:44.614 ======================================================== 00:14:44.614 Total : 33900.72 132.42 3775.01 1160.00 9309.74 00:14:44.614 00:14:44.614 [2024-07-13 08:01:35.866324] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:44.614 08:01:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:14:44.614 EAL: No free 2048 kB hugepages reported on node 1 00:14:44.614 [2024-07-13 08:01:36.113516] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:49.866 Initializing NVMe Controllers 00:14:49.866 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:49.866 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:14:49.866 Initialization complete. Launching workers. 
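Every example binary in this block addresses the controller the same way: a -r transport string naming the vfio-user socket directory plus the subsystem NQN. Trimmed to that pattern (paths shortened to the binary names; flags copied from the invocations above; -g corresponds to the --single-file-segments EAL option visible in the parameter dump):

    tr="trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1"

    spdk_nvme_identify -r "$tr" -g -L nvme -L nvme_vfio -L vfio_pci
    # 4 KiB reads, then writes: queue depth 128, 5 s, worker on core 1 (-c 0x2)
    spdk_nvme_perf -r "$tr" -s 256 -g -q 128 -o 4096 -w read  -t 5 -c 0x2
    spdk_nvme_perf -r "$tr" -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2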
00:14:49.866 ======================================================== 00:14:49.866 Latency(us) 00:14:49.866 Device Information : IOPS MiB/s Average min max 00:14:49.866 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16050.90 62.70 7979.86 6956.42 11983.08 00:14:49.866 ======================================================== 00:14:49.866 Total : 16050.90 62.70 7979.86 6956.42 11983.08 00:14:49.866 00:14:49.866 [2024-07-13 08:01:41.156313] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:49.866 08:01:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:14:49.866 EAL: No free 2048 kB hugepages reported on node 1 00:14:49.866 [2024-07-13 08:01:41.358394] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:55.132 [2024-07-13 08:01:46.433202] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:55.132 Initializing NVMe Controllers 00:14:55.132 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:55.133 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:55.133 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:14:55.133 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:14:55.133 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:14:55.133 Initialization complete. Launching workers. 00:14:55.133 Starting thread on core 2 00:14:55.133 Starting thread on core 3 00:14:55.133 Starting thread on core 1 00:14:55.133 08:01:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:14:55.133 EAL: No free 2048 kB hugepages reported on node 1 00:14:55.133 [2024-07-13 08:01:46.745307] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:59.357 [2024-07-13 08:01:50.409178] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:59.357 Initializing NVMe Controllers 00:14:59.357 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:59.357 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:59.357 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:14:59.357 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:14:59.357 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:14:59.357 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:14:59.357 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:14:59.357 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:14:59.357 Initialization complete. Launching workers. 
00:14:59.357 Starting thread on core 1 with urgent priority queue 00:14:59.357 Starting thread on core 2 with urgent priority queue 00:14:59.357 Starting thread on core 3 with urgent priority queue 00:14:59.357 Starting thread on core 0 with urgent priority queue 00:14:59.357 SPDK bdev Controller (SPDK1 ) core 0: 2875.67 IO/s 34.77 secs/100000 ios 00:14:59.357 SPDK bdev Controller (SPDK1 ) core 1: 3364.33 IO/s 29.72 secs/100000 ios 00:14:59.357 SPDK bdev Controller (SPDK1 ) core 2: 3126.33 IO/s 31.99 secs/100000 ios 00:14:59.357 SPDK bdev Controller (SPDK1 ) core 3: 3368.33 IO/s 29.69 secs/100000 ios 00:14:59.357 ======================================================== 00:14:59.357 00:14:59.357 08:01:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:14:59.357 EAL: No free 2048 kB hugepages reported on node 1 00:14:59.357 [2024-07-13 08:01:50.702360] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:59.357 Initializing NVMe Controllers 00:14:59.357 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:59.357 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:59.357 Namespace ID: 1 size: 0GB 00:14:59.357 Initialization complete. 00:14:59.357 INFO: using host memory buffer for IO 00:14:59.357 Hello world! 00:14:59.357 [2024-07-13 08:01:50.738974] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:59.357 08:01:50 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:14:59.357 EAL: No free 2048 kB hugepages reported on node 1 00:14:59.357 [2024-07-13 08:01:51.026488] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:00.727 Initializing NVMe Controllers 00:15:00.727 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:00.727 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:00.727 Initialization complete. Launching workers. 
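The remaining functional examples reuse the same $tr transport string from the sketch above and only vary the workload knobs (flags as invoked in the trace; binary paths again trimmed):

    reconnect   -r "$tr" -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE  # cores 1-3
    arbitration -r "$tr" -t 3 -d 256 -g
    hello_world -r "$tr" -d 256 -g
    overhead    -r "$tr" -o 4096 -t 1 -H -g -d 256   # -H: print latency histogram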
00:15:00.727 submit (in ns) avg, min, max = 9115.1, 3546.7, 4019172.2 00:15:00.727 complete (in ns) avg, min, max = 23543.8, 2072.2, 4019540.0 00:15:00.727 00:15:00.727 Submit histogram 00:15:00.727 ================ 00:15:00.727 Range in us Cumulative Count 00:15:00.727 3.532 - 3.556: 0.0601% ( 8) 00:15:00.727 3.556 - 3.579: 1.0292% ( 129) 00:15:00.727 3.579 - 3.603: 2.8097% ( 237) 00:15:00.727 3.603 - 3.627: 7.6703% ( 647) 00:15:00.727 3.627 - 3.650: 15.3332% ( 1020) 00:15:00.727 3.650 - 3.674: 25.4226% ( 1343) 00:15:00.727 3.674 - 3.698: 34.9561% ( 1269) 00:15:00.727 3.698 - 3.721: 43.1748% ( 1094) 00:15:00.727 3.721 - 3.745: 49.1473% ( 795) 00:15:00.727 3.745 - 3.769: 54.2859% ( 684) 00:15:00.727 3.769 - 3.793: 58.3728% ( 544) 00:15:00.727 3.793 - 3.816: 61.9638% ( 478) 00:15:00.727 3.816 - 3.840: 65.1491% ( 424) 00:15:00.727 3.840 - 3.864: 68.4246% ( 436) 00:15:00.727 3.864 - 3.887: 71.9781% ( 473) 00:15:00.727 3.887 - 3.911: 76.0574% ( 543) 00:15:00.727 3.911 - 3.935: 80.4823% ( 589) 00:15:00.727 3.935 - 3.959: 83.5174% ( 404) 00:15:00.727 3.959 - 3.982: 85.7111% ( 292) 00:15:00.727 3.982 - 4.006: 87.5291% ( 242) 00:15:00.727 4.006 - 4.030: 88.9415% ( 188) 00:15:00.727 4.030 - 4.053: 90.1360% ( 159) 00:15:00.727 4.053 - 4.077: 91.0901% ( 127) 00:15:00.727 4.077 - 4.101: 91.8939% ( 107) 00:15:00.727 4.101 - 4.124: 92.6302% ( 98) 00:15:00.727 4.124 - 4.148: 93.4190% ( 105) 00:15:00.727 4.148 - 4.172: 93.9298% ( 68) 00:15:00.727 4.172 - 4.196: 94.3656% ( 58) 00:15:00.727 4.196 - 4.219: 94.7712% ( 54) 00:15:00.727 4.219 - 4.243: 95.1243% ( 47) 00:15:00.727 4.243 - 4.267: 95.3347% ( 28) 00:15:00.727 4.267 - 4.290: 95.5300% ( 26) 00:15:00.727 4.290 - 4.314: 95.6427% ( 15) 00:15:00.727 4.314 - 4.338: 95.7028% ( 8) 00:15:00.727 4.338 - 4.361: 95.8906% ( 25) 00:15:00.727 4.361 - 4.385: 95.9733% ( 11) 00:15:00.727 4.385 - 4.409: 96.0935% ( 16) 00:15:00.727 4.409 - 4.433: 96.1986% ( 14) 00:15:00.727 4.433 - 4.456: 96.3414% ( 19) 00:15:00.727 4.456 - 4.480: 96.4240% ( 11) 00:15:00.727 4.480 - 4.504: 96.4541% ( 4) 00:15:00.727 4.504 - 4.527: 96.5367% ( 11) 00:15:00.727 4.527 - 4.551: 96.5667% ( 4) 00:15:00.727 4.551 - 4.575: 96.6118% ( 6) 00:15:00.727 4.575 - 4.599: 96.6193% ( 1) 00:15:00.727 4.599 - 4.622: 96.6569% ( 5) 00:15:00.727 4.622 - 4.646: 96.6945% ( 5) 00:15:00.727 4.646 - 4.670: 96.7395% ( 6) 00:15:00.727 4.670 - 4.693: 96.7771% ( 5) 00:15:00.727 4.693 - 4.717: 96.8522% ( 10) 00:15:00.727 4.717 - 4.741: 96.9123% ( 8) 00:15:00.727 4.741 - 4.764: 96.9724% ( 8) 00:15:00.727 4.764 - 4.788: 97.0250% ( 7) 00:15:00.727 4.788 - 4.812: 97.0851% ( 8) 00:15:00.727 4.812 - 4.836: 97.1302% ( 6) 00:15:00.727 4.836 - 4.859: 97.1753% ( 6) 00:15:00.727 4.859 - 4.883: 97.2279% ( 7) 00:15:00.727 4.883 - 4.907: 97.2654% ( 5) 00:15:00.727 4.907 - 4.930: 97.3030% ( 5) 00:15:00.727 4.930 - 4.954: 97.3255% ( 3) 00:15:00.727 4.954 - 4.978: 97.3481% ( 3) 00:15:00.727 4.978 - 5.001: 97.3706% ( 3) 00:15:00.727 5.001 - 5.025: 97.4382% ( 9) 00:15:00.727 5.025 - 5.049: 97.4833% ( 6) 00:15:00.727 5.049 - 5.073: 97.5359% ( 7) 00:15:00.727 5.073 - 5.096: 97.5509% ( 2) 00:15:00.727 5.096 - 5.120: 97.5885% ( 5) 00:15:00.727 5.120 - 5.144: 97.6185% ( 4) 00:15:00.727 5.144 - 5.167: 97.6410% ( 3) 00:15:00.727 5.167 - 5.191: 97.6636% ( 3) 00:15:00.727 5.215 - 5.239: 97.6786% ( 2) 00:15:00.727 5.239 - 5.262: 97.7011% ( 3) 00:15:00.727 5.262 - 5.286: 97.7237% ( 3) 00:15:00.727 5.286 - 5.310: 97.7387% ( 2) 00:15:00.727 5.310 - 5.333: 97.7613% ( 3) 00:15:00.727 5.333 - 5.357: 97.7763% ( 2) 00:15:00.728 5.357 - 5.381: 97.8063% ( 
4) 00:15:00.728 5.404 - 5.428: 97.8214% ( 2) 00:15:00.728 5.428 - 5.452: 97.8289% ( 1) 00:15:00.728 5.452 - 5.476: 97.8439% ( 2) 00:15:00.728 5.476 - 5.499: 97.8589% ( 2) 00:15:00.728 5.499 - 5.523: 97.8664% ( 1) 00:15:00.728 5.523 - 5.547: 97.8739% ( 1) 00:15:00.728 5.547 - 5.570: 97.8815% ( 1) 00:15:00.728 5.570 - 5.594: 97.8890% ( 1) 00:15:00.728 5.594 - 5.618: 97.9040% ( 2) 00:15:00.728 5.641 - 5.665: 97.9115% ( 1) 00:15:00.728 5.665 - 5.689: 97.9265% ( 2) 00:15:00.728 5.689 - 5.713: 97.9491% ( 3) 00:15:00.728 5.713 - 5.736: 97.9566% ( 1) 00:15:00.728 5.736 - 5.760: 97.9641% ( 1) 00:15:00.728 5.784 - 5.807: 97.9716% ( 1) 00:15:00.728 5.831 - 5.855: 97.9791% ( 1) 00:15:00.728 5.926 - 5.950: 97.9941% ( 2) 00:15:00.728 5.950 - 5.973: 98.0092% ( 2) 00:15:00.728 5.997 - 6.021: 98.0167% ( 1) 00:15:00.728 6.021 - 6.044: 98.0242% ( 1) 00:15:00.728 6.044 - 6.068: 98.0317% ( 1) 00:15:00.728 6.068 - 6.116: 98.0392% ( 1) 00:15:00.728 6.116 - 6.163: 98.0467% ( 1) 00:15:00.728 6.258 - 6.305: 98.0542% ( 1) 00:15:00.728 6.305 - 6.353: 98.0618% ( 1) 00:15:00.728 6.400 - 6.447: 98.0693% ( 1) 00:15:00.728 6.779 - 6.827: 98.0843% ( 2) 00:15:00.728 6.827 - 6.874: 98.0993% ( 2) 00:15:00.728 6.921 - 6.969: 98.1068% ( 1) 00:15:00.728 6.969 - 7.016: 98.1219% ( 2) 00:15:00.728 7.064 - 7.111: 98.1294% ( 1) 00:15:00.728 7.253 - 7.301: 98.1519% ( 3) 00:15:00.728 7.348 - 7.396: 98.1744% ( 3) 00:15:00.728 7.396 - 7.443: 98.1895% ( 2) 00:15:00.728 7.443 - 7.490: 98.2045% ( 2) 00:15:00.728 7.490 - 7.538: 98.2120% ( 1) 00:15:00.728 7.538 - 7.585: 98.2195% ( 1) 00:15:00.728 7.633 - 7.680: 98.2270% ( 1) 00:15:00.728 7.680 - 7.727: 98.2345% ( 1) 00:15:00.728 7.727 - 7.775: 98.2796% ( 6) 00:15:00.728 7.775 - 7.822: 98.2871% ( 1) 00:15:00.728 7.822 - 7.870: 98.2946% ( 1) 00:15:00.728 7.870 - 7.917: 98.3172% ( 3) 00:15:00.728 7.917 - 7.964: 98.3247% ( 1) 00:15:00.728 8.012 - 8.059: 98.3472% ( 3) 00:15:00.728 8.201 - 8.249: 98.3547% ( 1) 00:15:00.728 8.249 - 8.296: 98.3848% ( 4) 00:15:00.728 8.296 - 8.344: 98.3923% ( 1) 00:15:00.728 8.344 - 8.391: 98.3998% ( 1) 00:15:00.728 8.391 - 8.439: 98.4224% ( 3) 00:15:00.728 8.439 - 8.486: 98.4374% ( 2) 00:15:00.728 8.676 - 8.723: 98.4524% ( 2) 00:15:00.728 8.818 - 8.865: 98.4599% ( 1) 00:15:00.728 8.865 - 8.913: 98.4674% ( 1) 00:15:00.728 8.913 - 8.960: 98.4749% ( 1) 00:15:00.728 8.960 - 9.007: 98.4825% ( 1) 00:15:00.728 9.055 - 9.102: 98.4900% ( 1) 00:15:00.728 9.102 - 9.150: 98.4975% ( 1) 00:15:00.728 9.292 - 9.339: 98.5125% ( 2) 00:15:00.728 9.339 - 9.387: 98.5200% ( 1) 00:15:00.728 9.434 - 9.481: 98.5275% ( 1) 00:15:00.728 9.481 - 9.529: 98.5350% ( 1) 00:15:00.728 9.671 - 9.719: 98.5426% ( 1) 00:15:00.728 9.766 - 9.813: 98.5501% ( 1) 00:15:00.728 9.861 - 9.908: 98.5576% ( 1) 00:15:00.728 10.145 - 10.193: 98.5651% ( 1) 00:15:00.728 10.477 - 10.524: 98.5726% ( 1) 00:15:00.728 10.524 - 10.572: 98.5801% ( 1) 00:15:00.728 10.619 - 10.667: 98.5876% ( 1) 00:15:00.728 10.809 - 10.856: 98.5951% ( 1) 00:15:00.728 11.093 - 11.141: 98.6027% ( 1) 00:15:00.728 11.188 - 11.236: 98.6102% ( 1) 00:15:00.728 11.236 - 11.283: 98.6177% ( 1) 00:15:00.728 11.378 - 11.425: 98.6252% ( 1) 00:15:00.728 11.520 - 11.567: 98.6327% ( 1) 00:15:00.728 11.615 - 11.662: 98.6477% ( 2) 00:15:00.728 11.662 - 11.710: 98.6552% ( 1) 00:15:00.728 11.757 - 11.804: 98.6628% ( 1) 00:15:00.728 11.852 - 11.899: 98.6703% ( 1) 00:15:00.728 11.994 - 12.041: 98.6853% ( 2) 00:15:00.728 12.136 - 12.231: 98.6928% ( 1) 00:15:00.728 12.231 - 12.326: 98.7078% ( 2) 00:15:00.728 12.326 - 12.421: 98.7229% ( 2) 00:15:00.728 12.516 - 12.610: 
98.7304% ( 1) 00:15:00.728 12.610 - 12.705: 98.7454% ( 2) 00:15:00.728 12.705 - 12.800: 98.7529% ( 1) 00:15:00.728 12.895 - 12.990: 98.7679% ( 2) 00:15:00.728 12.990 - 13.084: 98.7754% ( 1) 00:15:00.728 13.464 - 13.559: 98.7830% ( 1) 00:15:00.728 13.559 - 13.653: 98.7905% ( 1) 00:15:00.728 13.938 - 14.033: 98.8055% ( 2) 00:15:00.728 14.222 - 14.317: 98.8130% ( 1) 00:15:00.728 14.317 - 14.412: 98.8280% ( 2) 00:15:00.728 14.601 - 14.696: 98.8355% ( 1) 00:15:00.728 14.791 - 14.886: 98.8431% ( 1) 00:15:00.728 14.886 - 14.981: 98.8506% ( 1) 00:15:00.728 14.981 - 15.076: 98.8581% ( 1) 00:15:00.728 15.170 - 15.265: 98.8656% ( 1) 00:15:00.728 17.067 - 17.161: 98.8731% ( 1) 00:15:00.728 17.161 - 17.256: 98.8806% ( 1) 00:15:00.728 17.351 - 17.446: 98.9032% ( 3) 00:15:00.728 17.446 - 17.541: 98.9257% ( 3) 00:15:00.728 17.541 - 17.636: 98.9482% ( 3) 00:15:00.728 17.636 - 17.730: 98.9633% ( 2) 00:15:00.728 17.730 - 17.825: 99.0159% ( 7) 00:15:00.728 17.825 - 17.920: 99.0609% ( 6) 00:15:00.728 17.920 - 18.015: 99.1135% ( 7) 00:15:00.728 18.015 - 18.110: 99.1811% ( 9) 00:15:00.728 18.110 - 18.204: 99.3164% ( 18) 00:15:00.728 18.204 - 18.299: 99.4065% ( 12) 00:15:00.728 18.299 - 18.394: 99.4441% ( 5) 00:15:00.728 18.394 - 18.489: 99.4891% ( 6) 00:15:00.728 18.489 - 18.584: 99.5267% ( 5) 00:15:00.728 18.584 - 18.679: 99.5342% ( 1) 00:15:00.728 18.679 - 18.773: 99.5943% ( 8) 00:15:00.728 18.773 - 18.868: 99.6093% ( 2) 00:15:00.728 18.868 - 18.963: 99.6394% ( 4) 00:15:00.728 18.963 - 19.058: 99.6619% ( 3) 00:15:00.728 19.058 - 19.153: 99.6920% ( 4) 00:15:00.728 19.153 - 19.247: 99.7145% ( 3) 00:15:00.728 19.342 - 19.437: 99.7521% ( 5) 00:15:00.728 19.437 - 19.532: 99.7671% ( 2) 00:15:00.728 19.532 - 19.627: 99.7821% ( 2) 00:15:00.728 19.627 - 19.721: 99.7972% ( 2) 00:15:00.728 19.911 - 20.006: 99.8122% ( 2) 00:15:00.728 20.575 - 20.670: 99.8197% ( 1) 00:15:00.728 21.239 - 21.333: 99.8272% ( 1) 00:15:00.728 21.713 - 21.807: 99.8422% ( 2) 00:15:00.728 22.471 - 22.566: 99.8497% ( 1) 00:15:00.728 23.988 - 24.083: 99.8573% ( 1) 00:15:00.728 24.083 - 24.178: 99.8648% ( 1) 00:15:00.728 27.117 - 27.307: 99.8723% ( 1) 00:15:00.728 3980.705 - 4004.978: 99.9699% ( 13) 00:15:00.728 4004.978 - 4029.250: 100.0000% ( 4) 00:15:00.728 00:15:00.728 Complete histogram 00:15:00.728 ================== 00:15:00.728 Range in us Cumulative Count 00:15:00.728 2.062 - 2.074: 0.0526% ( 7) 00:15:00.728 2.074 - 2.086: 24.3107% ( 3229) 00:15:00.728 2.086 - 2.098: 48.8844% ( 3271) 00:15:00.728 2.098 - 2.110: 51.6340% ( 366) 00:15:00.728 2.110 - 2.121: 58.9512% ( 974) 00:15:00.728 2.121 - 2.133: 61.9337% ( 397) 00:15:00.728 2.133 - 2.145: 63.9321% ( 266) 00:15:00.728 2.145 - 2.157: 72.6993% ( 1167) 00:15:00.728 2.157 - 2.169: 76.7936% ( 545) 00:15:00.728 2.169 - 2.181: 77.8304% ( 138) 00:15:00.728 2.181 - 2.193: 80.5574% ( 363) 00:15:00.728 2.193 - 2.204: 81.7219% ( 155) 00:15:00.728 2.204 - 2.216: 82.2853% ( 75) 00:15:00.728 2.216 - 2.228: 86.2144% ( 523) 00:15:00.728 2.228 - 2.240: 88.7612% ( 339) 00:15:00.728 2.240 - 2.252: 90.9098% ( 286) 00:15:00.728 2.252 - 2.264: 92.4799% ( 209) 00:15:00.728 2.264 - 2.276: 93.0809% ( 80) 00:15:00.728 2.276 - 2.287: 93.3889% ( 41) 00:15:00.728 2.287 - 2.299: 93.6444% ( 34) 00:15:00.728 2.299 - 2.311: 93.9749% ( 44) 00:15:00.728 2.311 - 2.323: 94.6886% ( 95) 00:15:00.728 2.323 - 2.335: 94.8313% ( 19) 00:15:00.728 2.335 - 2.347: 94.9515% ( 16) 00:15:00.728 2.347 - 2.359: 95.0192% ( 9) 00:15:00.728 2.359 - 2.370: 95.1093% ( 12) 00:15:00.728 2.370 - 2.382: 95.1694% ( 8) 00:15:00.728 2.382 - 2.394: 
95.5150% ( 46) 00:15:00.728 2.394 - 2.406: 95.9282% ( 55) 00:15:00.728 2.406 - 2.418: 96.3414% ( 55) 00:15:00.728 2.418 - 2.430: 96.6043% ( 35) 00:15:00.728 2.430 - 2.441: 96.8147% ( 28) 00:15:00.728 2.441 - 2.453: 96.9799% ( 22) 00:15:00.728 2.453 - 2.465: 97.1077% ( 17) 00:15:00.728 2.465 - 2.477: 97.2279% ( 16) 00:15:00.728 2.477 - 2.489: 97.3255% ( 13) 00:15:00.728 2.489 - 2.501: 97.4232% ( 13) 00:15:00.728 2.501 - 2.513: 97.5434% ( 16) 00:15:00.728 2.513 - 2.524: 97.6335% ( 12) 00:15:00.728 2.524 - 2.536: 97.6636% ( 4) 00:15:00.728 2.536 - 2.548: 97.6861% ( 3) 00:15:00.728 2.548 - 2.560: 97.7011% ( 2) 00:15:00.728 2.572 - 2.584: 97.7162% ( 2) 00:15:00.728 2.584 - 2.596: 97.7237% ( 1) 00:15:00.728 2.596 - 2.607: 97.7312% ( 1) 00:15:00.728 2.607 - 2.619: 97.7387% ( 1) 00:15:00.728 2.631 - 2.643: 97.7613% ( 3) 00:15:00.728 2.643 - 2.655: 97.7988% ( 5) 00:15:00.728 2.679 - 2.690: 97.8214% ( 3) 00:15:00.728 2.690 - 2.702: 97.8364% ( 2) 00:15:00.728 2.702 - 2.714: 97.8439% ( 1) 00:15:00.728 2.714 - 2.726: 97.8514% ( 1) 00:15:00.728 2.726 - 2.738: 97.8664% ( 2) 00:15:00.728 2.738 - 2.750: 97.9040% ( 5) 00:15:00.728 2.750 - 2.761: 97.9115% ( 1) 00:15:00.728 2.761 - 2.773: 97.9491% ( 5) 00:15:00.728 2.773 - 2.785: 97.9641% ( 2) 00:15:00.728 2.785 - 2.797: 97.9791% ( 2) 00:15:00.729 2.821 - 2.833: 97.9941% ( 2) 00:15:00.729 2.833 - 2.844: 98.0092% ( 2) 00:15:00.729 2.856 - 2.868: 98.0242% ( 2) 00:15:00.729 2.868 - 2.880: 98.0317% ( 1) 00:15:00.729 2.880 - 2.892: 98.0392% ( 1) 00:15:00.729 2.892 - 2.904: 98.0618% ( 3) 00:15:00.729 2.904 - 2.916: 98.0693% ( 1) 00:15:00.729 2.927 - 2.939: 98.0843% ( 2) 00:15:00.729 2.951 - 2.963: 98.0918% ( 1) 00:15:00.729 3.022 - 3.034: 98.0993% ( 1) 00:15:00.729 3.034 - 3.058: 98.1068% ( 1) 00:15:00.729 3.058 - 3.081: 98.1143% ( 1) 00:15:00.729 3.081 - 3.105: 98.1369% ( 3) 00:15:00.729 3.105 - 3.129: 98.1594% ( 3) 00:15:00.729 3.129 - 3.153: 98.1820% ( 3) 00:15:00.729 3.153 - 3.176: 98.2045% ( 3) 00:15:00.729 3.200 - 3.224: 98.2270% ( 3) 00:15:00.729 3.224 - 3.247: 98.2571% ( 4) 00:15:00.729 3.247 - 3.271: 98.2796% ( 3) 00:15:00.729 3.295 - 3.319: 98.2946% ( 2) 00:15:00.729 3.319 - 3.342: 98.3172% ( 3) 00:15:00.729 3.342 - 3.366: 98.3773% ( 8) 00:15:00.729 3.366 - 3.390: 98.3923% ( 2) 00:15:00.729 3.390 - 3.413: 98.4148% ( 3) 00:15:00.729 3.413 - 3.437: 98.4374% ( 3) 00:15:00.729 3.437 - 3.461: 98.4674% ( 4) 00:15:00.729 3.461 - 3.484: 98.5050% ( 5) 00:15:00.729 3.484 - 3.508: 98.5125% ( 1) 00:15:00.729 3.508 - 3.532: 98.5200% ( 1) 00:15:00.729 3.532 - 3.556: 98.5350% ( 2) 00:15:00.729 3.603 - 3.627: 98.5501% ( 2) 00:15:00.729 3.650 - 3.674: 98.5651% ( 2) 00:15:00.729 3.674 - 3.698: 98.5876% ( 3) 00:15:00.729 3.698 - 3.721: 98.6027% ( 2) 00:15:00.729 3.721 - 3.745: 98.6177% ( 2) 00:15:00.729 3.769 - 3.793: 98.6252% ( 1) 00:15:00.729 3.816 - 3.840: 98.6402% ( 2) 00:15:00.729 3.840 - 3.864: 98.6552% ( 2) 00:15:00.729 3.864 - 3.887: 98.6628% ( 1) 00:15:00.729 3.887 - 3.911: 98.6703% ( 1) 00:15:00.729 4.148 - 4.172: 98.6778% ( 1) 00:15:00.729 4.456 - 4.480: 98.6853% ( 1) 00:15:00.729 5.073 - 5.096: 98.6928% ( 1) 00:15:00.729 5.286 - 5.310: 98.7003% ( 1) 00:15:00.729 5.381 - 5.404: 98.7078% ( 1) 00:15:00.729 5.404 - 5.428: 98.7153% ( 1) 00:15:00.729 5.523 - 5.547: 98.7229% ( 1) 00:15:00.729 5.618 - 5.641: 98.7304% ( 1) 00:15:00.729 6.068 - 6.116: 98.7379% ( 1) 00:15:00.729 6.163 - 6.210: 98.7454% ( 1) 00:15:00.729 6.305 - 6.353: 98.7529% ( 1) 00:15:00.729 6.353 - 6.400: 98.7604% ( 1) 00:15:00.729 6.400 - 6.447: 98.7679% ( 1) 00:15:00.729 6.542 - 6.590: 98.7754% ( 1) 
00:15:00.729 6.779 - 6.827: 98.7830% ( 1) 00:15:00.729 6.827 - 6.874: 98.7905% ( 1) 00:15:00.729 6.921 - 6.969: 98.7980% ( 1) 00:15:00.729 8.818 - 8.865: 98.8055% ( 1) 00:15:00.729 12.421 - 12.516: 98.8130% ( 1) 00:15:00.729 14.033 - 14.127: 98.8205% ( 1) 00:15:00.729 15.550 - 15.644: 98.8280% ( 1) 00:15:00.729 15.644 - 15.739: 98.8355% ( 1) 00:15:00.729 15.739 - 15.834: 98.8431% ( 1) 00:15:00.729 15.834 - 15.929: 98.8731% ( 4) 00:15:00.729 15.929 - 16.024: 98.9032% ( 4) 00:15:00.729 16.024 - 16.119: 98.9558% ( 7) 00:15:00.729 16.119 - 16.213: 98.9708% ( 2) 00:15:00.729 16.213 - 16.308: 99.0309% ( 8) 00:15:00.729 16.308 - 16.403: 99.0835% ( 7) 00:15:00.729 16.403 - 16.498: 99.1210%[2024-07-13 08:01:52.048666] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:00.729 ( 5) 00:15:00.729 16.498 - 16.593: 99.1736% ( 7) 00:15:00.729 16.593 - 16.687: 99.2262% ( 7) 00:15:00.729 16.687 - 16.782: 99.2713% ( 6) 00:15:00.729 16.782 - 16.877: 99.3088% ( 5) 00:15:00.729 16.877 - 16.972: 99.3164% ( 1) 00:15:00.729 17.067 - 17.161: 99.3539% ( 5) 00:15:00.729 17.161 - 17.256: 99.3765% ( 3) 00:15:00.729 17.256 - 17.351: 99.3840% ( 1) 00:15:00.729 17.351 - 17.446: 99.3990% ( 2) 00:15:00.729 17.446 - 17.541: 99.4140% ( 2) 00:15:00.729 18.204 - 18.299: 99.4215% ( 1) 00:15:00.729 18.299 - 18.394: 99.4290% ( 1) 00:15:00.729 18.394 - 18.489: 99.4441% ( 2) 00:15:00.729 21.713 - 21.807: 99.4516% ( 1) 00:15:00.729 24.273 - 24.462: 99.4591% ( 1) 00:15:00.729 33.944 - 34.133: 99.4666% ( 1) 00:15:00.729 3398.163 - 3422.436: 99.4741% ( 1) 00:15:00.729 3980.705 - 4004.978: 99.8723% ( 53) 00:15:00.729 4004.978 - 4029.250: 100.0000% ( 17) 00:15:00.729 00:15:00.729 08:01:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:15:00.729 08:01:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:00.729 08:01:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:15:00.729 08:01:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:15:00.729 08:01:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:00.729 [ 00:15:00.729 { 00:15:00.729 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:00.729 "subtype": "Discovery", 00:15:00.729 "listen_addresses": [], 00:15:00.729 "allow_any_host": true, 00:15:00.729 "hosts": [] 00:15:00.729 }, 00:15:00.729 { 00:15:00.729 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:00.729 "subtype": "NVMe", 00:15:00.729 "listen_addresses": [ 00:15:00.729 { 00:15:00.729 "trtype": "VFIOUSER", 00:15:00.729 "adrfam": "IPv4", 00:15:00.729 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:00.729 "trsvcid": "0" 00:15:00.729 } 00:15:00.729 ], 00:15:00.729 "allow_any_host": true, 00:15:00.729 "hosts": [], 00:15:00.729 "serial_number": "SPDK1", 00:15:00.729 "model_number": "SPDK bdev Controller", 00:15:00.729 "max_namespaces": 32, 00:15:00.729 "min_cntlid": 1, 00:15:00.729 "max_cntlid": 65519, 00:15:00.729 "namespaces": [ 00:15:00.729 { 00:15:00.729 "nsid": 1, 00:15:00.729 "bdev_name": "Malloc1", 00:15:00.729 "name": "Malloc1", 00:15:00.729 "nguid": "DD421394E8D94743ADFE86FB1BB410A7", 00:15:00.729 "uuid": "dd421394-e8d9-4743-adfe-86fb1bb410a7" 00:15:00.729 } 00:15:00.729 ] 00:15:00.729 }, 00:15:00.729 { 
00:15:00.729 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:00.729 "subtype": "NVMe", 00:15:00.729 "listen_addresses": [ 00:15:00.729 { 00:15:00.729 "trtype": "VFIOUSER", 00:15:00.729 "adrfam": "IPv4", 00:15:00.729 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:00.729 "trsvcid": "0" 00:15:00.729 } 00:15:00.729 ], 00:15:00.729 "allow_any_host": true, 00:15:00.729 "hosts": [], 00:15:00.729 "serial_number": "SPDK2", 00:15:00.729 "model_number": "SPDK bdev Controller", 00:15:00.729 "max_namespaces": 32, 00:15:00.729 "min_cntlid": 1, 00:15:00.729 "max_cntlid": 65519, 00:15:00.729 "namespaces": [ 00:15:00.729 { 00:15:00.729 "nsid": 1, 00:15:00.729 "bdev_name": "Malloc2", 00:15:00.729 "name": "Malloc2", 00:15:00.729 "nguid": "D76EF58F8C434FE691D1390BB7D2AB1B", 00:15:00.729 "uuid": "d76ef58f-8c43-4fe6-91d1-390bb7d2ab1b" 00:15:00.729 } 00:15:00.729 ] 00:15:00.729 } 00:15:00.729 ] 00:15:00.729 08:01:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:00.729 08:01:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1920226 00:15:00.729 08:01:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:15:00.729 08:01:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:00.729 08:01:52 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:15:00.729 08:01:52 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:00.729 08:01:52 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:00.729 08:01:52 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:15:00.729 08:01:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:00.729 08:01:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:15:00.729 EAL: No free 2048 kB hugepages reported on node 1 00:15:00.987 [2024-07-13 08:01:52.510379] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:00.987 Malloc3 00:15:00.987 08:01:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:15:01.242 [2024-07-13 08:01:52.864816] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:01.243 08:01:52 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:01.243 Asynchronous Event Request test 00:15:01.243 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:01.243 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:01.243 Registering asynchronous event callbacks... 00:15:01.243 Starting namespace attribute notice tests for all controllers... 00:15:01.243 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:01.243 aer_cb - Changed Namespace 00:15:01.243 Cleaning up... 
00:15:01.499 [ 00:15:01.499 { 00:15:01.499 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:01.499 "subtype": "Discovery", 00:15:01.499 "listen_addresses": [], 00:15:01.499 "allow_any_host": true, 00:15:01.499 "hosts": [] 00:15:01.499 }, 00:15:01.499 { 00:15:01.499 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:01.499 "subtype": "NVMe", 00:15:01.499 "listen_addresses": [ 00:15:01.499 { 00:15:01.499 "trtype": "VFIOUSER", 00:15:01.499 "adrfam": "IPv4", 00:15:01.499 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:01.499 "trsvcid": "0" 00:15:01.499 } 00:15:01.499 ], 00:15:01.499 "allow_any_host": true, 00:15:01.499 "hosts": [], 00:15:01.499 "serial_number": "SPDK1", 00:15:01.499 "model_number": "SPDK bdev Controller", 00:15:01.499 "max_namespaces": 32, 00:15:01.499 "min_cntlid": 1, 00:15:01.499 "max_cntlid": 65519, 00:15:01.499 "namespaces": [ 00:15:01.499 { 00:15:01.499 "nsid": 1, 00:15:01.499 "bdev_name": "Malloc1", 00:15:01.499 "name": "Malloc1", 00:15:01.499 "nguid": "DD421394E8D94743ADFE86FB1BB410A7", 00:15:01.499 "uuid": "dd421394-e8d9-4743-adfe-86fb1bb410a7" 00:15:01.499 }, 00:15:01.499 { 00:15:01.499 "nsid": 2, 00:15:01.499 "bdev_name": "Malloc3", 00:15:01.499 "name": "Malloc3", 00:15:01.499 "nguid": "552B3CA6A325452BB48522DBCAA922E3", 00:15:01.499 "uuid": "552b3ca6-a325-452b-b485-22dbcaa922e3" 00:15:01.499 } 00:15:01.499 ] 00:15:01.499 }, 00:15:01.499 { 00:15:01.499 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:01.499 "subtype": "NVMe", 00:15:01.499 "listen_addresses": [ 00:15:01.499 { 00:15:01.499 "trtype": "VFIOUSER", 00:15:01.499 "adrfam": "IPv4", 00:15:01.499 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:01.499 "trsvcid": "0" 00:15:01.499 } 00:15:01.499 ], 00:15:01.499 "allow_any_host": true, 00:15:01.499 "hosts": [], 00:15:01.499 "serial_number": "SPDK2", 00:15:01.499 "model_number": "SPDK bdev Controller", 00:15:01.499 "max_namespaces": 32, 00:15:01.499 "min_cntlid": 1, 00:15:01.499 "max_cntlid": 65519, 00:15:01.499 "namespaces": [ 00:15:01.499 { 00:15:01.499 "nsid": 1, 00:15:01.499 "bdev_name": "Malloc2", 00:15:01.499 "name": "Malloc2", 00:15:01.499 "nguid": "D76EF58F8C434FE691D1390BB7D2AB1B", 00:15:01.499 "uuid": "d76ef58f-8c43-4fe6-91d1-390bb7d2ab1b" 00:15:01.499 } 00:15:01.499 ] 00:15:01.499 } 00:15:01.499 ] 00:15:01.499 08:01:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1920226 00:15:01.499 08:01:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:01.499 08:01:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:01.499 08:01:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:15:01.499 08:01:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:01.499 [2024-07-13 08:01:53.142260] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:15:01.499 [2024-07-13 08:01:53.142307] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1920237 ] 00:15:01.499 EAL: No free 2048 kB hugepages reported on node 1 00:15:01.499 [2024-07-13 08:01:53.178011] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:15:01.499 [2024-07-13 08:01:53.187967] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:01.499 [2024-07-13 08:01:53.187997] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fb52a491000 00:15:01.499 [2024-07-13 08:01:53.188975] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:01.499 [2024-07-13 08:01:53.189980] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:01.499 [2024-07-13 08:01:53.190986] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:01.499 [2024-07-13 08:01:53.191992] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:01.499 [2024-07-13 08:01:53.193000] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:01.499 [2024-07-13 08:01:53.194009] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:01.499 [2024-07-13 08:01:53.195017] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:01.499 [2024-07-13 08:01:53.196022] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:01.499 [2024-07-13 08:01:53.197027] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:01.499 [2024-07-13 08:01:53.197050] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fb529245000 00:15:01.499 [2024-07-13 08:01:53.198207] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:01.499 [2024-07-13 08:01:53.212587] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:15:01.499 [2024-07-13 08:01:53.212628] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:15:01.499 [2024-07-13 08:01:53.217741] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:01.499 [2024-07-13 08:01:53.217791] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:01.499 [2024-07-13 08:01:53.217898] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to 
wait for connect adminq (no timeout) 00:15:01.499 [2024-07-13 08:01:53.217926] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:15:01.499 [2024-07-13 08:01:53.217938] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:15:01.499 [2024-07-13 08:01:53.218744] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:15:01.499 [2024-07-13 08:01:53.218765] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:15:01.499 [2024-07-13 08:01:53.218777] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:15:01.499 [2024-07-13 08:01:53.219752] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:01.499 [2024-07-13 08:01:53.219772] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:15:01.499 [2024-07-13 08:01:53.219786] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:15:01.499 [2024-07-13 08:01:53.220761] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:15:01.499 [2024-07-13 08:01:53.220782] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:01.499 [2024-07-13 08:01:53.221770] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:15:01.499 [2024-07-13 08:01:53.221791] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:15:01.499 [2024-07-13 08:01:53.221800] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:15:01.499 [2024-07-13 08:01:53.221811] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:01.499 [2024-07-13 08:01:53.221922] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:15:01.499 [2024-07-13 08:01:53.221933] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:01.499 [2024-07-13 08:01:53.221946] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:15:01.499 [2024-07-13 08:01:53.222772] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:15:01.499 [2024-07-13 08:01:53.223783] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:15:01.499 [2024-07-13 08:01:53.224788] nvme_vfio_user.c: 
49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:01.499 [2024-07-13 08:01:53.225782] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:01.499 [2024-07-13 08:01:53.225863] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:01.499 [2024-07-13 08:01:53.226797] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:15:01.499 [2024-07-13 08:01:53.226818] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:01.499 [2024-07-13 08:01:53.226827] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:15:01.499 [2024-07-13 08:01:53.226872] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:15:01.499 [2024-07-13 08:01:53.226888] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:15:01.499 [2024-07-13 08:01:53.226911] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:01.499 [2024-07-13 08:01:53.226921] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:01.499 [2024-07-13 08:01:53.226941] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:01.757 [2024-07-13 08:01:53.234892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:01.757 [2024-07-13 08:01:53.234920] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:15:01.757 [2024-07-13 08:01:53.234935] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:15:01.757 [2024-07-13 08:01:53.234944] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:15:01.757 [2024-07-13 08:01:53.234952] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:01.757 [2024-07-13 08:01:53.234960] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:15:01.757 [2024-07-13 08:01:53.234968] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:15:01.757 [2024-07-13 08:01:53.234976] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:15:01.757 [2024-07-13 08:01:53.234991] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:15:01.757 [2024-07-13 08:01:53.235008] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 
0x0 00:15:01.757 [2024-07-13 08:01:53.242879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:01.757 [2024-07-13 08:01:53.242909] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:01.757 [2024-07-13 08:01:53.242945] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:01.757 [2024-07-13 08:01:53.242959] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:01.757 [2024-07-13 08:01:53.242973] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:01.757 [2024-07-13 08:01:53.242983] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:15:01.757 [2024-07-13 08:01:53.242999] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:01.757 [2024-07-13 08:01:53.243015] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:01.757 [2024-07-13 08:01:53.250895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:01.757 [2024-07-13 08:01:53.250914] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:15:01.757 [2024-07-13 08:01:53.250924] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:01.757 [2024-07-13 08:01:53.250935] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:15:01.757 [2024-07-13 08:01:53.250961] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:15:01.757 [2024-07-13 08:01:53.250976] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:01.757 [2024-07-13 08:01:53.258879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:01.757 [2024-07-13 08:01:53.258952] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:15:01.757 [2024-07-13 08:01:53.258969] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:15:01.757 [2024-07-13 08:01:53.258983] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:01.757 [2024-07-13 08:01:53.258992] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:01.757 [2024-07-13 08:01:53.259002] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 
0x2000002f9000 PRP2 0x0 00:15:01.757 [2024-07-13 08:01:53.266879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:01.757 [2024-07-13 08:01:53.266903] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:15:01.757 [2024-07-13 08:01:53.266938] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:15:01.757 [2024-07-13 08:01:53.266955] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:15:01.757 [2024-07-13 08:01:53.266968] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:01.757 [2024-07-13 08:01:53.266976] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:01.757 [2024-07-13 08:01:53.266986] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:01.757 [2024-07-13 08:01:53.274879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:01.758 [2024-07-13 08:01:53.274923] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:01.758 [2024-07-13 08:01:53.274942] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:01.758 [2024-07-13 08:01:53.274955] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:01.758 [2024-07-13 08:01:53.274964] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:01.758 [2024-07-13 08:01:53.274974] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:01.758 [2024-07-13 08:01:53.282891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:01.758 [2024-07-13 08:01:53.282920] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:01.758 [2024-07-13 08:01:53.282934] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:15:01.758 [2024-07-13 08:01:53.282948] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:15:01.758 [2024-07-13 08:01:53.282959] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:15:01.758 [2024-07-13 08:01:53.282968] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:01.758 [2024-07-13 08:01:53.282977] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:15:01.758 
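The nvme_vfio debug lines in this identify run expose the raw register traffic behind the controller state machine: CC at offset 0x14, CSTS at 0x1c, and AQA/ASQ/ACQ at offsets 0x24/0x28/0x30. The hex values can be decoded directly from the register layout in the NVMe base specification; a short sketch for the values seen in this trace:

    # Decode the CC (offset 0x14) and CSTS (offset 0x1c) values printed above,
    # using the bit layout from the NVMe base specification.
    def decode_cc(cc):
        return {
            "EN": cc & 0x1,              # controller enable
            "CSS": (cc >> 4) & 0x7,      # I/O command set selected
            "MPS": (cc >> 7) & 0xF,      # page size = 2 ** (12 + MPS) bytes
            "SHN": (cc >> 14) & 0x3,     # shutdown notification, 01b = normal
            "IOSQES": (cc >> 16) & 0xF,  # SQ entry size = 2 ** IOSQES bytes
            "IOCQES": (cc >> 20) & 0xF,  # CQ entry size = 2 ** IOCQES bytes
        }

    def decode_csts(csts):
        return {
            "RDY": csts & 0x1,           # controller ready
            "CFS": (csts >> 1) & 0x1,    # controller fatal status
            "SHST": (csts >> 2) & 0x3,   # shutdown status, 10b = complete
        }

    # 0x460001: EN=1 with 64-byte SQ entries (IOSQES=6) and 16-byte CQ entries
    # (IOCQES=4), matching the queue entry sizes in the identify output below.
    print(decode_cc(0x460001))
    # 0x464001: the same CC plus SHN=01b, the normal shutdown notification
    # written later in this trace during "Prepare to destruct SSD".
    print(decode_cc(0x464001))
    # CSTS 0x9: RDY=1 and SHST=10b, i.e. shutdown processing complete.
    print(decode_csts(0x9))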
[2024-07-13 08:01:53.282986] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:15:01.758 [2024-07-13 08:01:53.282994] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:15:01.758 [2024-07-13 08:01:53.283003] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:15:01.758 [2024-07-13 08:01:53.283028] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:01.758 [2024-07-13 08:01:53.290891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:01.758 [2024-07-13 08:01:53.290919] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:01.758 [2024-07-13 08:01:53.298877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:01.758 [2024-07-13 08:01:53.298903] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:01.758 [2024-07-13 08:01:53.306879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:01.758 [2024-07-13 08:01:53.306906] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:01.758 [2024-07-13 08:01:53.314882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:01.758 [2024-07-13 08:01:53.314919] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:01.758 [2024-07-13 08:01:53.314934] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:01.758 [2024-07-13 08:01:53.314941] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:01.758 [2024-07-13 08:01:53.314948] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:01.758 [2024-07-13 08:01:53.314958] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:01.758 [2024-07-13 08:01:53.314969] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:01.758 [2024-07-13 08:01:53.314978] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:01.758 [2024-07-13 08:01:53.314987] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:01.758 [2024-07-13 08:01:53.314998] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:01.758 [2024-07-13 08:01:53.315006] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:01.758 [2024-07-13 08:01:53.315015] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 
0x0 00:15:01.758 [2024-07-13 08:01:53.315027] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:01.758 [2024-07-13 08:01:53.315035] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:01.758 [2024-07-13 08:01:53.315044] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:01.758 [2024-07-13 08:01:53.322888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:01.758 [2024-07-13 08:01:53.322916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:01.758 [2024-07-13 08:01:53.322934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:01.758 [2024-07-13 08:01:53.322946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:01.758 ===================================================== 00:15:01.758 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:01.758 ===================================================== 00:15:01.758 Controller Capabilities/Features 00:15:01.758 ================================ 00:15:01.758 Vendor ID: 4e58 00:15:01.758 Subsystem Vendor ID: 4e58 00:15:01.758 Serial Number: SPDK2 00:15:01.758 Model Number: SPDK bdev Controller 00:15:01.758 Firmware Version: 24.09 00:15:01.758 Recommended Arb Burst: 6 00:15:01.758 IEEE OUI Identifier: 8d 6b 50 00:15:01.758 Multi-path I/O 00:15:01.758 May have multiple subsystem ports: Yes 00:15:01.758 May have multiple controllers: Yes 00:15:01.758 Associated with SR-IOV VF: No 00:15:01.758 Max Data Transfer Size: 131072 00:15:01.758 Max Number of Namespaces: 32 00:15:01.758 Max Number of I/O Queues: 127 00:15:01.758 NVMe Specification Version (VS): 1.3 00:15:01.758 NVMe Specification Version (Identify): 1.3 00:15:01.758 Maximum Queue Entries: 256 00:15:01.758 Contiguous Queues Required: Yes 00:15:01.758 Arbitration Mechanisms Supported 00:15:01.758 Weighted Round Robin: Not Supported 00:15:01.758 Vendor Specific: Not Supported 00:15:01.758 Reset Timeout: 15000 ms 00:15:01.758 Doorbell Stride: 4 bytes 00:15:01.758 NVM Subsystem Reset: Not Supported 00:15:01.758 Command Sets Supported 00:15:01.758 NVM Command Set: Supported 00:15:01.758 Boot Partition: Not Supported 00:15:01.758 Memory Page Size Minimum: 4096 bytes 00:15:01.758 Memory Page Size Maximum: 4096 bytes 00:15:01.758 Persistent Memory Region: Not Supported 00:15:01.758 Optional Asynchronous Events Supported 00:15:01.758 Namespace Attribute Notices: Supported 00:15:01.758 Firmware Activation Notices: Not Supported 00:15:01.758 ANA Change Notices: Not Supported 00:15:01.758 PLE Aggregate Log Change Notices: Not Supported 00:15:01.758 LBA Status Info Alert Notices: Not Supported 00:15:01.758 EGE Aggregate Log Change Notices: Not Supported 00:15:01.758 Normal NVM Subsystem Shutdown event: Not Supported 00:15:01.758 Zone Descriptor Change Notices: Not Supported 00:15:01.758 Discovery Log Change Notices: Not Supported 00:15:01.758 Controller Attributes 00:15:01.758 128-bit Host Identifier: Supported 00:15:01.758 Non-Operational Permissive Mode: Not Supported 00:15:01.758 NVM Sets: Not Supported 00:15:01.758 Read Recovery Levels: Not Supported 
00:15:01.758 Endurance Groups: Not Supported 00:15:01.758 Predictable Latency Mode: Not Supported 00:15:01.758 Traffic Based Keep ALive: Not Supported 00:15:01.758 Namespace Granularity: Not Supported 00:15:01.758 SQ Associations: Not Supported 00:15:01.758 UUID List: Not Supported 00:15:01.758 Multi-Domain Subsystem: Not Supported 00:15:01.758 Fixed Capacity Management: Not Supported 00:15:01.758 Variable Capacity Management: Not Supported 00:15:01.758 Delete Endurance Group: Not Supported 00:15:01.758 Delete NVM Set: Not Supported 00:15:01.758 Extended LBA Formats Supported: Not Supported 00:15:01.758 Flexible Data Placement Supported: Not Supported 00:15:01.758 00:15:01.758 Controller Memory Buffer Support 00:15:01.758 ================================ 00:15:01.758 Supported: No 00:15:01.758 00:15:01.758 Persistent Memory Region Support 00:15:01.758 ================================ 00:15:01.758 Supported: No 00:15:01.758 00:15:01.758 Admin Command Set Attributes 00:15:01.758 ============================ 00:15:01.758 Security Send/Receive: Not Supported 00:15:01.758 Format NVM: Not Supported 00:15:01.758 Firmware Activate/Download: Not Supported 00:15:01.758 Namespace Management: Not Supported 00:15:01.758 Device Self-Test: Not Supported 00:15:01.758 Directives: Not Supported 00:15:01.758 NVMe-MI: Not Supported 00:15:01.758 Virtualization Management: Not Supported 00:15:01.758 Doorbell Buffer Config: Not Supported 00:15:01.758 Get LBA Status Capability: Not Supported 00:15:01.758 Command & Feature Lockdown Capability: Not Supported 00:15:01.758 Abort Command Limit: 4 00:15:01.758 Async Event Request Limit: 4 00:15:01.758 Number of Firmware Slots: N/A 00:15:01.758 Firmware Slot 1 Read-Only: N/A 00:15:01.758 Firmware Activation Without Reset: N/A 00:15:01.758 Multiple Update Detection Support: N/A 00:15:01.758 Firmware Update Granularity: No Information Provided 00:15:01.758 Per-Namespace SMART Log: No 00:15:01.758 Asymmetric Namespace Access Log Page: Not Supported 00:15:01.758 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:15:01.758 Command Effects Log Page: Supported 00:15:01.758 Get Log Page Extended Data: Supported 00:15:01.758 Telemetry Log Pages: Not Supported 00:15:01.758 Persistent Event Log Pages: Not Supported 00:15:01.758 Supported Log Pages Log Page: May Support 00:15:01.758 Commands Supported & Effects Log Page: Not Supported 00:15:01.758 Feature Identifiers & Effects Log Page:May Support 00:15:01.758 NVMe-MI Commands & Effects Log Page: May Support 00:15:01.758 Data Area 4 for Telemetry Log: Not Supported 00:15:01.758 Error Log Page Entries Supported: 128 00:15:01.758 Keep Alive: Supported 00:15:01.758 Keep Alive Granularity: 10000 ms 00:15:01.758 00:15:01.758 NVM Command Set Attributes 00:15:01.758 ========================== 00:15:01.758 Submission Queue Entry Size 00:15:01.758 Max: 64 00:15:01.758 Min: 64 00:15:01.758 Completion Queue Entry Size 00:15:01.758 Max: 16 00:15:01.758 Min: 16 00:15:01.758 Number of Namespaces: 32 00:15:01.758 Compare Command: Supported 00:15:01.758 Write Uncorrectable Command: Not Supported 00:15:01.758 Dataset Management Command: Supported 00:15:01.758 Write Zeroes Command: Supported 00:15:01.758 Set Features Save Field: Not Supported 00:15:01.758 Reservations: Not Supported 00:15:01.758 Timestamp: Not Supported 00:15:01.758 Copy: Supported 00:15:01.758 Volatile Write Cache: Present 00:15:01.758 Atomic Write Unit (Normal): 1 00:15:01.758 Atomic Write Unit (PFail): 1 00:15:01.758 Atomic Compare & Write Unit: 1 00:15:01.758 Fused Compare & Write: 
Supported 00:15:01.758 Scatter-Gather List 00:15:01.758 SGL Command Set: Supported (Dword aligned) 00:15:01.758 SGL Keyed: Not Supported 00:15:01.758 SGL Bit Bucket Descriptor: Not Supported 00:15:01.758 SGL Metadata Pointer: Not Supported 00:15:01.758 Oversized SGL: Not Supported 00:15:01.758 SGL Metadata Address: Not Supported 00:15:01.758 SGL Offset: Not Supported 00:15:01.758 Transport SGL Data Block: Not Supported 00:15:01.758 Replay Protected Memory Block: Not Supported 00:15:01.758 00:15:01.758 Firmware Slot Information 00:15:01.758 ========================= 00:15:01.758 Active slot: 1 00:15:01.758 Slot 1 Firmware Revision: 24.09 00:15:01.758 00:15:01.758 00:15:01.758 Commands Supported and Effects 00:15:01.758 ============================== 00:15:01.758 Admin Commands 00:15:01.758 -------------- 00:15:01.758 Get Log Page (02h): Supported 00:15:01.758 Identify (06h): Supported 00:15:01.758 Abort (08h): Supported 00:15:01.758 Set Features (09h): Supported 00:15:01.758 Get Features (0Ah): Supported 00:15:01.758 Asynchronous Event Request (0Ch): Supported 00:15:01.758 Keep Alive (18h): Supported 00:15:01.758 I/O Commands 00:15:01.758 ------------ 00:15:01.758 Flush (00h): Supported LBA-Change 00:15:01.758 Write (01h): Supported LBA-Change 00:15:01.758 Read (02h): Supported 00:15:01.758 Compare (05h): Supported 00:15:01.758 Write Zeroes (08h): Supported LBA-Change 00:15:01.758 Dataset Management (09h): Supported LBA-Change 00:15:01.758 Copy (19h): Supported LBA-Change 00:15:01.758 00:15:01.758 Error Log 00:15:01.758 ========= 00:15:01.758 00:15:01.758 Arbitration 00:15:01.758 =========== 00:15:01.758 Arbitration Burst: 1 00:15:01.758 00:15:01.758 Power Management 00:15:01.758 ================ 00:15:01.758 Number of Power States: 1 00:15:01.758 Current Power State: Power State #0 00:15:01.758 Power State #0: 00:15:01.758 Max Power: 0.00 W 00:15:01.758 Non-Operational State: Operational 00:15:01.758 Entry Latency: Not Reported 00:15:01.758 Exit Latency: Not Reported 00:15:01.758 Relative Read Throughput: 0 00:15:01.758 Relative Read Latency: 0 00:15:01.758 Relative Write Throughput: 0 00:15:01.758 Relative Write Latency: 0 00:15:01.758 Idle Power: Not Reported 00:15:01.758 Active Power: Not Reported 00:15:01.758 Non-Operational Permissive Mode: Not Supported 00:15:01.758 00:15:01.758 Health Information 00:15:01.758 ================== 00:15:01.758 Critical Warnings: 00:15:01.758 Available Spare Space: OK 00:15:01.758 Temperature: OK 00:15:01.758 Device Reliability: OK 00:15:01.758 Read Only: No 00:15:01.758 Volatile Memory Backup: OK 00:15:01.758 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:01.758 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:01.758 Available Spare: 0% 00:15:01.758 Available Sp[2024-07-13 08:01:53.323067] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:01.758 [2024-07-13 08:01:53.330882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:01.759 [2024-07-13 08:01:53.330939] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:15:01.759 [2024-07-13 08:01:53.330959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:01.759 [2024-07-13 08:01:53.330971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:01.759 [2024-07-13 08:01:53.330981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:01.759 [2024-07-13 08:01:53.330991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:01.759 [2024-07-13 08:01:53.331056] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:01.759 [2024-07-13 08:01:53.331078] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:15:01.759 [2024-07-13 08:01:53.332056] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:01.759 [2024-07-13 08:01:53.332127] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:15:01.759 [2024-07-13 08:01:53.332174] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:15:01.759 [2024-07-13 08:01:53.333063] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:15:01.759 [2024-07-13 08:01:53.333088] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:15:01.759 [2024-07-13 08:01:53.333139] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:15:01.759 [2024-07-13 08:01:53.334322] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:01.759 are Threshold: 0% 00:15:01.759 Life Percentage Used: 0% 00:15:01.759 Data Units Read: 0 00:15:01.759 Data Units Written: 0 00:15:01.759 Host Read Commands: 0 00:15:01.759 Host Write Commands: 0 00:15:01.759 Controller Busy Time: 0 minutes 00:15:01.759 Power Cycles: 0 00:15:01.759 Power On Hours: 0 hours 00:15:01.759 Unsafe Shutdowns: 0 00:15:01.759 Unrecoverable Media Errors: 0 00:15:01.759 Lifetime Error Log Entries: 0 00:15:01.759 Warning Temperature Time: 0 minutes 00:15:01.759 Critical Temperature Time: 0 minutes 00:15:01.759 00:15:01.759 Number of Queues 00:15:01.759 ================ 00:15:01.759 Number of I/O Submission Queues: 127 00:15:01.759 Number of I/O Completion Queues: 127 00:15:01.759 00:15:01.759 Active Namespaces 00:15:01.759 ================= 00:15:01.759 Namespace ID:1 00:15:01.759 Error Recovery Timeout: Unlimited 00:15:01.759 Command Set Identifier: NVM (00h) 00:15:01.759 Deallocate: Supported 00:15:01.759 Deallocated/Unwritten Error: Not Supported 00:15:01.759 Deallocated Read Value: Unknown 00:15:01.759 Deallocate in Write Zeroes: Not Supported 00:15:01.759 Deallocated Guard Field: 0xFFFF 00:15:01.759 Flush: Supported 00:15:01.759 Reservation: Supported 00:15:01.759 Namespace Sharing Capabilities: Multiple Controllers 00:15:01.759 Size (in LBAs): 131072 (0GiB) 00:15:01.759 Capacity (in LBAs): 131072 (0GiB) 00:15:01.759 Utilization (in LBAs): 131072 (0GiB) 00:15:01.759 NGUID: D76EF58F8C434FE691D1390BB7D2AB1B 00:15:01.759 UUID: d76ef58f-8c43-4fe6-91d1-390bb7d2ab1b 00:15:01.759 Thin Provisioning: Not Supported 00:15:01.759 Per-NS Atomic Units: Yes 00:15:01.759 Atomic Boundary Size (Normal): 0 00:15:01.759 Atomic Boundary Size 
(PFail): 0 00:15:01.759 Atomic Boundary Offset: 0 00:15:01.759 Maximum Single Source Range Length: 65535 00:15:01.759 Maximum Copy Length: 65535 00:15:01.759 Maximum Source Range Count: 1 00:15:01.759 NGUID/EUI64 Never Reused: No 00:15:01.759 Namespace Write Protected: No 00:15:01.759 Number of LBA Formats: 1 00:15:01.759 Current LBA Format: LBA Format #00 00:15:01.759 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:01.759 00:15:01.759 08:01:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:01.759 EAL: No free 2048 kB hugepages reported on node 1 00:15:02.016 [2024-07-13 08:01:53.560742] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:07.270 Initializing NVMe Controllers 00:15:07.270 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:07.270 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:07.270 Initialization complete. Launching workers. 00:15:07.270 ======================================================== 00:15:07.270 Latency(us) 00:15:07.270 Device Information : IOPS MiB/s Average min max 00:15:07.270 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 34310.92 134.03 3729.93 1160.37 7352.75 00:15:07.270 ======================================================== 00:15:07.270 Total : 34310.92 134.03 3729.93 1160.37 7352.75 00:15:07.270 00:15:07.270 [2024-07-13 08:01:58.667280] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:07.270 08:01:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:07.270 EAL: No free 2048 kB hugepages reported on node 1 00:15:07.270 [2024-07-13 08:01:58.909844] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:12.525 Initializing NVMe Controllers 00:15:12.525 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:12.525 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:12.525 Initialization complete. Launching workers. 
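Before the write-mode numbers below, it is worth decoding the spdk_nvme_perf invocation the script uses for both runs. A minimal sketch, run from the spdk checkout (transport ID, NQN and flag values exactly as logged; the flag summaries are general spdk_nvme_perf semantics and not taken from this run's output):

    # -q: queue depth   -o: I/O size in bytes   -w: I/O pattern (read above, write here)
    # -t: run time in seconds   -c: core mask (0x2 = lcore 1)
    # -s / -g: memory setup flags passed through by nvmf_vfio_user.sh
    ./build/bin/spdk_nvme_perf -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 \
        -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'
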
00:15:12.525 ======================================================== 00:15:12.525 Latency(us) 00:15:12.525 Device Information : IOPS MiB/s Average min max 00:15:12.525 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 32095.27 125.37 3987.35 1210.66 9294.85 00:15:12.525 ======================================================== 00:15:12.525 Total : 32095.27 125.37 3987.35 1210.66 9294.85 00:15:12.525 00:15:12.525 [2024-07-13 08:02:03.934485] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:12.525 08:02:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:12.525 EAL: No free 2048 kB hugepages reported on node 1 00:15:12.525 [2024-07-13 08:02:04.141266] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:17.790 [2024-07-13 08:02:09.286025] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:17.790 Initializing NVMe Controllers 00:15:17.790 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:17.790 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:17.790 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:15:17.790 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:15:17.790 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:15:17.790 Initialization complete. Launching workers. 00:15:17.790 Starting thread on core 2 00:15:17.790 Starting thread on core 3 00:15:17.790 Starting thread on core 1 00:15:17.790 08:02:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:15:17.790 EAL: No free 2048 kB hugepages reported on node 1 00:15:18.056 [2024-07-13 08:02:09.590813] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:21.340 [2024-07-13 08:02:12.675504] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:21.341 Initializing NVMe Controllers 00:15:21.341 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:21.341 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:21.341 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:15:21.341 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:15:21.341 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:15:21.341 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:15:21.341 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:21.341 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:21.341 Initialization complete. Launching workers. 
00:15:21.341 Starting thread on core 1 with urgent priority queue 00:15:21.341 Starting thread on core 2 with urgent priority queue 00:15:21.341 Starting thread on core 3 with urgent priority queue 00:15:21.341 Starting thread on core 0 with urgent priority queue 00:15:21.341 SPDK bdev Controller (SPDK2 ) core 0: 4076.67 IO/s 24.53 secs/100000 ios 00:15:21.341 SPDK bdev Controller (SPDK2 ) core 1: 4875.67 IO/s 20.51 secs/100000 ios 00:15:21.341 SPDK bdev Controller (SPDK2 ) core 2: 4902.33 IO/s 20.40 secs/100000 ios 00:15:21.341 SPDK bdev Controller (SPDK2 ) core 3: 5238.33 IO/s 19.09 secs/100000 ios 00:15:21.341 ======================================================== 00:15:21.341 00:15:21.341 08:02:12 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:21.341 EAL: No free 2048 kB hugepages reported on node 1 00:15:21.341 [2024-07-13 08:02:12.963336] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:21.341 Initializing NVMe Controllers 00:15:21.341 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:21.341 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:21.341 Namespace ID: 1 size: 0GB 00:15:21.341 Initialization complete. 00:15:21.341 INFO: using host memory buffer for IO 00:15:21.341 Hello world! 00:15:21.341 [2024-07-13 08:02:12.975407] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:21.341 08:02:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:21.341 EAL: No free 2048 kB hugepages reported on node 1 00:15:21.607 [2024-07-13 08:02:13.255117] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:22.979 Initializing NVMe Controllers 00:15:22.979 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:22.979 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:22.979 Initialization complete. Launching workers. 
00:15:22.980 submit (in ns) avg, min, max = 7551.3, 3485.6, 4017847.8 00:15:22.980 complete (in ns) avg, min, max = 25048.6, 2043.3, 5994440.0 00:15:22.980 00:15:22.980 Submit histogram 00:15:22.980 ================ 00:15:22.980 Range in us Cumulative Count 00:15:22.980 3.484 - 3.508: 0.5895% ( 78) 00:15:22.980 3.508 - 3.532: 1.6023% ( 134) 00:15:22.980 3.532 - 3.556: 4.3307% ( 361) 00:15:22.980 3.556 - 3.579: 9.7120% ( 712) 00:15:22.980 3.579 - 3.603: 17.8293% ( 1074) 00:15:22.980 3.603 - 3.627: 26.0751% ( 1091) 00:15:22.980 3.627 - 3.650: 35.4697% ( 1243) 00:15:22.980 3.650 - 3.674: 42.6272% ( 947) 00:15:22.980 3.674 - 3.698: 49.2707% ( 879) 00:15:22.980 3.698 - 3.721: 54.5537% ( 699) 00:15:22.980 3.721 - 3.745: 58.6577% ( 543) 00:15:22.980 3.745 - 3.769: 61.8699% ( 425) 00:15:22.980 3.769 - 3.793: 65.2710% ( 450) 00:15:22.980 3.793 - 3.816: 68.8837% ( 478) 00:15:22.980 3.816 - 3.840: 72.6854% ( 503) 00:15:22.980 3.840 - 3.864: 77.1219% ( 587) 00:15:22.980 3.864 - 3.887: 80.8027% ( 487) 00:15:22.980 3.887 - 3.911: 83.9014% ( 410) 00:15:22.980 3.911 - 3.935: 86.3276% ( 321) 00:15:22.980 3.935 - 3.959: 88.1793% ( 245) 00:15:22.980 3.959 - 3.982: 89.7438% ( 207) 00:15:22.980 3.982 - 4.006: 90.9228% ( 156) 00:15:22.980 4.006 - 4.030: 92.0716% ( 152) 00:15:22.980 4.030 - 4.053: 92.9257% ( 113) 00:15:22.980 4.053 - 4.077: 93.8478% ( 122) 00:15:22.980 4.077 - 4.101: 94.5431% ( 92) 00:15:22.980 4.101 - 4.124: 95.0949% ( 73) 00:15:22.980 4.124 - 4.148: 95.6164% ( 69) 00:15:22.980 4.148 - 4.172: 95.8733% ( 34) 00:15:22.980 4.172 - 4.196: 96.0925% ( 29) 00:15:22.980 4.196 - 4.219: 96.2966% ( 27) 00:15:22.980 4.219 - 4.243: 96.4175% ( 16) 00:15:22.980 4.243 - 4.267: 96.5535% ( 18) 00:15:22.980 4.267 - 4.290: 96.6291% ( 10) 00:15:22.980 4.290 - 4.314: 96.7803% ( 20) 00:15:22.980 4.314 - 4.338: 96.8861% ( 14) 00:15:22.980 4.338 - 4.361: 96.9163% ( 4) 00:15:22.980 4.361 - 4.385: 96.9466% ( 4) 00:15:22.980 4.385 - 4.409: 96.9692% ( 3) 00:15:22.980 4.409 - 4.433: 96.9995% ( 4) 00:15:22.980 4.433 - 4.456: 97.0373% ( 5) 00:15:22.980 4.456 - 4.480: 97.0448% ( 1) 00:15:22.980 4.480 - 4.504: 97.0524% ( 1) 00:15:22.980 4.504 - 4.527: 97.0977% ( 6) 00:15:22.980 4.527 - 4.551: 97.1128% ( 2) 00:15:22.980 4.575 - 4.599: 97.1431% ( 4) 00:15:22.980 4.599 - 4.622: 97.1582% ( 2) 00:15:22.980 4.622 - 4.646: 97.2262% ( 9) 00:15:22.980 4.646 - 4.670: 97.2413% ( 2) 00:15:22.980 4.670 - 4.693: 97.2867% ( 6) 00:15:22.980 4.693 - 4.717: 97.3169% ( 4) 00:15:22.980 4.717 - 4.741: 97.3623% ( 6) 00:15:22.980 4.741 - 4.764: 97.4000% ( 5) 00:15:22.980 4.764 - 4.788: 97.4378% ( 5) 00:15:22.980 4.788 - 4.812: 97.4907% ( 7) 00:15:22.980 4.812 - 4.836: 97.5588% ( 9) 00:15:22.980 4.836 - 4.859: 97.6268% ( 9) 00:15:22.980 4.859 - 4.883: 97.6570% ( 4) 00:15:22.980 4.883 - 4.907: 97.6948% ( 5) 00:15:22.980 4.907 - 4.930: 97.7402% ( 6) 00:15:22.980 4.930 - 4.954: 97.7553% ( 2) 00:15:22.980 4.954 - 4.978: 97.7779% ( 3) 00:15:22.980 4.978 - 5.001: 97.8233% ( 6) 00:15:22.980 5.001 - 5.025: 97.8535% ( 4) 00:15:22.980 5.025 - 5.049: 97.8913% ( 5) 00:15:22.980 5.049 - 5.073: 97.9442% ( 7) 00:15:22.980 5.073 - 5.096: 97.9518% ( 1) 00:15:22.980 5.096 - 5.120: 98.0047% ( 7) 00:15:22.980 5.120 - 5.144: 98.0500% ( 6) 00:15:22.980 5.144 - 5.167: 98.0727% ( 3) 00:15:22.980 5.167 - 5.191: 98.0878% ( 2) 00:15:22.980 5.191 - 5.215: 98.1105% ( 3) 00:15:22.980 5.215 - 5.239: 98.1181% ( 1) 00:15:22.980 5.239 - 5.262: 98.1332% ( 2) 00:15:22.980 5.262 - 5.286: 98.1483% ( 2) 00:15:22.980 5.286 - 5.310: 98.1558% ( 1) 00:15:22.980 5.333 - 5.357: 98.1785% ( 3) 
00:15:22.980 5.357 - 5.381: 98.1861% ( 1) 00:15:22.980 5.381 - 5.404: 98.2012% ( 2) 00:15:22.980 5.404 - 5.428: 98.2163% ( 2) 00:15:22.980 5.428 - 5.452: 98.2239% ( 1) 00:15:22.980 5.452 - 5.476: 98.2314% ( 1) 00:15:22.980 5.476 - 5.499: 98.2465% ( 2) 00:15:22.980 5.499 - 5.523: 98.2541% ( 1) 00:15:22.980 5.523 - 5.547: 98.2617% ( 1) 00:15:22.980 5.594 - 5.618: 98.2692% ( 1) 00:15:22.980 5.618 - 5.641: 98.2768% ( 1) 00:15:22.980 5.641 - 5.665: 98.2919% ( 2) 00:15:22.980 5.689 - 5.713: 98.2994% ( 1) 00:15:22.980 5.713 - 5.736: 98.3070% ( 1) 00:15:22.980 5.736 - 5.760: 98.3146% ( 1) 00:15:22.980 5.784 - 5.807: 98.3372% ( 3) 00:15:22.980 5.855 - 5.879: 98.3448% ( 1) 00:15:22.980 5.902 - 5.926: 98.3524% ( 1) 00:15:22.980 5.973 - 5.997: 98.3675% ( 2) 00:15:22.980 5.997 - 6.021: 98.3826% ( 2) 00:15:22.980 6.044 - 6.068: 98.3901% ( 1) 00:15:22.980 6.068 - 6.116: 98.3977% ( 1) 00:15:22.980 6.116 - 6.163: 98.4053% ( 1) 00:15:22.980 6.210 - 6.258: 98.4128% ( 1) 00:15:22.980 6.258 - 6.305: 98.4204% ( 1) 00:15:22.980 6.542 - 6.590: 98.4279% ( 1) 00:15:22.980 6.637 - 6.684: 98.4431% ( 2) 00:15:22.980 6.684 - 6.732: 98.4582% ( 2) 00:15:22.980 6.732 - 6.779: 98.4808% ( 3) 00:15:22.980 6.779 - 6.827: 98.4960% ( 2) 00:15:22.980 6.827 - 6.874: 98.5111% ( 2) 00:15:22.980 6.921 - 6.969: 98.5186% ( 1) 00:15:22.980 6.969 - 7.016: 98.5262% ( 1) 00:15:22.980 7.111 - 7.159: 98.5489% ( 3) 00:15:22.980 7.159 - 7.206: 98.5715% ( 3) 00:15:22.980 7.206 - 7.253: 98.5791% ( 1) 00:15:22.980 7.396 - 7.443: 98.5867% ( 1) 00:15:22.980 7.443 - 7.490: 98.5942% ( 1) 00:15:22.980 7.490 - 7.538: 98.6018% ( 1) 00:15:22.980 7.585 - 7.633: 98.6093% ( 1) 00:15:22.980 7.633 - 7.680: 98.6169% ( 1) 00:15:22.980 7.727 - 7.775: 98.6244% ( 1) 00:15:22.980 7.775 - 7.822: 98.6320% ( 1) 00:15:22.980 7.822 - 7.870: 98.6547% ( 3) 00:15:22.980 7.964 - 8.012: 98.6622% ( 1) 00:15:22.980 8.059 - 8.107: 98.6773% ( 2) 00:15:22.980 8.107 - 8.154: 98.7000% ( 3) 00:15:22.980 8.154 - 8.201: 98.7076% ( 1) 00:15:22.980 8.249 - 8.296: 98.7151% ( 1) 00:15:22.980 8.296 - 8.344: 98.7303% ( 2) 00:15:22.980 8.344 - 8.391: 98.7454% ( 2) 00:15:22.980 8.439 - 8.486: 98.7529% ( 1) 00:15:22.980 8.628 - 8.676: 98.7605% ( 1) 00:15:22.980 8.723 - 8.770: 98.7680% ( 1) 00:15:22.980 8.770 - 8.818: 98.7756% ( 1) 00:15:22.980 8.818 - 8.865: 98.7907% ( 2) 00:15:22.980 8.865 - 8.913: 98.7983% ( 1) 00:15:22.980 9.244 - 9.292: 98.8210% ( 3) 00:15:22.980 9.434 - 9.481: 98.8361% ( 2) 00:15:22.980 9.481 - 9.529: 98.8436% ( 1) 00:15:22.980 9.719 - 9.766: 98.8512% ( 1) 00:15:22.980 10.287 - 10.335: 98.8587% ( 1) 00:15:22.980 10.572 - 10.619: 98.8663% ( 1) 00:15:22.980 11.046 - 11.093: 98.8739% ( 1) 00:15:22.980 11.093 - 11.141: 98.8814% ( 1) 00:15:22.980 11.473 - 11.520: 98.8890% ( 1) 00:15:22.980 11.567 - 11.615: 98.9041% ( 2) 00:15:22.980 11.852 - 11.899: 98.9116% ( 1) 00:15:22.980 12.136 - 12.231: 98.9192% ( 1) 00:15:22.980 12.516 - 12.610: 98.9268% ( 1) 00:15:22.980 12.610 - 12.705: 98.9646% ( 5) 00:15:22.980 12.990 - 13.084: 98.9721% ( 1) 00:15:22.980 13.179 - 13.274: 98.9797% ( 1) 00:15:22.980 13.748 - 13.843: 98.9948% ( 2) 00:15:22.980 13.938 - 14.033: 99.0099% ( 2) 00:15:22.980 14.033 - 14.127: 99.0175% ( 1) 00:15:22.980 14.981 - 15.076: 99.0250% ( 1) 00:15:22.980 15.076 - 15.170: 99.0326% ( 1) 00:15:22.980 16.687 - 16.782: 99.0401% ( 1) 00:15:22.980 17.067 - 17.161: 99.0477% ( 1) 00:15:22.980 17.161 - 17.256: 99.0628% ( 2) 00:15:22.980 17.256 - 17.351: 99.0779% ( 2) 00:15:22.980 17.351 - 17.446: 99.1006% ( 3) 00:15:22.980 17.446 - 17.541: 99.1384% ( 5) 00:15:22.980 17.541 - 
17.636: 99.1535% ( 2) 00:15:22.980 17.636 - 17.730: 99.2064% ( 7) 00:15:22.980 17.730 - 17.825: 99.2442% ( 5) 00:15:22.980 17.825 - 17.920: 99.3047% ( 8) 00:15:22.980 17.920 - 18.015: 99.3500% ( 6) 00:15:22.980 18.015 - 18.110: 99.3727% ( 3) 00:15:22.980 18.110 - 18.204: 99.4105% ( 5) 00:15:22.980 18.204 - 18.299: 99.4936% ( 11) 00:15:22.980 18.299 - 18.394: 99.5692% ( 10) 00:15:22.981 18.394 - 18.489: 99.6523% ( 11) 00:15:22.981 18.489 - 18.584: 99.6674% ( 2) 00:15:22.981 18.584 - 18.679: 99.6901% ( 3) 00:15:22.981 18.679 - 18.773: 99.7204% ( 4) 00:15:22.981 18.773 - 18.868: 99.7581% ( 5) 00:15:22.981 18.868 - 18.963: 99.7884% ( 4) 00:15:22.981 18.963 - 19.058: 99.8035% ( 2) 00:15:22.981 19.058 - 19.153: 99.8186% ( 2) 00:15:22.981 19.153 - 19.247: 99.8337% ( 2) 00:15:22.981 19.342 - 19.437: 99.8413% ( 1) 00:15:22.981 19.437 - 19.532: 99.8488% ( 1) 00:15:22.981 19.721 - 19.816: 99.8640% ( 2) 00:15:22.981 20.101 - 20.196: 99.8715% ( 1) 00:15:22.981 20.290 - 20.385: 99.8791% ( 1) 00:15:22.981 20.575 - 20.670: 99.8866% ( 1) 00:15:22.981 21.902 - 21.997: 99.9017% ( 2) 00:15:22.981 36.978 - 37.167: 99.9093% ( 1) 00:15:22.981 3980.705 - 4004.978: 99.9547% ( 6) 00:15:22.981 4004.978 - 4029.250: 100.0000% ( 6) 00:15:22.981 00:15:22.981 Complete histogram 00:15:22.981 ================== 00:15:22.981 Range in us Cumulative Count 00:15:22.981 2.039 - 2.050: 5.2377% ( 693) 00:15:22.981 2.050 - 2.062: 41.4103% ( 4786) 00:15:22.981 2.062 - 2.074: 49.9206% ( 1126) 00:15:22.981 2.074 - 2.086: 54.7804% ( 643) 00:15:22.981 2.086 - 2.098: 61.2501% ( 856) 00:15:22.981 2.098 - 2.110: 63.0640% ( 240) 00:15:22.981 2.110 - 2.121: 68.8157% ( 761) 00:15:22.981 2.121 - 2.133: 76.1318% ( 968) 00:15:22.981 2.133 - 2.145: 77.2504% ( 148) 00:15:22.981 2.145 - 2.157: 79.6841% ( 322) 00:15:22.981 2.157 - 2.169: 81.8835% ( 291) 00:15:22.981 2.169 - 2.181: 82.5410% ( 87) 00:15:22.981 2.181 - 2.193: 84.8160% ( 301) 00:15:22.981 2.193 - 2.204: 88.1868% ( 446) 00:15:22.981 2.204 - 2.216: 90.2199% ( 269) 00:15:22.981 2.216 - 2.228: 92.1775% ( 259) 00:15:22.981 2.228 - 2.240: 93.2658% ( 144) 00:15:22.981 2.240 - 2.252: 93.6513% ( 51) 00:15:22.981 2.252 - 2.264: 93.9838% ( 44) 00:15:22.981 2.264 - 2.276: 94.1425% ( 21) 00:15:22.981 2.276 - 2.287: 94.6489% ( 67) 00:15:22.981 2.287 - 2.299: 95.0268% ( 50) 00:15:22.981 2.299 - 2.311: 95.2082% ( 24) 00:15:22.981 2.311 - 2.323: 95.3821% ( 23) 00:15:22.981 2.323 - 2.335: 95.5483% ( 22) 00:15:22.981 2.335 - 2.347: 95.8129% ( 35) 00:15:22.981 2.347 - 2.359: 96.2815% ( 62) 00:15:22.981 2.359 - 2.370: 96.8030% ( 69) 00:15:22.981 2.370 - 2.382: 97.0826% ( 37) 00:15:22.981 2.382 - 2.394: 97.2640% ( 24) 00:15:22.981 2.394 - 2.406: 97.4000% ( 18) 00:15:22.981 2.406 - 2.418: 97.5210% ( 16) 00:15:22.981 2.418 - 2.430: 97.6041% ( 11) 00:15:22.981 2.430 - 2.441: 97.7402% ( 18) 00:15:22.981 2.441 - 2.453: 97.8006% ( 8) 00:15:22.981 2.453 - 2.465: 97.8460% ( 6) 00:15:22.981 2.465 - 2.477: 97.8838% ( 5) 00:15:22.981 2.477 - 2.489: 97.9215% ( 5) 00:15:22.981 2.489 - 2.501: 97.9442% ( 3) 00:15:22.981 2.501 - 2.513: 97.9593% ( 2) 00:15:22.981 2.513 - 2.524: 97.9820% ( 3) 00:15:22.981 2.524 - 2.536: 97.9896% ( 1) 00:15:22.981 2.536 - 2.548: 98.0047% ( 2) 00:15:22.981 2.548 - 2.560: 98.0198% ( 2) 00:15:22.981 2.560 - 2.572: 98.0349% ( 2) 00:15:22.981 2.584 - 2.596: 98.0425% ( 1) 00:15:22.981 2.607 - 2.619: 98.0500% ( 1) 00:15:22.981 2.655 - 2.667: 98.0576% ( 1) 00:15:22.981 2.667 - 2.679: 98.0652% ( 1) 00:15:22.981 2.679 - 2.690: 98.0803% ( 2) 00:15:22.981 2.690 - 2.702: 98.1105% ( 4) 00:15:22.981 2.714 - 
2.726: 98.1256% ( 2) 00:15:22.981 2.738 - 2.750: 98.1332% ( 1) 00:15:22.981 2.773 - 2.785: 98.1407% ( 1) 00:15:22.981 2.844 - 2.856: 98.1483% ( 1) 00:15:22.981 2.951 - 2.963: 98.1634% ( 2) 00:15:22.981 2.963 - 2.975: 98.1785% ( 2) 00:15:22.981 2.975 - 2.987: 98.1861% ( 1) 00:15:22.981 3.034 - 3.058: 98.2012% ( 2) 00:15:22.981 3.058 - 3.081: 98.2163% ( 2) 00:15:22.981 3.081 - 3.105: 98.2314% ( 2) 00:15:22.981 3.105 - 3.129: 98.2617% ( 4) 00:15:22.981 3.153 - 3.176: 98.2843% ( 3) 00:15:22.981 3.176 - 3.200: 98.3146% ( 4) 00:15:22.981 [2024-07-13 08:02:14.350621] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:22.981 3.200 - 3.224: 98.3221% ( 1) 00:15:22.981 3.224 - 3.247: 98.3448% ( 3) 00:15:22.981 3.271 - 3.295: 98.3675% ( 3) 00:15:22.981 3.319 - 3.342: 98.3750% ( 1) 00:15:22.981 3.342 - 3.366: 98.3826% ( 1) 00:15:22.981 3.366 - 3.390: 98.4053% ( 3) 00:15:22.981 3.390 - 3.413: 98.4279% ( 3) 00:15:22.981 3.413 - 3.437: 98.4431% ( 2) 00:15:22.981 3.437 - 3.461: 98.4657% ( 3) 00:15:22.981 3.484 - 3.508: 98.4884% ( 3) 00:15:22.981 3.508 - 3.532: 98.4960% ( 1) 00:15:22.981 3.532 - 3.556: 98.5186% ( 3) 00:15:22.981 3.556 - 3.579: 98.5413% ( 3) 00:15:22.981 3.579 - 3.603: 98.5564% ( 2) 00:15:22.981 3.603 - 3.627: 98.5640% ( 1) 00:15:22.981 3.627 - 3.650: 98.5715% ( 1) 00:15:22.981 3.698 - 3.721: 98.5791% ( 1) 00:15:22.981 3.721 - 3.745: 98.5867% ( 1) 00:15:22.981 3.793 - 3.816: 98.5942% ( 1) 00:15:22.981 3.840 - 3.864: 98.6093% ( 2) 00:15:22.981 3.864 - 3.887: 98.6169% ( 1) 00:15:22.981 3.887 - 3.911: 98.6320% ( 2) 00:15:22.981 3.959 - 3.982: 98.6396% ( 1) 00:15:22.981 4.030 - 4.053: 98.6471% ( 1) 00:15:22.981 4.053 - 4.077: 98.6622% ( 2) 00:15:22.981 4.124 - 4.148: 98.6698% ( 1) 00:15:22.981 4.148 - 4.172: 98.6773% ( 1) 00:15:22.981 4.196 - 4.219: 98.6849% ( 1) 00:15:22.981 5.191 - 5.215: 98.6925% ( 1) 00:15:22.981 5.428 - 5.452: 98.7076% ( 2) 00:15:22.981 5.499 - 5.523: 98.7151% ( 1) 00:15:22.981 5.547 - 5.570: 98.7227% ( 1) 00:15:22.981 5.618 - 5.641: 98.7303% ( 1) 00:15:22.981 5.879 - 5.902: 98.7454% ( 2) 00:15:22.981 5.950 - 5.973: 98.7529% ( 1) 00:15:22.981 6.068 - 6.116: 98.7605% ( 1) 00:15:22.981 6.116 - 6.163: 98.7756% ( 2) 00:15:22.981 6.210 - 6.258: 98.7832% ( 1) 00:15:22.981 6.353 - 6.400: 98.7907% ( 1) 00:15:22.981 6.400 - 6.447: 98.7983% ( 1) 00:15:22.981 6.590 - 6.637: 98.8058% ( 1) 00:15:22.981 6.637 - 6.684: 98.8134% ( 1) 00:15:22.981 6.827 - 6.874: 98.8210% ( 1) 00:15:22.981 7.064 - 7.111: 98.8285% ( 1) 00:15:22.981 7.490 - 7.538: 98.8361% ( 1) 00:15:22.981 11.046 - 11.093: 98.8436% ( 1) 00:15:22.981 12.610 - 12.705: 98.8512% ( 1) 00:15:22.981 15.550 - 15.644: 98.8587% ( 1) 00:15:22.981 15.644 - 15.739: 98.8663% ( 1) 00:15:22.981 15.929 - 16.024: 98.9116% ( 6) 00:15:22.981 16.024 - 16.119: 98.9343% ( 3) 00:15:22.981 16.119 - 16.213: 98.9797% ( 6) 00:15:22.981 16.213 - 16.308: 98.9872% ( 1) 00:15:22.981 16.308 - 16.403: 99.0175% ( 4) 00:15:22.981 16.403 - 16.498: 99.0779% ( 8) 00:15:22.981 16.498 - 16.593: 99.1459% ( 9) 00:15:22.981 16.593 - 16.687: 99.1837% ( 5) 00:15:22.981 16.687 - 16.782: 99.2064% ( 3) 00:15:22.981 16.782 - 16.877: 99.2140% ( 1) 00:15:22.981 16.877 - 16.972: 99.2291% ( 2) 00:15:22.981 16.972 - 17.067: 99.2442% ( 2) 00:15:22.981 17.067 - 17.161: 99.2593% ( 2) 00:15:22.981 17.161 - 17.256: 99.2669% ( 1) 00:15:22.981 17.446 - 17.541: 99.2895% ( 3) 00:15:22.981 17.541 - 17.636: 99.3047% ( 2) 00:15:22.981 17.636 - 17.730: 99.3198% ( 2) 00:15:22.981 17.730 - 17.825: 99.3425% ( 3) 00:15:22.981 17.825 -
17.920: 99.3651% ( 3) 00:15:22.981 18.015 - 18.110: 99.3802% ( 2) 00:15:22.981 18.204 - 18.299: 99.3878% ( 1) 00:15:22.981 18.394 - 18.489: 99.4029% ( 2) 00:15:22.981 18.489 - 18.584: 99.4105% ( 1) 00:15:22.981 18.679 - 18.773: 99.4180% ( 1) 00:15:22.981 18.868 - 18.963: 99.4256% ( 1) 00:15:22.981 28.065 - 28.255: 99.4331% ( 1) 00:15:22.981 3980.705 - 4004.978: 99.7808% ( 46) 00:15:22.981 4004.978 - 4029.250: 99.9924% ( 28) 00:15:22.981 5971.058 - 5995.330: 100.0000% ( 1) 00:15:22.981 00:15:22.981 08:02:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:15:22.981 08:02:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:22.981 08:02:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:15:22.981 08:02:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:15:22.981 08:02:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:22.981 [ 00:15:22.982 { 00:15:22.982 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:22.982 "subtype": "Discovery", 00:15:22.982 "listen_addresses": [], 00:15:22.982 "allow_any_host": true, 00:15:22.982 "hosts": [] 00:15:22.982 }, 00:15:22.982 { 00:15:22.982 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:22.982 "subtype": "NVMe", 00:15:22.982 "listen_addresses": [ 00:15:22.982 { 00:15:22.982 "trtype": "VFIOUSER", 00:15:22.982 "adrfam": "IPv4", 00:15:22.982 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:22.982 "trsvcid": "0" 00:15:22.982 } 00:15:22.982 ], 00:15:22.982 "allow_any_host": true, 00:15:22.982 "hosts": [], 00:15:22.982 "serial_number": "SPDK1", 00:15:22.982 "model_number": "SPDK bdev Controller", 00:15:22.982 "max_namespaces": 32, 00:15:22.982 "min_cntlid": 1, 00:15:22.982 "max_cntlid": 65519, 00:15:22.982 "namespaces": [ 00:15:22.982 { 00:15:22.982 "nsid": 1, 00:15:22.982 "bdev_name": "Malloc1", 00:15:22.982 "name": "Malloc1", 00:15:22.982 "nguid": "DD421394E8D94743ADFE86FB1BB410A7", 00:15:22.982 "uuid": "dd421394-e8d9-4743-adfe-86fb1bb410a7" 00:15:22.982 }, 00:15:22.982 { 00:15:22.982 "nsid": 2, 00:15:22.982 "bdev_name": "Malloc3", 00:15:22.982 "name": "Malloc3", 00:15:22.982 "nguid": "552B3CA6A325452BB48522DBCAA922E3", 00:15:22.982 "uuid": "552b3ca6-a325-452b-b485-22dbcaa922e3" 00:15:22.982 } 00:15:22.982 ] 00:15:22.982 }, 00:15:22.982 { 00:15:22.982 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:22.982 "subtype": "NVMe", 00:15:22.982 "listen_addresses": [ 00:15:22.982 { 00:15:22.982 "trtype": "VFIOUSER", 00:15:22.982 "adrfam": "IPv4", 00:15:22.982 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:22.982 "trsvcid": "0" 00:15:22.982 } 00:15:22.982 ], 00:15:22.982 "allow_any_host": true, 00:15:22.982 "hosts": [], 00:15:22.982 "serial_number": "SPDK2", 00:15:22.982 "model_number": "SPDK bdev Controller", 00:15:22.982 "max_namespaces": 32, 00:15:22.982 "min_cntlid": 1, 00:15:22.982 "max_cntlid": 65519, 00:15:22.982 "namespaces": [ 00:15:22.982 { 00:15:22.982 "nsid": 1, 00:15:22.982 "bdev_name": "Malloc2", 00:15:22.982 "name": "Malloc2", 00:15:22.982 "nguid": "D76EF58F8C434FE691D1390BB7D2AB1B", 00:15:22.982 "uuid": "d76ef58f-8c43-4fe6-91d1-390bb7d2ab1b" 00:15:22.982 } 00:15:22.982 ] 00:15:22.982 } 00:15:22.982 ] 00:15:22.982 08:02:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # 
AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:22.982 08:02:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1922763 00:15:22.982 08:02:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:22.982 08:02:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:15:22.982 08:02:14 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:15:22.982 08:02:14 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:22.982 08:02:14 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:22.982 08:02:14 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:15:22.982 08:02:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:22.982 08:02:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:15:22.982 EAL: No free 2048 kB hugepages reported on node 1 00:15:23.239 [2024-07-13 08:02:14.810324] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:23.239 Malloc4 00:15:23.239 08:02:14 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:15:23.497 [2024-07-13 08:02:15.167986] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:23.497 08:02:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:23.497 Asynchronous Event Request test 00:15:23.497 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:23.497 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:23.497 Registering asynchronous event callbacks... 00:15:23.497 Starting namespace attribute notice tests for all controllers... 00:15:23.497 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:23.497 aer_cb - Changed Namespace 00:15:23.497 Cleaning up... 
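The JSON listing that follows is the nvmf_get_subsystems view captured after the AER fired. Condensed from the trace above, the RPC sequence that triggers the namespace-attribute-changed event is (commands as logged, with the workspace prefix on rpc.py shortened for readability):

    # create a new bdev and hot-add it as namespace 2 of cnode2 while aer waits
    scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2
    # the changed-namespace notice lands in aer_cb above; re-list to confirm
    scripts/rpc.py nvmf_get_subsystems
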
00:15:23.754 [ 00:15:23.754 { 00:15:23.754 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:23.754 "subtype": "Discovery", 00:15:23.754 "listen_addresses": [], 00:15:23.754 "allow_any_host": true, 00:15:23.754 "hosts": [] 00:15:23.754 }, 00:15:23.754 { 00:15:23.754 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:23.754 "subtype": "NVMe", 00:15:23.754 "listen_addresses": [ 00:15:23.754 { 00:15:23.754 "trtype": "VFIOUSER", 00:15:23.754 "adrfam": "IPv4", 00:15:23.754 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:23.754 "trsvcid": "0" 00:15:23.754 } 00:15:23.754 ], 00:15:23.754 "allow_any_host": true, 00:15:23.754 "hosts": [], 00:15:23.754 "serial_number": "SPDK1", 00:15:23.754 "model_number": "SPDK bdev Controller", 00:15:23.754 "max_namespaces": 32, 00:15:23.754 "min_cntlid": 1, 00:15:23.754 "max_cntlid": 65519, 00:15:23.754 "namespaces": [ 00:15:23.754 { 00:15:23.754 "nsid": 1, 00:15:23.754 "bdev_name": "Malloc1", 00:15:23.754 "name": "Malloc1", 00:15:23.754 "nguid": "DD421394E8D94743ADFE86FB1BB410A7", 00:15:23.754 "uuid": "dd421394-e8d9-4743-adfe-86fb1bb410a7" 00:15:23.754 }, 00:15:23.754 { 00:15:23.754 "nsid": 2, 00:15:23.754 "bdev_name": "Malloc3", 00:15:23.754 "name": "Malloc3", 00:15:23.754 "nguid": "552B3CA6A325452BB48522DBCAA922E3", 00:15:23.754 "uuid": "552b3ca6-a325-452b-b485-22dbcaa922e3" 00:15:23.754 } 00:15:23.754 ] 00:15:23.754 }, 00:15:23.754 { 00:15:23.754 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:23.754 "subtype": "NVMe", 00:15:23.754 "listen_addresses": [ 00:15:23.754 { 00:15:23.754 "trtype": "VFIOUSER", 00:15:23.754 "adrfam": "IPv4", 00:15:23.754 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:23.754 "trsvcid": "0" 00:15:23.754 } 00:15:23.754 ], 00:15:23.754 "allow_any_host": true, 00:15:23.754 "hosts": [], 00:15:23.754 "serial_number": "SPDK2", 00:15:23.754 "model_number": "SPDK bdev Controller", 00:15:23.754 "max_namespaces": 32, 00:15:23.754 "min_cntlid": 1, 00:15:23.754 "max_cntlid": 65519, 00:15:23.754 "namespaces": [ 00:15:23.754 { 00:15:23.754 "nsid": 1, 00:15:23.754 "bdev_name": "Malloc2", 00:15:23.754 "name": "Malloc2", 00:15:23.754 "nguid": "D76EF58F8C434FE691D1390BB7D2AB1B", 00:15:23.754 "uuid": "d76ef58f-8c43-4fe6-91d1-390bb7d2ab1b" 00:15:23.754 }, 00:15:23.754 { 00:15:23.754 "nsid": 2, 00:15:23.754 "bdev_name": "Malloc4", 00:15:23.754 "name": "Malloc4", 00:15:23.754 "nguid": "76CF563FC2544217BB0B6DF1383E08C8", 00:15:23.754 "uuid": "76cf563f-c254-4217-bb0b-6df1383e08c8" 00:15:23.754 } 00:15:23.754 ] 00:15:23.754 } 00:15:23.754 ] 00:15:23.754 08:02:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1922763 00:15:23.754 08:02:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:15:23.754 08:02:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1917161 00:15:23.754 08:02:15 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 1917161 ']' 00:15:23.754 08:02:15 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 1917161 00:15:23.754 08:02:15 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:15:23.754 08:02:15 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:23.754 08:02:15 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1917161 00:15:23.754 08:02:15 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:23.754 08:02:15 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo 
']' 00:15:23.754 08:02:15 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1917161' 00:15:23.754 killing process with pid 1917161 00:15:23.754 08:02:15 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 1917161 00:15:23.754 08:02:15 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 1917161 00:15:24.318 08:02:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:24.318 08:02:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:24.318 08:02:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:15:24.318 08:02:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:15:24.318 08:02:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:15:24.319 08:02:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1922906 00:15:24.319 08:02:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:15:24.319 08:02:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1922906' 00:15:24.319 Process pid: 1922906 00:15:24.319 08:02:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:24.319 08:02:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1922906 00:15:24.319 08:02:15 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 1922906 ']' 00:15:24.319 08:02:15 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:24.319 08:02:15 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:24.319 08:02:15 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:24.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:24.319 08:02:15 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:24.319 08:02:15 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:24.319 [2024-07-13 08:02:15.817056] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:15:24.319 [2024-07-13 08:02:15.818079] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:15:24.319 [2024-07-13 08:02:15.818156] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:24.319 EAL: No free 2048 kB hugepages reported on node 1 00:15:24.319 [2024-07-13 08:02:15.875884] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:24.319 [2024-07-13 08:02:15.964476] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:24.319 [2024-07-13 08:02:15.964543] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
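The trace that follows re-provisions the two vfio-user controllers for the interrupt-mode pass. For one device, the sequence reduces to the sketch below (commands, paths and NQNs as logged; -M -I are simply the transport_args the test passes through, and the rpc.py prefix is shortened):

    scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I
    mkdir -p /var/run/vfio-user/domain/vfio-user1/1
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
    scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER \
        -a /var/run/vfio-user/domain/vfio-user1/1 -s 0
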
00:15:24.319 [2024-07-13 08:02:15.964563] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:24.319 [2024-07-13 08:02:15.964574] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:24.319 [2024-07-13 08:02:15.964584] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:24.319 [2024-07-13 08:02:15.964668] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:24.319 [2024-07-13 08:02:15.964733] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:24.319 [2024-07-13 08:02:15.964802] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:24.319 [2024-07-13 08:02:15.964800] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:24.575 [2024-07-13 08:02:16.062932] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:15:24.575 [2024-07-13 08:02:16.063125] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:15:24.575 [2024-07-13 08:02:16.063372] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:15:24.575 [2024-07-13 08:02:16.064022] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:15:24.575 [2024-07-13 08:02:16.064268] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:15:24.575 08:02:16 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:24.575 08:02:16 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:15:24.575 08:02:16 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:25.506 08:02:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:15:25.763 08:02:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:25.763 08:02:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:25.763 08:02:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:25.763 08:02:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:25.763 08:02:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:26.020 Malloc1 00:15:26.020 08:02:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:26.278 08:02:17 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:26.535 08:02:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:15:27.109 08:02:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 
$NUM_DEVICES) 00:15:27.109 08:02:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:27.109 08:02:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:27.109 Malloc2 00:15:27.109 08:02:18 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:27.372 08:02:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:27.629 08:02:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:27.886 08:02:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:15:27.886 08:02:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1922906 00:15:27.886 08:02:19 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 1922906 ']' 00:15:27.886 08:02:19 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 1922906 00:15:27.886 08:02:19 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:15:27.886 08:02:19 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:27.886 08:02:19 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1922906 00:15:27.886 08:02:19 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:27.886 08:02:19 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:27.886 08:02:19 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1922906' 00:15:27.886 killing process with pid 1922906 00:15:27.886 08:02:19 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 1922906 00:15:27.886 08:02:19 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 1922906 00:15:28.143 08:02:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:28.143 08:02:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:28.143 00:15:28.143 real 0m53.054s 00:15:28.143 user 3m29.218s 00:15:28.143 sys 0m4.379s 00:15:28.143 08:02:19 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:28.143 08:02:19 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:28.143 ************************************ 00:15:28.143 END TEST nvmf_vfio_user 00:15:28.143 ************************************ 00:15:28.402 08:02:19 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:28.402 08:02:19 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:28.402 08:02:19 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:28.402 08:02:19 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:28.402 08:02:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:28.402 ************************************ 00:15:28.402 START 
TEST nvmf_vfio_user_nvme_compliance 00:15:28.402 ************************************ 00:15:28.402 08:02:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:28.402 * Looking for test storage... 00:15:28.402 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:15:28.402 08:02:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:28.402 08:02:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:15:28.402 08:02:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:28.402 08:02:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:28.402 08:02:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:28.402 08:02:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:28.402 08:02:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:28.402 08:02:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:28.402 08:02:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:28.402 08:02:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:28.402 08:02:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:28.402 08:02:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:28.402 08:02:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:28.402 08:02:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:28.402 08:02:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:28.402 08:02:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:28.402 08:02:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:28.402 08:02:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:28.402 08:02:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:28.402 08:02:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:28.402 08:02:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:28.402 08:02:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:28.402 08:02:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.402 08:02:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.402 08:02:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.402 08:02:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:15:28.402 08:02:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:28.402 08:02:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:15:28.402 08:02:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:28.402 08:02:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:28.402 08:02:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:28.402 08:02:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:28.402 08:02:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:28.402 08:02:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:28.402 08:02:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:28.402 08:02:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:28.402 08:02:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:15:28.402 08:02:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:28.402 08:02:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:15:28.402 08:02:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:15:28.402 08:02:19 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:15:28.402 08:02:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=1923497 00:15:28.402 08:02:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:15:28.402 08:02:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 1923497' 00:15:28.402 Process pid: 1923497 00:15:28.402 08:02:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:28.402 08:02:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 1923497 00:15:28.402 08:02:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@829 -- # '[' -z 1923497 ']' 00:15:28.402 08:02:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:28.402 08:02:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:28.402 08:02:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:28.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:28.402 08:02:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:28.402 08:02:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:28.402 [2024-07-13 08:02:20.048305] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:15:28.402 [2024-07-13 08:02:20.048408] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:28.402 EAL: No free 2048 kB hugepages reported on node 1 00:15:28.402 [2024-07-13 08:02:20.125076] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:28.660 [2024-07-13 08:02:20.225194] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:28.660 [2024-07-13 08:02:20.225257] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:28.660 [2024-07-13 08:02:20.225285] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:28.660 [2024-07-13 08:02:20.225308] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:28.660 [2024-07-13 08:02:20.225327] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
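Condensed from the rpc_cmd calls below, the compliance target is provisioned and exercised roughly as follows (NQN, socket path and malloc parameters as logged; assuming rpc_cmd resolves to scripts/rpc.py in this harness, and with the workspace prefixes shortened):

    scripts/rpc.py nvmf_create_transport -t VFIOUSER
    mkdir -p /var/run/vfio-user
    scripts/rpc.py bdev_malloc_create 64 512 -b malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0
    test/nvme/compliance/nvme_compliance -g \
        -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0'
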
00:15:28.660 [2024-07-13 08:02:20.225498] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:28.660 [2024-07-13 08:02:20.225573] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:28.660 [2024-07-13 08:02:20.225564] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:28.660 08:02:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:28.660 08:02:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@862 -- # return 0 00:15:28.660 08:02:20 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:15:30.029 08:02:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:30.029 08:02:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:15:30.029 08:02:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:30.029 08:02:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:30.029 08:02:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:30.029 08:02:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:30.029 08:02:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:15:30.029 08:02:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:30.029 08:02:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:30.029 08:02:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:30.029 malloc0 00:15:30.029 08:02:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:30.029 08:02:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:15:30.029 08:02:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:30.029 08:02:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:30.029 08:02:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:30.029 08:02:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:30.029 08:02:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:30.029 08:02:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:30.029 08:02:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:30.029 08:02:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:30.029 08:02:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:30.029 08:02:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:30.029 08:02:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:30.029 
08:02:21 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:15:30.029 EAL: No free 2048 kB hugepages reported on node 1 00:15:30.029 00:15:30.029 00:15:30.029 CUnit - A unit testing framework for C - Version 2.1-3 00:15:30.029 http://cunit.sourceforge.net/ 00:15:30.029 00:15:30.029 00:15:30.029 Suite: nvme_compliance 00:15:30.029 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-13 08:02:21.585393] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:30.029 [2024-07-13 08:02:21.586834] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:15:30.029 [2024-07-13 08:02:21.586893] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:15:30.029 [2024-07-13 08:02:21.586908] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:15:30.029 [2024-07-13 08:02:21.588411] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:30.029 passed 00:15:30.029 Test: admin_identify_ctrlr_verify_fused ...[2024-07-13 08:02:21.675997] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:30.029 [2024-07-13 08:02:21.679016] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:30.029 passed 00:15:30.285 Test: admin_identify_ns ...[2024-07-13 08:02:21.764420] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:30.286 [2024-07-13 08:02:21.823887] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:15:30.286 [2024-07-13 08:02:21.831882] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:15:30.286 [2024-07-13 08:02:21.853004] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:30.286 passed 00:15:30.286 Test: admin_get_features_mandatory_features ...[2024-07-13 08:02:21.936623] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:30.286 [2024-07-13 08:02:21.939642] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:30.286 passed 00:15:30.542 Test: admin_get_features_optional_features ...[2024-07-13 08:02:22.021168] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:30.542 [2024-07-13 08:02:22.024183] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:30.542 passed 00:15:30.542 Test: admin_set_features_number_of_queues ...[2024-07-13 08:02:22.108420] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:30.542 [2024-07-13 08:02:22.212971] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:30.542 passed 00:15:30.798 Test: admin_get_log_page_mandatory_logs ...[2024-07-13 08:02:22.295139] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:30.798 [2024-07-13 08:02:22.300189] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:30.798 passed 00:15:30.798 Test: admin_get_log_page_with_lpo ...[2024-07-13 08:02:22.381314] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:30.798 [2024-07-13 08:02:22.452896] 
ctrlr.c:2677:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:15:30.798 [2024-07-13 08:02:22.465964] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:30.798 passed 00:15:31.054 Test: fabric_property_get ...[2024-07-13 08:02:22.546561] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:31.054 [2024-07-13 08:02:22.547843] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:15:31.054 [2024-07-13 08:02:22.549586] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:31.054 passed 00:15:31.054 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-13 08:02:22.635133] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:31.054 [2024-07-13 08:02:22.636417] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:15:31.054 [2024-07-13 08:02:22.638168] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:31.054 passed 00:15:31.054 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-13 08:02:22.721437] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:31.310 [2024-07-13 08:02:22.804904] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:31.310 [2024-07-13 08:02:22.820879] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:31.310 [2024-07-13 08:02:22.825993] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:31.310 passed 00:15:31.310 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-13 08:02:22.912173] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:31.310 [2024-07-13 08:02:22.913449] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:15:31.310 [2024-07-13 08:02:22.915214] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:31.310 passed 00:15:31.310 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-13 08:02:22.997549] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:31.566 [2024-07-13 08:02:23.072891] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:31.566 [2024-07-13 08:02:23.095876] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:31.566 [2024-07-13 08:02:23.101005] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:31.566 passed 00:15:31.566 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-13 08:02:23.184160] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:31.566 [2024-07-13 08:02:23.185442] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:15:31.566 [2024-07-13 08:02:23.185489] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:15:31.566 [2024-07-13 08:02:23.187172] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:31.566 passed 00:15:31.566 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-13 08:02:23.271372] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:31.821 [2024-07-13 08:02:23.362904] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: 
invalid I/O queue size 1 00:15:31.821 [2024-07-13 08:02:23.370891] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:15:31.821 [2024-07-13 08:02:23.378874] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:15:31.821 [2024-07-13 08:02:23.386877] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:15:31.821 [2024-07-13 08:02:23.415984] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:31.821 passed 00:15:31.821 Test: admin_create_io_sq_verify_pc ...[2024-07-13 08:02:23.500599] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:31.821 [2024-07-13 08:02:23.523890] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:15:31.821 [2024-07-13 08:02:23.540977] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:32.078 passed 00:15:32.078 Test: admin_create_io_qp_max_qps ...[2024-07-13 08:02:23.626520] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:33.009 [2024-07-13 08:02:24.714896] nvme_ctrlr.c:5465:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:15:33.600 [2024-07-13 08:02:25.113043] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:33.600 passed 00:15:33.600 Test: admin_create_io_sq_shared_cq ...[2024-07-13 08:02:25.196464] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:33.600 [2024-07-13 08:02:25.325873] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:33.857 [2024-07-13 08:02:25.362965] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:33.857 passed 00:15:33.857 00:15:33.857 Run Summary: Type Total Ran Passed Failed Inactive 00:15:33.857 suites 1 1 n/a 0 0 00:15:33.857 tests 18 18 18 0 0 00:15:33.857 asserts 360 360 360 0 n/a 00:15:33.857 00:15:33.857 Elapsed time = 1.567 seconds 00:15:33.857 08:02:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 1923497 00:15:33.857 08:02:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@948 -- # '[' -z 1923497 ']' 00:15:33.857 08:02:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # kill -0 1923497 00:15:33.857 08:02:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # uname 00:15:33.857 08:02:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:33.857 08:02:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1923497 00:15:33.857 08:02:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:33.857 08:02:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:33.857 08:02:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1923497' 00:15:33.857 killing process with pid 1923497 00:15:33.857 08:02:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@967 -- # kill 1923497 00:15:33.857 08:02:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # wait 1923497 00:15:34.115 08:02:25 
nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:15:34.115 08:02:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:15:34.115 00:15:34.115 real 0m5.783s 00:15:34.115 user 0m16.248s 00:15:34.115 sys 0m0.539s 00:15:34.115 08:02:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:34.115 08:02:25 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:34.115 ************************************ 00:15:34.115 END TEST nvmf_vfio_user_nvme_compliance 00:15:34.115 ************************************ 00:15:34.115 08:02:25 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:34.115 08:02:25 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:34.115 08:02:25 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:34.115 08:02:25 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:34.115 08:02:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:34.115 ************************************ 00:15:34.115 START TEST nvmf_vfio_user_fuzz 00:15:34.115 ************************************ 00:15:34.115 08:02:25 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:34.115 * Looking for test storage... 00:15:34.115 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:34.115 08:02:25 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:34.115 08:02:25 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:15:34.115 08:02:25 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:34.115 08:02:25 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:34.115 08:02:25 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:34.115 08:02:25 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:34.115 08:02:25 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:34.115 08:02:25 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:34.115 08:02:25 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:34.115 08:02:25 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:34.115 08:02:25 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:34.115 08:02:25 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:34.115 08:02:25 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:34.115 08:02:25 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:34.115 08:02:25 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:34.115 08:02:25 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:34.115 08:02:25 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 
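Condensed from the xtrace records above, the compliance pass stands up a vfio-user controller with five RPCs and then points the nvme_compliance binary at the resulting socket directory. A minimal sketch of that bring-up, using scripts/rpc.py (the helper that rpc_cmd wraps in these test scripts) against the default /var/tmp/spdk.sock; this is a readable replay of the trace, not a verbatim excerpt:

    # vfio-user bring-up exercised by compliance.sh (values taken from the run above)
    scripts/rpc.py nvmf_create_transport -t VFIOUSER
    mkdir -p /var/run/vfio-user
    scripts/rpc.py bdev_malloc_create 64 512 -b malloc0    # 64 MiB bdev, 512 B blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 \
        -a -s spdk -m 32    # allow any host, serial "spdk", max 32 namespaces
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 \
        -t VFIOUSER -a /var/run/vfio-user -s 0
    test/nvme/compliance/nvme_compliance -g \
        -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0'

Each of the eighteen CUnit tests in the suite above enables the controller, pokes one admin or queue-management error path (the *ERROR* records), and disables it again, which is why every "passed" is bracketed by a pair of enabling/disabling controller notices.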
00:15:34.115 08:02:25 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:34.115 08:02:25 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:34.115 08:02:25 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:34.115 08:02:25 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:34.115 08:02:25 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:34.115 08:02:25 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:34.115 08:02:25 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:34.115 08:02:25 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:34.115 08:02:25 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:15:34.115 08:02:25 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:34.115 08:02:25 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:15:34.115 08:02:25 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:34.115 08:02:25 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:34.115 08:02:25 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:34.115 08:02:25 
nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:34.115 08:02:25 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:34.115 08:02:25 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:34.115 08:02:25 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:34.115 08:02:25 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:34.115 08:02:25 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:34.115 08:02:25 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:34.115 08:02:25 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:34.115 08:02:25 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:15:34.115 08:02:25 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:34.115 08:02:25 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:34.115 08:02:25 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:15:34.115 08:02:25 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=1924218 00:15:34.115 08:02:25 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:34.115 08:02:25 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 1924218' 00:15:34.115 Process pid: 1924218 00:15:34.115 08:02:25 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:34.115 08:02:25 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 1924218 00:15:34.115 08:02:25 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@829 -- # '[' -z 1924218 ']' 00:15:34.115 08:02:25 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:34.115 08:02:25 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:34.115 08:02:25 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:34.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
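The fuzz pass traced below repeats the same vfio-user bring-up (VFIOUSER transport, malloc0, cnode0, listener on /var/run/vfio-user) and then drives SPDK's nvme_fuzz at the socket. The invocation, with the flags that can be read directly off this run annotated; -N and -a are carried over verbatim from vfio_user_fuzz.sh:

    # nvme_fuzz invocation, as traced below
    #   -m 0x2      reactor core mask for the fuzz app
    #   -t 30       fuzz duration in seconds (08:02:27 -> 08:02:57 in this run)
    #   -S 123456   fixed RNG seed, echoed back in the random_seed fields of the summary
    #   -F ...      transport ID of the target under attack
    test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 \
        -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' \
        -N -a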
00:15:34.115 08:02:25 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:34.116 08:02:25 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:34.680 08:02:26 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:34.680 08:02:26 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@862 -- # return 0 00:15:34.680 08:02:26 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:15:35.612 08:02:27 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:35.612 08:02:27 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:35.612 08:02:27 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:35.612 08:02:27 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:35.612 08:02:27 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:15:35.612 08:02:27 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:35.612 08:02:27 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:35.612 08:02:27 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:35.612 malloc0 00:15:35.612 08:02:27 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:35.612 08:02:27 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:15:35.612 08:02:27 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:35.612 08:02:27 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:35.612 08:02:27 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:35.612 08:02:27 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:35.612 08:02:27 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:35.612 08:02:27 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:35.612 08:02:27 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:35.612 08:02:27 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:35.612 08:02:27 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:35.612 08:02:27 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:35.612 08:02:27 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:35.612 08:02:27 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:15:35.612 08:02:27 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:16:07.668 Fuzzing completed. 
Shutting down the fuzz application 00:16:07.668 00:16:07.668 Dumping successful admin opcodes: 00:16:07.668 8, 9, 10, 24, 00:16:07.668 Dumping successful io opcodes: 00:16:07.668 0, 00:16:07.668 NS: 0x200003a1ef00 I/O qp, Total commands completed: 582980, total successful commands: 2242, random_seed: 1759464256 00:16:07.668 NS: 0x200003a1ef00 admin qp, Total commands completed: 75086, total successful commands: 587, random_seed: 110182144 00:16:07.668 08:02:57 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:16:07.668 08:02:57 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:07.668 08:02:57 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:07.668 08:02:57 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:07.668 08:02:57 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 1924218 00:16:07.668 08:02:57 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@948 -- # '[' -z 1924218 ']' 00:16:07.668 08:02:57 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # kill -0 1924218 00:16:07.668 08:02:57 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # uname 00:16:07.668 08:02:57 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:07.668 08:02:57 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1924218 00:16:07.668 08:02:57 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:07.668 08:02:57 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:07.668 08:02:57 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1924218' 00:16:07.668 killing process with pid 1924218 00:16:07.668 08:02:57 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@967 -- # kill 1924218 00:16:07.668 08:02:57 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # wait 1924218 00:16:07.668 08:02:57 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:16:07.668 08:02:57 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:16:07.668 00:16:07.668 real 0m32.219s 00:16:07.668 user 0m31.158s 00:16:07.668 sys 0m28.956s 00:16:07.668 08:02:57 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:07.668 08:02:57 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:07.668 ************************************ 00:16:07.668 END TEST nvmf_vfio_user_fuzz 00:16:07.668 ************************************ 00:16:07.668 08:02:58 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:07.668 08:02:58 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:16:07.668 08:02:58 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:07.668 08:02:58 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:07.668 08:02:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:07.668 ************************************ 00:16:07.668 
START TEST nvmf_host_management 00:16:07.668 ************************************ 00:16:07.668 08:02:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:16:07.668 * Looking for test storage... 00:16:07.668 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:07.668 08:02:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:07.668 08:02:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:16:07.668 08:02:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:07.668 08:02:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:07.668 08:02:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:07.668 08:02:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:07.668 08:02:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:07.668 08:02:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:07.668 08:02:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:07.668 08:02:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:07.668 08:02:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:07.668 08:02:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:07.668 08:02:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:07.668 08:02:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:07.668 08:02:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:07.668 08:02:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:07.668 08:02:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:07.668 08:02:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:07.668 08:02:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:07.668 08:02:58 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:07.668 08:02:58 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:07.668 08:02:58 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:07.668 08:02:58 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:07.668 08:02:58 
nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:07.668 08:02:58 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:07.668 08:02:58 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:16:07.668 08:02:58 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:07.668 08:02:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:16:07.668 08:02:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:07.668 08:02:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:07.668 08:02:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:07.668 08:02:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:07.668 08:02:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:07.668 08:02:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:07.668 08:02:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:07.668 08:02:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:07.668 08:02:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:07.668 08:02:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:07.668 08:02:58 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:16:07.668 08:02:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:07.668 08:02:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:07.668 08:02:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:07.668 08:02:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:07.668 08:02:58 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:07.668 08:02:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:07.669 08:02:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:07.669 08:02:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:07.669 08:02:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:07.669 08:02:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:07.669 08:02:58 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:16:07.669 08:02:58 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:08.604 08:03:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:08.604 08:03:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:16:08.604 08:03:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:08.604 08:03:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:08.604 08:03:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:08.604 08:03:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:08.604 08:03:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:08.604 08:03:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:16:08.604 08:03:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:08.604 08:03:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:16:08.604 08:03:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:16:08.604 08:03:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:16:08.604 08:03:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:16:08.604 08:03:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:16:08.604 08:03:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:16:08.604 08:03:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:08.604 08:03:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:08.604 08:03:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:08.604 08:03:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:08.604 08:03:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:08.604 08:03:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:08.604 08:03:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:08.605 08:03:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:08.605 08:03:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:08.605 08:03:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:08.605 08:03:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:08.605 08:03:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:08.605 08:03:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:08.605 08:03:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:08.605 08:03:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:08.605 08:03:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:08.605 08:03:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:08.605 08:03:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:08.605 08:03:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:08.605 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:08.605 08:03:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:08.605 08:03:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:08.605 08:03:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:08.605 08:03:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:08.605 08:03:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:08.605 08:03:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:08.605 08:03:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:08.605 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:08.605 08:03:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:08.605 08:03:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:08.605 08:03:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:08.605 08:03:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:08.605 08:03:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:08.605 08:03:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:08.605 08:03:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:08.605 08:03:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:08.605 08:03:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:08.605 08:03:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:08.605 08:03:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:08.605 08:03:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:08.605 08:03:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:08.605 08:03:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:08.605 08:03:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:08.605 08:03:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:08.605 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:08.605 08:03:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:16:08.605 08:03:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:08.605 08:03:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:08.605 08:03:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:08.605 08:03:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:08.605 08:03:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:08.605 08:03:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:08.605 08:03:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:08.605 08:03:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:08.605 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:08.605 08:03:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:08.605 08:03:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:08.605 08:03:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:16:08.605 08:03:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:08.605 08:03:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:08.605 08:03:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:08.605 08:03:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:08.605 08:03:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:08.605 08:03:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:08.605 08:03:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:08.605 08:03:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:08.605 08:03:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:08.605 08:03:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:08.605 08:03:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:08.605 08:03:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:08.605 08:03:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:08.605 08:03:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:08.605 08:03:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:08.605 08:03:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:08.605 08:03:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:08.605 08:03:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:08.605 08:03:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:08.605 08:03:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:08.605 08:03:00 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:08.605 08:03:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:08.605 08:03:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:08.605 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:08.605 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.122 ms 00:16:08.605 00:16:08.605 --- 10.0.0.2 ping statistics --- 00:16:08.605 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:08.605 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:16:08.605 08:03:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:08.605 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:08.605 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.110 ms 00:16:08.605 00:16:08.605 --- 10.0.0.1 ping statistics --- 00:16:08.605 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:08.605 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:16:08.605 08:03:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:08.605 08:03:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:16:08.605 08:03:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:08.605 08:03:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:08.605 08:03:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:08.605 08:03:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:08.605 08:03:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:08.605 08:03:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:08.605 08:03:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:08.605 08:03:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:16:08.605 08:03:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:16:08.605 08:03:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:16:08.605 08:03:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:08.605 08:03:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:08.605 08:03:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:08.605 08:03:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=1929690 00:16:08.605 08:03:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:16:08.605 08:03:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 1929690 00:16:08.605 08:03:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 1929690 ']' 00:16:08.605 08:03:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:08.605 08:03:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:08.605 08:03:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:16:08.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:08.605 08:03:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:08.605 08:03:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:08.863 [2024-07-13 08:03:00.353348] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:16:08.863 [2024-07-13 08:03:00.353442] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:08.863 EAL: No free 2048 kB hugepages reported on node 1 00:16:08.863 [2024-07-13 08:03:00.424111] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:08.863 [2024-07-13 08:03:00.522123] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:08.863 [2024-07-13 08:03:00.522192] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:08.863 [2024-07-13 08:03:00.522209] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:08.863 [2024-07-13 08:03:00.522223] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:08.863 [2024-07-13 08:03:00.522234] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:08.863 [2024-07-13 08:03:00.522323] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:08.863 [2024-07-13 08:03:00.522351] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:08.863 [2024-07-13 08:03:00.522403] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:16:08.863 [2024-07-13 08:03:00.522406] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:09.120 08:03:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:09.120 08:03:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:16:09.120 08:03:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:09.120 08:03:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:09.120 08:03:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:09.120 08:03:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:09.120 08:03:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:09.120 08:03:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.121 08:03:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:09.121 [2024-07-13 08:03:00.663515] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:09.121 08:03:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.121 08:03:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:16:09.121 08:03:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:09.121 08:03:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:09.121 08:03:00 
nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:09.121 08:03:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:16:09.121 08:03:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:16:09.121 08:03:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.121 08:03:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:09.121 Malloc0 00:16:09.121 [2024-07-13 08:03:00.723353] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:09.121 08:03:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.121 08:03:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:16:09.121 08:03:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:09.121 08:03:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:09.121 08:03:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1929871 00:16:09.121 08:03:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1929871 /var/tmp/bdevperf.sock 00:16:09.121 08:03:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 1929871 ']' 00:16:09.121 08:03:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:16:09.121 08:03:00 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:16:09.121 08:03:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:09.121 08:03:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:09.121 08:03:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:16:09.121 08:03:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:09.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
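Two things are worth decoding at this point in the trace. First, the data path: nvmftestinit above split the two ice-driven E810 port functions across a network namespace, so the initiator side (cvl_0_1, 10.0.0.1) and the namespaced target side (cvl_0_0, 10.0.0.2) exchange real TCP traffic over the link rather than loopback. Condensed replay of the commands already traced:

    # nvmftestinit plumbing, condensed from the records above
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into its own netns
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # sanity check, both directions
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

Second, the target has just confirmed it is listening on 10.0.0.2:4420 with Malloc0 attached, and the bdevperf process launched above (perfpid 1929871) connects to it from the default namespace.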
00:16:09.121 08:03:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:16:09.121 08:03:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:09.121 08:03:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:09.121 08:03:00 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:09.121 08:03:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:09.121 { 00:16:09.121 "params": { 00:16:09.121 "name": "Nvme$subsystem", 00:16:09.121 "trtype": "$TEST_TRANSPORT", 00:16:09.121 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:09.121 "adrfam": "ipv4", 00:16:09.121 "trsvcid": "$NVMF_PORT", 00:16:09.121 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:09.121 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:09.121 "hdgst": ${hdgst:-false}, 00:16:09.121 "ddgst": ${ddgst:-false} 00:16:09.121 }, 00:16:09.121 "method": "bdev_nvme_attach_controller" 00:16:09.121 } 00:16:09.121 EOF 00:16:09.121 )") 00:16:09.121 08:03:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:16:09.121 08:03:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:16:09.121 08:03:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:16:09.121 08:03:00 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:09.121 "params": { 00:16:09.121 "name": "Nvme0", 00:16:09.121 "trtype": "tcp", 00:16:09.121 "traddr": "10.0.0.2", 00:16:09.121 "adrfam": "ipv4", 00:16:09.121 "trsvcid": "4420", 00:16:09.121 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:09.121 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:16:09.121 "hdgst": false, 00:16:09.121 "ddgst": false 00:16:09.121 }, 00:16:09.121 "method": "bdev_nvme_attach_controller" 00:16:09.121 }' 00:16:09.121 [2024-07-13 08:03:00.798948] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:16:09.121 [2024-07-13 08:03:00.799031] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1929871 ] 00:16:09.121 EAL: No free 2048 kB hugepages reported on node 1 00:16:09.378 [2024-07-13 08:03:00.862744] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:09.379 [2024-07-13 08:03:00.950063] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:09.636 Running I/O for 10 seconds... 
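The bdevperf invocation itself decodes as a 10-second verify workload against whatever the JSON config attaches, here a single NVMe-oF controller Nvme0 reached over TCP at 10.0.0.2:4420; the stanza printed above arrives on /dev/fd/63. A sketch with the flags spelled out (path shortened):

    # bdevperf flags from the run above
    #   -r /var/tmp/bdevperf.sock   private RPC socket (polled by waitforio below)
    #   --json /dev/fd/63           bdev config: bdev_nvme_attach_controller for Nvme0
    #   -q 64                       queue depth
    #   -o 65536                    I/O size in bytes (64 KiB)
    #   -w verify                   write, read back, and compare workload
    #   -t 10                       run time in seconds
    build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 \
        -q 64 -o 65536 -w verify -t 10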
00:16:09.636 08:03:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:09.894 08:03:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:16:09.894 08:03:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:16:09.894 08:03:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.894 08:03:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:09.894 08:03:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.894 08:03:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:09.894 08:03:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:16:09.894 08:03:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:16:09.894 08:03:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:16:09.894 08:03:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:16:09.894 08:03:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:16:09.894 08:03:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:16:09.894 08:03:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:16:09.894 08:03:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:16:09.894 08:03:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:16:09.894 08:03:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.894 08:03:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:09.894 08:03:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.894 08:03:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:16:09.894 08:03:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:16:09.894 08:03:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:16:10.153 08:03:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:16:10.153 08:03:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:16:10.153 08:03:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:16:10.153 08:03:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:16:10.153 08:03:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.153 08:03:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:10.153 08:03:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.153 08:03:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=515 00:16:10.153 08:03:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 515 -ge 100 ']' 00:16:10.153 08:03:01 nvmf_tcp.nvmf_host_management -- 
target/host_management.sh@59 -- # ret=0
00:16:10.153 08:03:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break
00:16:10.153 08:03:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0
00:16:10.153 08:03:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:16:10.153 08:03:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable
00:16:10.153 08:03:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:16:10.153 [2024-07-13 08:03:01.729317 .. 08:03:01.731428] nvme_qpair.c: 243/474: [128 near-identical NOTICE lines condensed] all 64 in-flight commands on qid:1 were reported and aborted: WRITE cid:62-63 (lba 81664-81792) and READ cid:0-61 (lba 73728-81536), len:128 each, every completion ABORTED - SQ DELETION (00/08) sqhd:0000 p:0 m:0 dnr:0
00:16:10.155 [2024-07-13 08:03:01.731444] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb7e420 is same with the state(5) to be set
00:16:10.155 [2024-07-13 08:03:01.731519] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xb7e420 was disconnected and freed. reset controller.
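The condensed dump above is regular enough to check mechanically: with -q 64 in flight, exactly 64 commands (62 reads plus 2 writes) should be aborted when the queue pair goes away. A quick tally over a saved copy of this console output (bdevperf.log is a hypothetical capture file name):

    grep -oE '\*NOTICE\*: (READ|WRITE) sqid:1' bdevperf.log | sort | uniq -c
    #   62 *NOTICE*: READ sqid:1
    #    2 *NOTICE*: WRITE sqid:1   -> 64 total, matching the queue depth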
00:16:10.155 [2024-07-13 08:03:01.732706] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
08:03:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
08:03:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
08:03:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable
08:03:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:16:10.155 task offset: 81664 on job bdev=Nvme0n1 fails
00:16:10.155
00:16:10.155 Latency(us)
00:16:10.155 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:10.155 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:10.155 Job: Nvme0n1 ended in about 0.40 seconds with error
00:16:10.155 Verification LBA range: start 0x0 length 0x400
00:16:10.155 Nvme0n1 : 0.40 1434.28 89.64 159.36 0.00 39022.53 2779.21 34175.81
00:16:10.155 ===================================================================================================================
00:16:10.155 Total : 1434.28 89.64 159.36 0.00 39022.53 2779.21 34175.81
00:16:10.155 [2024-07-13 08:03:01.734630] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:16:10.155 [2024-07-13 08:03:01.734659] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb84000 (9): Bad file descriptor
08:03:01 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
08:03:01 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
[2024-07-13 08:03:01.745311] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
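What just happened, reduced to a standalone RPC sequence (a sketch; NQNs from this run, rpc.py path abbreviated): revoking the host's access tears down its queue pairs, which surfaces on the initiator as the SQ-deletion aborts above and triggers a controller reset; re-granting access lets that reset reconnect.

    rpc=scripts/rpc.py
    # revoke: the initiator's in-flight I/O completes as ABORTED - SQ DELETION
    $rpc nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    # re-grant: the pending bdev_nvme reset can now reconnect successfully
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0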
00:16:11.085 08:03:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1929871 00:16:11.085 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1929871) - No such process 00:16:11.085 08:03:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:16:11.085 08:03:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:16:11.085 08:03:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:16:11.085 08:03:02 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:16:11.085 08:03:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:16:11.085 08:03:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:16:11.085 08:03:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:11.085 08:03:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:11.085 { 00:16:11.085 "params": { 00:16:11.085 "name": "Nvme$subsystem", 00:16:11.085 "trtype": "$TEST_TRANSPORT", 00:16:11.085 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:11.085 "adrfam": "ipv4", 00:16:11.085 "trsvcid": "$NVMF_PORT", 00:16:11.085 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:11.085 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:11.085 "hdgst": ${hdgst:-false}, 00:16:11.085 "ddgst": ${ddgst:-false} 00:16:11.085 }, 00:16:11.085 "method": "bdev_nvme_attach_controller" 00:16:11.085 } 00:16:11.085 EOF 00:16:11.085 )") 00:16:11.085 08:03:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:16:11.085 08:03:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:16:11.085 08:03:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:16:11.085 08:03:02 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:11.085 "params": { 00:16:11.085 "name": "Nvme0", 00:16:11.085 "trtype": "tcp", 00:16:11.085 "traddr": "10.0.0.2", 00:16:11.085 "adrfam": "ipv4", 00:16:11.085 "trsvcid": "4420", 00:16:11.085 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:11.085 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:16:11.085 "hdgst": false, 00:16:11.085 "ddgst": false 00:16:11.085 }, 00:16:11.085 "method": "bdev_nvme_attach_controller" 00:16:11.085 }' 00:16:11.085 [2024-07-13 08:03:02.788132] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:16:11.085 [2024-07-13 08:03:02.788262] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1930093 ] 00:16:11.085 EAL: No free 2048 kB hugepages reported on node 1 00:16:11.342 [2024-07-13 08:03:02.850391] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:11.342 [2024-07-13 08:03:02.938300] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:11.600 Running I/O for 1 seconds... 
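For this second bdevperf run the config generation is traced again above: the heredoc template is expanded from the test environment, passed through jq, and handed to bdevperf as an anonymous file descriptor (the /dev/fd/62 in the command line). A sketch of the flow; note the outer subsystems/bdev wrapper is an assumption here, since only the inner params object is printed in the log:

    gen_target_json() {
        jq . <<EOF
    {
      "subsystems": [ {
        "subsystem": "bdev",
        "config": [ {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "$TEST_TRANSPORT",
            "traddr": "$NVMF_FIRST_TARGET_IP",
            "adrfam": "ipv4",
            "trsvcid": "$NVMF_PORT",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        } ]
      } ]
    }
    EOF
    }
    # process substitution yields a /dev/fd/NN path, as seen in the trace:
    build/examples/bdevperf --json <(gen_target_json) -q 64 -o 65536 -w verify -t 1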
00:16:12.972
00:16:12.972 Latency(us)
00:16:12.972 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:12.972 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:12.972 Verification LBA range: start 0x0 length 0x400
00:16:12.972 Nvme0n1 : 1.05 1455.94 91.00 0.00 0.00 41653.35 10485.76 53982.25
00:16:12.972 ===================================================================================================================
00:16:12.972 Total : 1455.94 91.00 0.00 0.00 41653.35 10485.76 53982.25
00:16:12.972 08:03:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
08:03:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
08:03:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
08:03:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
08:03:04 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
08:03:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup
08:03:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync
08:03:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
08:03:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e
08:03:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20}
08:03:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
08:03:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
08:03:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e
08:03:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0
08:03:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 1929690 ']'
08:03:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 1929690
08:03:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 1929690 ']'
08:03:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 1929690
08:03:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # uname
08:03:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
08:03:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1929690
08:03:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1
08:03:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
08:03:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1929690'
killing process with pid 1929690
08:03:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 1929690
08:03:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 1929690
00:16:13.239 [2024-07-13 08:03:04.795154]
app.c: 710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:16:13.239 08:03:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:13.239 08:03:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:13.239 08:03:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:13.239 08:03:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:13.239 08:03:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:13.239 08:03:04 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:13.239 08:03:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:13.239 08:03:04 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:15.138 08:03:06 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:15.138 08:03:06 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:16:15.138 00:16:15.138 real 0m8.837s 00:16:15.138 user 0m20.179s 00:16:15.138 sys 0m2.678s 00:16:15.138 08:03:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:15.138 08:03:06 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:15.138 ************************************ 00:16:15.138 END TEST nvmf_host_management 00:16:15.138 ************************************ 00:16:15.396 08:03:06 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:15.396 08:03:06 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:16:15.396 08:03:06 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:15.396 08:03:06 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:15.396 08:03:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:15.396 ************************************ 00:16:15.396 START TEST nvmf_lvol 00:16:15.396 ************************************ 00:16:15.396 08:03:06 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:16:15.396 * Looking for test storage... 
00:16:15.396 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:15.396 08:03:06 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:15.396 08:03:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:16:15.396 08:03:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:15.396 08:03:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:15.396 08:03:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:15.396 08:03:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:15.396 08:03:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:15.396 08:03:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:15.396 08:03:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:15.396 08:03:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:15.396 08:03:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:15.396 08:03:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:15.396 08:03:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:15.396 08:03:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:15.396 08:03:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:15.396 08:03:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:15.396 08:03:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:15.396 08:03:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:15.396 08:03:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:15.396 08:03:06 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:15.396 08:03:06 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:15.396 08:03:06 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:15.396 08:03:06 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:15.396 08:03:06 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:15.397 08:03:06 
nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:15.397 08:03:06 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:16:15.397 08:03:06 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:15.397 08:03:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:16:15.397 08:03:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:15.397 08:03:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:15.397 08:03:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:15.397 08:03:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:15.397 08:03:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:15.397 08:03:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:15.397 08:03:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:15.397 08:03:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:15.397 08:03:06 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:15.397 08:03:06 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:15.397 08:03:06 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:16:15.397 08:03:06 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:16:15.397 08:03:06 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:15.397 08:03:06 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:16:15.397 08:03:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:15.397 08:03:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:15.397 08:03:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:15.397 08:03:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:15.397 08:03:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:15.397 08:03:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:15.397 08:03:06 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:15.397 08:03:06 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:15.397 08:03:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:15.397 08:03:06 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:15.397 08:03:06 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:16:15.397 08:03:06 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:17.296 08:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:17.296 08:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:16:17.296 08:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:17.296 08:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:17.296 08:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:17.296 08:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:17.296 08:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:17.296 08:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:16:17.296 08:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:17.296 08:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:16:17.296 08:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:16:17.296 08:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:16:17.296 08:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:16:17.296 08:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:16:17.296 08:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:16:17.296 08:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:17.296 08:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:17.297 08:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:17.297 08:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:17.297 08:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:17.297 08:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:17.297 08:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:17.297 08:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:17.297 08:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:17.297 08:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:17.297 08:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:17.297 08:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:17.297 08:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:17.297 08:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:17.297 08:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:17.297 08:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:17.297 08:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:17.297 08:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:17.297 08:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:17.297 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:17.297 08:03:08 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:17.297 08:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:17.297 08:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:17.297 08:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:17.297 08:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:17.297 08:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:17.297 08:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:17.297 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:17.297 08:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:17.297 08:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:17.297 08:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:17.297 08:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:17.297 08:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:17.297 08:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:17.297 08:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:17.297 08:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:17.297 08:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:17.297 08:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:17.297 08:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:17.297 08:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:17.297 08:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:17.297 08:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:17.297 08:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:17.297 08:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:17.297 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:17.297 08:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:17.297 08:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:17.297 08:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:17.297 08:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:17.297 08:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:17.297 08:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:17.297 08:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:17.297 08:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:17.297 08:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:17.297 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:17.297 08:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:17.297 08:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:17.297 08:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:16:17.297 08:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:17.297 
08:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:17.297 08:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:17.297 08:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:17.297 08:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:17.297 08:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:17.297 08:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:17.297 08:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:17.297 08:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:17.297 08:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:17.297 08:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:17.297 08:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:17.297 08:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:17.297 08:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:17.297 08:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:17.297 08:03:08 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:17.562 08:03:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:17.562 08:03:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:17.562 08:03:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:17.562 08:03:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:17.562 08:03:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:17.562 08:03:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:17.562 08:03:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:17.562 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:17.562 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.191 ms 00:16:17.562 00:16:17.562 --- 10.0.0.2 ping statistics --- 00:16:17.562 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:17.562 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:16:17.562 08:03:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:17.562 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:17.562 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:16:17.562 00:16:17.562 --- 10.0.0.1 ping statistics --- 00:16:17.562 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:17.562 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:16:17.562 08:03:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:17.562 08:03:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:16:17.562 08:03:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:17.562 08:03:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:17.562 08:03:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:17.562 08:03:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:17.562 08:03:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:17.562 08:03:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:17.562 08:03:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:17.562 08:03:09 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:16:17.562 08:03:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:17.562 08:03:09 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:17.562 08:03:09 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:17.562 08:03:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=1932789 00:16:17.562 08:03:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:16:17.562 08:03:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 1932789 00:16:17.562 08:03:09 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 1932789 ']' 00:16:17.562 08:03:09 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:17.562 08:03:09 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:17.562 08:03:09 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:17.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:17.562 08:03:09 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:17.562 08:03:09 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:17.562 [2024-07-13 08:03:09.190837] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:16:17.562 [2024-07-13 08:03:09.190939] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:17.562 EAL: No free 2048 kB hugepages reported on node 1 00:16:17.562 [2024-07-13 08:03:09.254226] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:17.824 [2024-07-13 08:03:09.339792] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:17.824 [2024-07-13 08:03:09.339847] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:17.824 [2024-07-13 08:03:09.339876] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:17.824 [2024-07-13 08:03:09.339916] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:17.824 [2024-07-13 08:03:09.339928] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:17.824 [2024-07-13 08:03:09.339974] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:17.824 [2024-07-13 08:03:09.340033] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:17.824 [2024-07-13 08:03:09.340035] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:17.824 08:03:09 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:17.824 08:03:09 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:16:17.824 08:03:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:17.824 08:03:09 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:17.824 08:03:09 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:17.824 08:03:09 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:17.824 08:03:09 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:18.082 [2024-07-13 08:03:09.697608] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:18.082 08:03:09 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:18.340 08:03:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:16:18.340 08:03:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:18.598 08:03:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:16:18.598 08:03:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:16:18.855 08:03:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:16:19.113 08:03:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=42f0585b-5b83-4abf-ab6b-03d85d846997 00:16:19.113 08:03:10 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 42f0585b-5b83-4abf-ab6b-03d85d846997 lvol 20 00:16:19.370 08:03:11 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=6c1f8b5f-da28-4804-a9da-af55a5694a1a 00:16:19.370 08:03:11 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:16:19.627 08:03:11 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 6c1f8b5f-da28-4804-a9da-af55a5694a1a 00:16:19.884 08:03:11 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
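Condensing the provisioning traced above into a plain RPC sequence (a sketch; UUIDs differ per run, rpc.py path abbreviated, and the 20 is the lvol size in MiB per LVOL_BDEV_INIT_SIZE), with the listener coming up in the notice that follows:

    rpc=scripts/rpc.py
    m0=$($rpc bdev_malloc_create 64 512)                     # -> Malloc0 (64 MiB, 512 B blocks)
    m1=$($rpc bdev_malloc_create 64 512)                     # -> Malloc1
    $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b "$m0 $m1"   # stripe the two malloc bdevs
    lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)           # -> lvstore UUID
    lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)          # -> lvol bdev UUID
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420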
00:16:20.140 [2024-07-13 08:03:11.777558] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:20.140 08:03:11 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:20.397 08:03:12 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1933212 00:16:20.397 08:03:12 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:16:20.397 08:03:12 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:16:20.397 EAL: No free 2048 kB hugepages reported on node 1 00:16:21.336 08:03:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 6c1f8b5f-da28-4804-a9da-af55a5694a1a MY_SNAPSHOT 00:16:21.616 08:03:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=69e09710-e185-41cf-9cd0-f6546909a2ba 00:16:21.616 08:03:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 6c1f8b5f-da28-4804-a9da-af55a5694a1a 30 00:16:22.182 08:03:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 69e09710-e185-41cf-9cd0-f6546909a2ba MY_CLONE 00:16:22.456 08:03:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=95bbe92a-5862-42f8-b48d-6d1fa82f43f8 00:16:22.456 08:03:13 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 95bbe92a-5862-42f8-b48d-6d1fa82f43f8 00:16:23.021 08:03:14 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1933212 00:16:31.126 Initializing NVMe Controllers 00:16:31.126 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:16:31.126 Controller IO queue size 128, less than required. 00:16:31.126 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:31.126 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:16:31.126 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:16:31.126 Initialization complete. Launching workers. 
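With spdk_nvme_perf issuing random writes against the namespace from two cores (mask 0x18), the test hits the lvol metadata paths concurrently: snapshot the live volume, resize it, clone the snapshot, and inflate the clone (inflation allocates every cluster and decouples the clone from its snapshot). The same sequence, with the log's UUIDs as variables:

  snap=$(rpc.py bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)   # read-only snapshot of the busy lvol
  rpc.py bdev_lvol_resize "$lvol" 30                      # grow the origin to 30 MiB under I/O
  clone=$(rpc.py bdev_lvol_clone "$snap" MY_CLONE)        # thin clone backed by the snapshot
  rpc.py bdev_lvol_inflate "$clone"                       # allocate all clusters, drop the dependency

The perf report that follows shows both lcores sustaining I/O throughout.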
00:16:31.126 ======================================================== 00:16:31.126 Latency(us) 00:16:31.126 Device Information : IOPS MiB/s Average min max 00:16:31.126 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10631.80 41.53 12039.48 1292.20 67508.79 00:16:31.126 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10564.50 41.27 12121.98 2038.88 60963.25 00:16:31.126 ======================================================== 00:16:31.126 Total : 21196.30 82.80 12080.60 1292.20 67508.79 00:16:31.126 00:16:31.126 08:03:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:16:31.126 08:03:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 6c1f8b5f-da28-4804-a9da-af55a5694a1a 00:16:31.388 08:03:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 42f0585b-5b83-4abf-ab6b-03d85d846997 00:16:31.665 08:03:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:16:31.665 08:03:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:16:31.665 08:03:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:16:31.665 08:03:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:31.665 08:03:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:16:31.665 08:03:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:31.665 08:03:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:16:31.665 08:03:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:31.665 08:03:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:31.665 rmmod nvme_tcp 00:16:31.665 rmmod nvme_fabrics 00:16:31.665 rmmod nvme_keyring 00:16:31.665 08:03:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:31.665 08:03:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:16:31.665 08:03:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:16:31.665 08:03:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 1932789 ']' 00:16:31.665 08:03:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 1932789 00:16:31.665 08:03:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 1932789 ']' 00:16:31.665 08:03:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 1932789 00:16:31.665 08:03:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:16:31.665 08:03:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:31.665 08:03:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1932789 00:16:31.665 08:03:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:31.665 08:03:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:31.665 08:03:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1932789' 00:16:31.665 killing process with pid 1932789 00:16:31.665 08:03:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 1932789 00:16:31.665 08:03:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 1932789 00:16:31.923 08:03:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:31.923 
08:03:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:31.923 08:03:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:31.923 08:03:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:31.923 08:03:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:31.923 08:03:23 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:31.923 08:03:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:31.923 08:03:23 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:34.453 08:03:25 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:34.453 00:16:34.453 real 0m18.773s 00:16:34.453 user 1m4.386s 00:16:34.453 sys 0m5.433s 00:16:34.453 08:03:25 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:34.453 08:03:25 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:34.453 ************************************ 00:16:34.453 END TEST nvmf_lvol 00:16:34.453 ************************************ 00:16:34.453 08:03:25 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:34.453 08:03:25 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:16:34.453 08:03:25 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:34.453 08:03:25 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:34.453 08:03:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:34.453 ************************************ 00:16:34.453 START TEST nvmf_lvs_grow 00:16:34.453 ************************************ 00:16:34.453 08:03:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:16:34.453 * Looking for test storage... 
00:16:34.453 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:34.453 08:03:25 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:34.453 08:03:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:16:34.453 08:03:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:34.453 08:03:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:34.453 08:03:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:34.453 08:03:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:34.453 08:03:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:34.453 08:03:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:34.453 08:03:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:34.453 08:03:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:34.453 08:03:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:34.453 08:03:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:34.453 08:03:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:34.453 08:03:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:34.453 08:03:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:34.453 08:03:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:34.453 08:03:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:34.453 08:03:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:34.453 08:03:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:34.454 08:03:25 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:34.454 08:03:25 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:34.454 08:03:25 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:34.454 08:03:25 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:34.454 08:03:25 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:34.454 08:03:25 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:34.454 08:03:25 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:16:34.454 08:03:25 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:34.454 08:03:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:16:34.454 08:03:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:34.454 08:03:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:34.454 08:03:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:34.454 08:03:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:34.454 08:03:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:34.454 08:03:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:34.454 08:03:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:34.454 08:03:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:34.454 08:03:25 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:34.454 08:03:25 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:34.454 08:03:25 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:16:34.454 08:03:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:34.454 08:03:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:34.454 08:03:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:34.454 08:03:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:34.454 08:03:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:34.454 08:03:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:16:34.454 08:03:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:34.454 08:03:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:34.454 08:03:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:34.454 08:03:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:34.454 08:03:25 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:16:34.454 08:03:25 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:36.352 08:03:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:36.352 08:03:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:16:36.352 08:03:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:36.352 08:03:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:36.352 08:03:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:36.352 08:03:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:36.352 08:03:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:36.352 08:03:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:16:36.352 08:03:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:36.352 08:03:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:16:36.352 08:03:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:16:36.352 08:03:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:16:36.352 08:03:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:16:36.353 08:03:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:16:36.353 08:03:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:16:36.353 08:03:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:36.353 08:03:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:36.353 08:03:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:36.353 08:03:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:36.353 08:03:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:36.353 08:03:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:36.353 08:03:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:36.353 08:03:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:36.353 08:03:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:36.353 08:03:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:36.353 08:03:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:36.353 08:03:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:36.353 08:03:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:36.353 08:03:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:36.353 08:03:27 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:36.353 08:03:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:36.353 08:03:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:36.353 08:03:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:36.353 08:03:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:36.353 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:36.353 08:03:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:36.353 08:03:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:36.353 08:03:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:36.353 08:03:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:36.353 08:03:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:36.353 08:03:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:36.353 08:03:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:36.353 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:36.353 08:03:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:36.353 08:03:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:36.353 08:03:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:36.353 08:03:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:36.353 08:03:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:36.353 08:03:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:36.353 08:03:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:36.353 08:03:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:36.353 08:03:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:36.353 08:03:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:36.353 08:03:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:36.353 08:03:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:36.353 08:03:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:36.353 08:03:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:36.353 08:03:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:36.353 08:03:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:36.353 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:36.353 08:03:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:36.353 08:03:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:36.353 08:03:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:36.353 08:03:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:36.353 08:03:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:36.353 08:03:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:36.353 08:03:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:16:36.353 08:03:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:36.353 08:03:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:36.353 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:36.353 08:03:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:36.353 08:03:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:36.353 08:03:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:16:36.353 08:03:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:36.353 08:03:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:36.353 08:03:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:36.353 08:03:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:36.353 08:03:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:36.353 08:03:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:36.353 08:03:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:36.353 08:03:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:36.353 08:03:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:36.353 08:03:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:36.353 08:03:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:36.353 08:03:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:36.353 08:03:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:36.353 08:03:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:36.353 08:03:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:36.353 08:03:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:36.353 08:03:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:36.353 08:03:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:36.353 08:03:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:36.353 08:03:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:36.353 08:03:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:36.353 08:03:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:36.353 08:03:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:36.353 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:36.353 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.219 ms 00:16:36.353 00:16:36.353 --- 10.0.0.2 ping statistics --- 00:16:36.353 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:36.353 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:16:36.353 08:03:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:36.353 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:36.353 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms 00:16:36.353 00:16:36.353 --- 10.0.0.1 ping statistics --- 00:16:36.353 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:36.353 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:16:36.353 08:03:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:36.353 08:03:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:16:36.353 08:03:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:36.353 08:03:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:36.353 08:03:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:36.353 08:03:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:36.353 08:03:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:36.353 08:03:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:36.353 08:03:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:36.353 08:03:27 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:16:36.353 08:03:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:36.353 08:03:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:36.353 08:03:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:36.353 08:03:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=1936476 00:16:36.353 08:03:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:36.353 08:03:27 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 1936476 00:16:36.353 08:03:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 1936476 ']' 00:16:36.353 08:03:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:36.353 08:03:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:36.353 08:03:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:36.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:36.353 08:03:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:36.353 08:03:27 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:36.353 [2024-07-13 08:03:27.980340] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:16:36.353 [2024-07-13 08:03:27.980422] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:36.353 EAL: No free 2048 kB hugepages reported on node 1 00:16:36.353 [2024-07-13 08:03:28.043717] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:36.612 [2024-07-13 08:03:28.133097] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:36.612 [2024-07-13 08:03:28.133152] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:36.612 [2024-07-13 08:03:28.133166] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:36.612 [2024-07-13 08:03:28.133177] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:36.612 [2024-07-13 08:03:28.133187] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:36.612 [2024-07-13 08:03:28.133218] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:36.612 08:03:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:36.612 08:03:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:16:36.612 08:03:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:36.612 08:03:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:36.612 08:03:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:36.612 08:03:28 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:36.612 08:03:28 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:36.870 [2024-07-13 08:03:28.537247] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:36.870 08:03:28 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:16:36.870 08:03:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:16:36.870 08:03:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:36.870 08:03:28 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:36.870 ************************************ 00:16:36.870 START TEST lvs_grow_clean 00:16:36.870 ************************************ 00:16:36.870 08:03:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:16:36.870 08:03:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:16:36.870 08:03:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:16:36.870 08:03:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:16:36.870 08:03:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:16:36.870 08:03:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:16:36.870 08:03:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:16:36.870 08:03:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:36.870 08:03:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:36.870 08:03:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:37.434 08:03:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # 
aio_bdev=aio_bdev 00:16:37.434 08:03:28 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:16:37.693 08:03:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=ab560198-0cc6-46c7-be20-4fef31e042ff 00:16:37.693 08:03:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ab560198-0cc6-46c7-be20-4fef31e042ff 00:16:37.693 08:03:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:16:37.951 08:03:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:16:37.951 08:03:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:16:37.951 08:03:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u ab560198-0cc6-46c7-be20-4fef31e042ff lvol 150 00:16:38.208 08:03:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=c1d79bb3-c954-4865-bba2-5ee0198970ab 00:16:38.208 08:03:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:38.208 08:03:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:16:38.466 [2024-07-13 08:03:29.950204] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:16:38.466 [2024-07-13 08:03:29.950289] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:16:38.466 true 00:16:38.466 08:03:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ab560198-0cc6-46c7-be20-4fef31e042ff 00:16:38.466 08:03:29 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:16:38.723 08:03:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:16:38.723 08:03:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:16:38.981 08:03:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 c1d79bb3-c954-4865-bba2-5ee0198970ab 00:16:39.238 08:03:30 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:39.496 [2024-07-13 08:03:31.021509] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:39.496 08:03:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:39.754 08:03:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1936910 00:16:39.754 08:03:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:16:39.754 08:03:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:39.754 08:03:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1936910 /var/tmp/bdevperf.sock 00:16:39.754 08:03:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 1936910 ']' 00:16:39.754 08:03:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:39.754 08:03:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:39.754 08:03:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:39.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:39.754 08:03:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:39.754 08:03:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:16:39.754 [2024-07-13 08:03:31.327942] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:16:39.754 [2024-07-13 08:03:31.328017] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1936910 ] 00:16:39.754 EAL: No free 2048 kB hugepages reported on node 1 00:16:39.754 [2024-07-13 08:03:31.388362] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:39.754 [2024-07-13 08:03:31.479288] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:40.012 08:03:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:40.012 08:03:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:16:40.012 08:03:31 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:16:40.577 Nvme0n1 00:16:40.577 08:03:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:16:40.577 [ 00:16:40.577 { 00:16:40.577 "name": "Nvme0n1", 00:16:40.577 "aliases": [ 00:16:40.577 "c1d79bb3-c954-4865-bba2-5ee0198970ab" 00:16:40.577 ], 00:16:40.577 "product_name": "NVMe disk", 00:16:40.577 "block_size": 4096, 00:16:40.577 "num_blocks": 38912, 00:16:40.577 "uuid": "c1d79bb3-c954-4865-bba2-5ee0198970ab", 00:16:40.577 "assigned_rate_limits": { 00:16:40.577 "rw_ios_per_sec": 0, 00:16:40.577 "rw_mbytes_per_sec": 0, 00:16:40.577 "r_mbytes_per_sec": 0, 00:16:40.577 "w_mbytes_per_sec": 0 00:16:40.577 }, 00:16:40.577 "claimed": false, 00:16:40.577 "zoned": false, 00:16:40.577 "supported_io_types": { 00:16:40.577 "read": true, 00:16:40.577 "write": true, 00:16:40.577 "unmap": true, 00:16:40.577 "flush": true, 00:16:40.577 "reset": true, 00:16:40.577 "nvme_admin": true, 00:16:40.577 "nvme_io": true, 00:16:40.577 "nvme_io_md": false, 00:16:40.577 "write_zeroes": true, 00:16:40.577 "zcopy": false, 00:16:40.577 "get_zone_info": false, 00:16:40.577 "zone_management": false, 00:16:40.577 "zone_append": false, 00:16:40.577 "compare": true, 00:16:40.577 "compare_and_write": true, 00:16:40.577 "abort": true, 00:16:40.577 "seek_hole": false, 00:16:40.577 "seek_data": false, 00:16:40.577 "copy": true, 00:16:40.577 "nvme_iov_md": false 00:16:40.577 }, 00:16:40.577 "memory_domains": [ 00:16:40.577 { 00:16:40.577 "dma_device_id": "system", 00:16:40.577 "dma_device_type": 1 00:16:40.577 } 00:16:40.577 ], 00:16:40.577 "driver_specific": { 00:16:40.577 "nvme": [ 00:16:40.577 { 00:16:40.577 "trid": { 00:16:40.577 "trtype": "TCP", 00:16:40.577 "adrfam": "IPv4", 00:16:40.577 "traddr": "10.0.0.2", 00:16:40.577 "trsvcid": "4420", 00:16:40.577 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:16:40.577 }, 00:16:40.577 "ctrlr_data": { 00:16:40.578 "cntlid": 1, 00:16:40.578 "vendor_id": "0x8086", 00:16:40.578 "model_number": "SPDK bdev Controller", 00:16:40.578 "serial_number": "SPDK0", 00:16:40.578 "firmware_revision": "24.09", 00:16:40.578 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:40.578 "oacs": { 00:16:40.578 "security": 0, 00:16:40.578 "format": 0, 00:16:40.578 "firmware": 0, 00:16:40.578 "ns_manage": 0 00:16:40.578 }, 00:16:40.578 "multi_ctrlr": true, 00:16:40.578 "ana_reporting": false 00:16:40.578 }, 
00:16:40.578 "vs": { 00:16:40.578 "nvme_version": "1.3" 00:16:40.578 }, 00:16:40.578 "ns_data": { 00:16:40.578 "id": 1, 00:16:40.578 "can_share": true 00:16:40.578 } 00:16:40.578 } 00:16:40.578 ], 00:16:40.578 "mp_policy": "active_passive" 00:16:40.578 } 00:16:40.578 } 00:16:40.578 ] 00:16:40.578 08:03:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1936958 00:16:40.578 08:03:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:16:40.578 08:03:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:40.835 Running I/O for 10 seconds... 00:16:41.767 Latency(us) 00:16:41.767 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:41.767 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:41.767 Nvme0n1 : 1.00 14174.00 55.37 0.00 0.00 0.00 0.00 0.00 00:16:41.767 =================================================================================================================== 00:16:41.767 Total : 14174.00 55.37 0.00 0.00 0.00 0.00 0.00 00:16:41.767 00:16:42.697 08:03:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u ab560198-0cc6-46c7-be20-4fef31e042ff 00:16:42.698 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:42.698 Nvme0n1 : 2.00 14396.50 56.24 0.00 0.00 0.00 0.00 0.00 00:16:42.698 =================================================================================================================== 00:16:42.698 Total : 14396.50 56.24 0.00 0.00 0.00 0.00 0.00 00:16:42.698 00:16:42.955 true 00:16:42.955 08:03:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ab560198-0cc6-46c7-be20-4fef31e042ff 00:16:42.955 08:03:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:16:43.213 08:03:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:16:43.213 08:03:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:16:43.213 08:03:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1936958 00:16:43.779 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:43.779 Nvme0n1 : 3.00 14531.67 56.76 0.00 0.00 0.00 0.00 0.00 00:16:43.779 =================================================================================================================== 00:16:43.779 Total : 14531.67 56.76 0.00 0.00 0.00 0.00 0.00 00:16:43.779 00:16:44.710 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:44.710 Nvme0n1 : 4.00 14614.25 57.09 0.00 0.00 0.00 0.00 0.00 00:16:44.710 =================================================================================================================== 00:16:44.710 Total : 14614.25 57.09 0.00 0.00 0.00 0.00 0.00 00:16:44.710 00:16:46.083 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:46.083 Nvme0n1 : 5.00 14783.40 57.75 0.00 0.00 0.00 0.00 0.00 00:16:46.083 =================================================================================================================== 00:16:46.083 
Total : 14783.40 57.75 0.00 0.00 0.00 0.00 0.00 00:16:46.083 00:16:47.016 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:47.016 Nvme0n1 : 6.00 14830.67 57.93 0.00 0.00 0.00 0.00 0.00 00:16:47.016 =================================================================================================================== 00:16:47.016 Total : 14830.67 57.93 0.00 0.00 0.00 0.00 0.00 00:16:47.016 00:16:47.949 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:47.949 Nvme0n1 : 7.00 14946.14 58.38 0.00 0.00 0.00 0.00 0.00 00:16:47.949 =================================================================================================================== 00:16:47.949 Total : 14946.14 58.38 0.00 0.00 0.00 0.00 0.00 00:16:47.949 00:16:48.881 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:48.881 Nvme0n1 : 8.00 15008.75 58.63 0.00 0.00 0.00 0.00 0.00 00:16:48.881 =================================================================================================================== 00:16:48.881 Total : 15008.75 58.63 0.00 0.00 0.00 0.00 0.00 00:16:48.881 00:16:49.814 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:49.814 Nvme0n1 : 9.00 15029.44 58.71 0.00 0.00 0.00 0.00 0.00 00:16:49.814 =================================================================================================================== 00:16:49.814 Total : 15029.44 58.71 0.00 0.00 0.00 0.00 0.00 00:16:49.814 00:16:50.746 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:50.746 Nvme0n1 : 10.00 15090.20 58.95 0.00 0.00 0.00 0.00 0.00 00:16:50.746 =================================================================================================================== 00:16:50.746 Total : 15090.20 58.95 0.00 0.00 0.00 0.00 0.00 00:16:50.746 00:16:50.746 00:16:50.746 Latency(us) 00:16:50.746 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:50.746 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:50.746 Nvme0n1 : 10.00 15096.84 58.97 0.00 0.00 8473.80 4636.07 18835.53 00:16:50.746 =================================================================================================================== 00:16:50.746 Total : 15096.84 58.97 0.00 0.00 8473.80 4636.07 18835.53 00:16:50.746 0 00:16:50.747 08:03:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1936910 00:16:50.747 08:03:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 1936910 ']' 00:16:50.747 08:03:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 1936910 00:16:50.747 08:03:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:16:50.747 08:03:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:50.747 08:03:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1936910 00:16:51.004 08:03:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:51.004 08:03:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:51.004 08:03:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1936910' 00:16:51.004 killing process with pid 1936910 00:16:51.004 08:03:42 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 1936910 00:16:51.004 Received shutdown signal, test time was about 10.000000 seconds 00:16:51.004 00:16:51.004 Latency(us) 00:16:51.004 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:51.004 =================================================================================================================== 00:16:51.004 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:51.004 08:03:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 1936910 00:16:51.004 08:03:42 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:51.567 08:03:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:16:51.567 08:03:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ab560198-0cc6-46c7-be20-4fef31e042ff 00:16:51.567 08:03:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:16:51.824 08:03:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:16:51.824 08:03:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:16:51.824 08:03:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:16:52.082 [2024-07-13 08:03:43.786648] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:16:52.339 08:03:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ab560198-0cc6-46c7-be20-4fef31e042ff 00:16:52.339 08:03:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:16:52.339 08:03:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ab560198-0cc6-46c7-be20-4fef31e042ff 00:16:52.339 08:03:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:52.339 08:03:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:52.339 08:03:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:52.339 08:03:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:52.339 08:03:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:52.339 08:03:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:52.339 08:03:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:52.339 08:03:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:16:52.339 08:03:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ab560198-0cc6-46c7-be20-4fef31e042ff 00:16:52.339 request: 00:16:52.339 { 00:16:52.339 "uuid": "ab560198-0cc6-46c7-be20-4fef31e042ff", 00:16:52.339 "method": "bdev_lvol_get_lvstores", 00:16:52.339 "req_id": 1 00:16:52.339 } 00:16:52.339 Got JSON-RPC error response 00:16:52.339 response: 00:16:52.339 { 00:16:52.339 "code": -19, 00:16:52.339 "message": "No such device" 00:16:52.339 } 00:16:52.596 08:03:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:16:52.596 08:03:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:52.596 08:03:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:52.596 08:03:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:52.596 08:03:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:52.596 aio_bdev 00:16:52.596 08:03:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev c1d79bb3-c954-4865-bba2-5ee0198970ab 00:16:52.596 08:03:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=c1d79bb3-c954-4865-bba2-5ee0198970ab 00:16:52.596 08:03:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:52.596 08:03:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:16:52.596 08:03:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:52.596 08:03:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:52.596 08:03:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:52.853 08:03:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b c1d79bb3-c954-4865-bba2-5ee0198970ab -t 2000 00:16:53.110 [ 00:16:53.110 { 00:16:53.110 "name": "c1d79bb3-c954-4865-bba2-5ee0198970ab", 00:16:53.110 "aliases": [ 00:16:53.110 "lvs/lvol" 00:16:53.110 ], 00:16:53.110 "product_name": "Logical Volume", 00:16:53.110 "block_size": 4096, 00:16:53.110 "num_blocks": 38912, 00:16:53.110 "uuid": "c1d79bb3-c954-4865-bba2-5ee0198970ab", 00:16:53.110 "assigned_rate_limits": { 00:16:53.110 "rw_ios_per_sec": 0, 00:16:53.110 "rw_mbytes_per_sec": 0, 00:16:53.110 "r_mbytes_per_sec": 0, 00:16:53.110 "w_mbytes_per_sec": 0 00:16:53.110 }, 00:16:53.110 "claimed": false, 00:16:53.110 "zoned": false, 00:16:53.110 "supported_io_types": { 00:16:53.110 "read": true, 00:16:53.110 "write": true, 00:16:53.110 "unmap": true, 00:16:53.110 "flush": false, 00:16:53.110 "reset": true, 00:16:53.110 "nvme_admin": false, 00:16:53.110 "nvme_io": false, 00:16:53.110 
"nvme_io_md": false, 00:16:53.110 "write_zeroes": true, 00:16:53.110 "zcopy": false, 00:16:53.110 "get_zone_info": false, 00:16:53.110 "zone_management": false, 00:16:53.110 "zone_append": false, 00:16:53.110 "compare": false, 00:16:53.110 "compare_and_write": false, 00:16:53.110 "abort": false, 00:16:53.110 "seek_hole": true, 00:16:53.110 "seek_data": true, 00:16:53.110 "copy": false, 00:16:53.110 "nvme_iov_md": false 00:16:53.110 }, 00:16:53.110 "driver_specific": { 00:16:53.110 "lvol": { 00:16:53.110 "lvol_store_uuid": "ab560198-0cc6-46c7-be20-4fef31e042ff", 00:16:53.110 "base_bdev": "aio_bdev", 00:16:53.110 "thin_provision": false, 00:16:53.110 "num_allocated_clusters": 38, 00:16:53.110 "snapshot": false, 00:16:53.110 "clone": false, 00:16:53.110 "esnap_clone": false 00:16:53.110 } 00:16:53.110 } 00:16:53.110 } 00:16:53.110 ] 00:16:53.110 08:03:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:16:53.110 08:03:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ab560198-0cc6-46c7-be20-4fef31e042ff 00:16:53.110 08:03:44 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:16:53.368 08:03:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:16:53.368 08:03:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ab560198-0cc6-46c7-be20-4fef31e042ff 00:16:53.368 08:03:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:16:53.625 08:03:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:16:53.625 08:03:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete c1d79bb3-c954-4865-bba2-5ee0198970ab 00:16:53.883 08:03:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ab560198-0cc6-46c7-be20-4fef31e042ff 00:16:54.140 08:03:45 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:16:54.398 08:03:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:54.656 00:16:54.656 real 0m17.550s 00:16:54.656 user 0m16.978s 00:16:54.656 sys 0m1.931s 00:16:54.656 08:03:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:54.656 08:03:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:16:54.656 ************************************ 00:16:54.656 END TEST lvs_grow_clean 00:16:54.656 ************************************ 00:16:54.656 08:03:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:16:54.656 08:03:46 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:16:54.656 08:03:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:54.656 08:03:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:16:54.656 08:03:46 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:16:54.656 ************************************ 00:16:54.656 START TEST lvs_grow_dirty 00:16:54.656 ************************************ 00:16:54.656 08:03:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:16:54.656 08:03:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:16:54.656 08:03:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:16:54.656 08:03:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:16:54.656 08:03:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:16:54.656 08:03:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:16:54.656 08:03:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:16:54.656 08:03:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:54.656 08:03:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:54.656 08:03:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:54.913 08:03:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:16:54.913 08:03:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:16:55.170 08:03:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=f4fa9169-668a-4390-9ded-2b70e6cc8a50 00:16:55.170 08:03:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f4fa9169-668a-4390-9ded-2b70e6cc8a50 00:16:55.170 08:03:46 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:16:55.428 08:03:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:16:55.428 08:03:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:16:55.428 08:03:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u f4fa9169-668a-4390-9ded-2b70e6cc8a50 lvol 150 00:16:55.685 08:03:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=b2268717-ddd7-4129-98b2-e3bc618f03c9 00:16:55.685 08:03:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:55.685 08:03:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:16:55.943 
[2024-07-13 08:03:47.567338] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:16:55.943 [2024-07-13 08:03:47.567419] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:16:55.943 true 00:16:55.943 08:03:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f4fa9169-668a-4390-9ded-2b70e6cc8a50 00:16:55.943 08:03:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:16:56.200 08:03:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:16:56.200 08:03:47 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:16:56.458 08:03:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 b2268717-ddd7-4129-98b2-e3bc618f03c9 00:16:56.716 08:03:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:56.973 [2024-07-13 08:03:48.654593] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:56.973 08:03:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:57.231 08:03:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1938983 00:16:57.231 08:03:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:16:57.231 08:03:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:57.231 08:03:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1938983 /var/tmp/bdevperf.sock 00:16:57.231 08:03:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 1938983 ']' 00:16:57.231 08:03:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:57.231 08:03:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:57.231 08:03:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:57.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:16:57.231 08:03:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:57.231 08:03:48 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:16:57.231 [2024-07-13 08:03:48.959457] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:16:57.231 [2024-07-13 08:03:48.959527] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1938983 ] 00:16:57.489 EAL: No free 2048 kB hugepages reported on node 1 00:16:57.489 [2024-07-13 08:03:49.022432] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:57.489 [2024-07-13 08:03:49.113571] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:57.748 08:03:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:57.748 08:03:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:16:57.748 08:03:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:16:58.024 Nvme0n1 00:16:58.024 08:03:49 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:16:58.590 [ 00:16:58.590 { 00:16:58.590 "name": "Nvme0n1", 00:16:58.590 "aliases": [ 00:16:58.590 "b2268717-ddd7-4129-98b2-e3bc618f03c9" 00:16:58.590 ], 00:16:58.590 "product_name": "NVMe disk", 00:16:58.590 "block_size": 4096, 00:16:58.590 "num_blocks": 38912, 00:16:58.590 "uuid": "b2268717-ddd7-4129-98b2-e3bc618f03c9", 00:16:58.590 "assigned_rate_limits": { 00:16:58.590 "rw_ios_per_sec": 0, 00:16:58.590 "rw_mbytes_per_sec": 0, 00:16:58.590 "r_mbytes_per_sec": 0, 00:16:58.590 "w_mbytes_per_sec": 0 00:16:58.590 }, 00:16:58.590 "claimed": false, 00:16:58.590 "zoned": false, 00:16:58.590 "supported_io_types": { 00:16:58.590 "read": true, 00:16:58.590 "write": true, 00:16:58.590 "unmap": true, 00:16:58.590 "flush": true, 00:16:58.590 "reset": true, 00:16:58.590 "nvme_admin": true, 00:16:58.590 "nvme_io": true, 00:16:58.590 "nvme_io_md": false, 00:16:58.590 "write_zeroes": true, 00:16:58.590 "zcopy": false, 00:16:58.590 "get_zone_info": false, 00:16:58.590 "zone_management": false, 00:16:58.590 "zone_append": false, 00:16:58.590 "compare": true, 00:16:58.590 "compare_and_write": true, 00:16:58.590 "abort": true, 00:16:58.590 "seek_hole": false, 00:16:58.590 "seek_data": false, 00:16:58.590 "copy": true, 00:16:58.590 "nvme_iov_md": false 00:16:58.590 }, 00:16:58.590 "memory_domains": [ 00:16:58.590 { 00:16:58.590 "dma_device_id": "system", 00:16:58.590 "dma_device_type": 1 00:16:58.590 } 00:16:58.590 ], 00:16:58.590 "driver_specific": { 00:16:58.590 "nvme": [ 00:16:58.590 { 00:16:58.590 "trid": { 00:16:58.590 "trtype": "TCP", 00:16:58.590 "adrfam": "IPv4", 00:16:58.590 "traddr": "10.0.0.2", 00:16:58.590 "trsvcid": "4420", 00:16:58.590 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:16:58.590 }, 00:16:58.590 "ctrlr_data": { 00:16:58.590 "cntlid": 1, 00:16:58.590 "vendor_id": "0x8086", 00:16:58.590 "model_number": "SPDK bdev Controller", 00:16:58.590 "serial_number": "SPDK0", 
00:16:58.590 "firmware_revision": "24.09", 00:16:58.590 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:58.590 "oacs": { 00:16:58.590 "security": 0, 00:16:58.590 "format": 0, 00:16:58.590 "firmware": 0, 00:16:58.590 "ns_manage": 0 00:16:58.590 }, 00:16:58.590 "multi_ctrlr": true, 00:16:58.590 "ana_reporting": false 00:16:58.590 }, 00:16:58.590 "vs": { 00:16:58.590 "nvme_version": "1.3" 00:16:58.590 }, 00:16:58.590 "ns_data": { 00:16:58.590 "id": 1, 00:16:58.590 "can_share": true 00:16:58.590 } 00:16:58.590 } 00:16:58.590 ], 00:16:58.590 "mp_policy": "active_passive" 00:16:58.590 } 00:16:58.590 } 00:16:58.590 ] 00:16:58.590 08:03:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1939119 00:16:58.590 08:03:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:58.590 08:03:50 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:16:58.590 Running I/O for 10 seconds... 00:16:59.523 Latency(us) 00:16:59.523 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:59.523 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:59.523 Nvme0n1 : 1.00 14408.00 56.28 0.00 0.00 0.00 0.00 0.00 00:16:59.523 =================================================================================================================== 00:16:59.523 Total : 14408.00 56.28 0.00 0.00 0.00 0.00 0.00 00:16:59.523 00:17:00.454 08:03:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u f4fa9169-668a-4390-9ded-2b70e6cc8a50 00:17:00.454 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:00.454 Nvme0n1 : 2.00 14720.50 57.50 0.00 0.00 0.00 0.00 0.00 00:17:00.454 =================================================================================================================== 00:17:00.454 Total : 14720.50 57.50 0.00 0.00 0.00 0.00 0.00 00:17:00.454 00:17:00.711 true 00:17:00.711 08:03:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f4fa9169-668a-4390-9ded-2b70e6cc8a50 00:17:00.711 08:03:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:17:00.969 08:03:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:17:00.969 08:03:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:17:00.969 08:03:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1939119 00:17:01.534 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:01.534 Nvme0n1 : 3.00 14772.33 57.70 0.00 0.00 0.00 0.00 0.00 00:17:01.534 =================================================================================================================== 00:17:01.534 Total : 14772.33 57.70 0.00 0.00 0.00 0.00 0.00 00:17:01.534 00:17:02.473 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:02.473 Nvme0n1 : 4.00 14783.75 57.75 0.00 0.00 0.00 0.00 0.00 00:17:02.473 =================================================================================================================== 00:17:02.473 Total : 14783.75 57.75 0.00 
0.00 0.00 0.00 0.00 00:17:02.473 00:17:03.845 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:03.845 Nvme0n1 : 5.00 14932.80 58.33 0.00 0.00 0.00 0.00 0.00 00:17:03.845 =================================================================================================================== 00:17:03.845 Total : 14932.80 58.33 0.00 0.00 0.00 0.00 0.00 00:17:03.845 00:17:04.779 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:04.779 Nvme0n1 : 6.00 14942.67 58.37 0.00 0.00 0.00 0.00 0.00 00:17:04.779 =================================================================================================================== 00:17:04.779 Total : 14942.67 58.37 0.00 0.00 0.00 0.00 0.00 00:17:04.779 00:17:05.713 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:05.713 Nvme0n1 : 7.00 14951.00 58.40 0.00 0.00 0.00 0.00 0.00 00:17:05.713 =================================================================================================================== 00:17:05.713 Total : 14951.00 58.40 0.00 0.00 0.00 0.00 0.00 00:17:05.713 00:17:06.645 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:06.645 Nvme0n1 : 8.00 14966.00 58.46 0.00 0.00 0.00 0.00 0.00 00:17:06.645 =================================================================================================================== 00:17:06.645 Total : 14966.00 58.46 0.00 0.00 0.00 0.00 0.00 00:17:06.645 00:17:07.576 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:07.576 Nvme0n1 : 9.00 14998.00 58.59 0.00 0.00 0.00 0.00 0.00 00:17:07.576 =================================================================================================================== 00:17:07.576 Total : 14998.00 58.59 0.00 0.00 0.00 0.00 0.00 00:17:07.576 00:17:08.508 00:17:08.508 Latency(us) 00:17:08.508 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:08.508 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:08.508 Nvme0n1 : 10.00 15031.22 58.72 0.00 0.00 8510.62 4927.34 16699.54 00:17:08.508 =================================================================================================================== 00:17:08.508 Total : 15031.22 58.72 0.00 0.00 8510.62 4927.34 16699.54 00:17:08.508 0 00:17:08.508 08:04:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1938983 00:17:08.508 08:04:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 1938983 ']' 00:17:08.508 08:04:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 1938983 00:17:08.508 08:04:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:17:08.508 08:04:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:08.508 08:04:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1938983 00:17:08.508 08:04:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:08.508 08:04:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:08.508 08:04:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1938983' 00:17:08.508 killing process with pid 1938983 00:17:08.508 08:04:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@967 -- # kill 1938983 00:17:08.508 Received shutdown signal, test time was about 10.000000 seconds 00:17:08.508 00:17:08.508 Latency(us) 00:17:08.508 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:08.508 =================================================================================================================== 00:17:08.508 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:08.508 08:04:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 1938983 00:17:08.766 08:04:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:09.023 08:04:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:09.280 08:04:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f4fa9169-668a-4390-9ded-2b70e6cc8a50 00:17:09.280 08:04:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:17:09.538 08:04:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:17:09.538 08:04:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:17:09.538 08:04:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1936476 00:17:09.538 08:04:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1936476 00:17:09.538 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1936476 Killed "${NVMF_APP[@]}" "$@" 00:17:09.538 08:04:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:17:09.538 08:04:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:17:09.538 08:04:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:09.538 08:04:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:09.538 08:04:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:09.538 08:04:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=1940442 00:17:09.538 08:04:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:09.538 08:04:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 1940442 00:17:09.538 08:04:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 1940442 ']' 00:17:09.538 08:04:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:09.538 08:04:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:09.538 08:04:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:09.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:09.538 08:04:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:09.538 08:04:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:09.796 [2024-07-13 08:04:01.287138] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:17:09.796 [2024-07-13 08:04:01.287235] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:09.796 EAL: No free 2048 kB hugepages reported on node 1 00:17:09.796 [2024-07-13 08:04:01.351897] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:09.796 [2024-07-13 08:04:01.439288] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:09.796 [2024-07-13 08:04:01.439352] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:09.796 [2024-07-13 08:04:01.439366] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:09.796 [2024-07-13 08:04:01.439377] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:09.796 [2024-07-13 08:04:01.439400] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:09.796 [2024-07-13 08:04:01.439428] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:10.053 08:04:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:10.053 08:04:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:17:10.053 08:04:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:10.053 08:04:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:10.053 08:04:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:10.053 08:04:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:10.053 08:04:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:10.311 [2024-07-13 08:04:01.810107] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:17:10.311 [2024-07-13 08:04:01.810266] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:17:10.311 [2024-07-13 08:04:01.810322] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:17:10.311 08:04:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:17:10.311 08:04:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev b2268717-ddd7-4129-98b2-e3bc618f03c9 00:17:10.311 08:04:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=b2268717-ddd7-4129-98b2-e3bc618f03c9 00:17:10.311 08:04:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:10.311 08:04:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:17:10.311 08:04:01 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:10.311 08:04:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:10.311 08:04:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:10.570 08:04:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b b2268717-ddd7-4129-98b2-e3bc618f03c9 -t 2000 00:17:10.829 [ 00:17:10.829 { 00:17:10.829 "name": "b2268717-ddd7-4129-98b2-e3bc618f03c9", 00:17:10.829 "aliases": [ 00:17:10.829 "lvs/lvol" 00:17:10.829 ], 00:17:10.829 "product_name": "Logical Volume", 00:17:10.829 "block_size": 4096, 00:17:10.829 "num_blocks": 38912, 00:17:10.829 "uuid": "b2268717-ddd7-4129-98b2-e3bc618f03c9", 00:17:10.829 "assigned_rate_limits": { 00:17:10.829 "rw_ios_per_sec": 0, 00:17:10.829 "rw_mbytes_per_sec": 0, 00:17:10.829 "r_mbytes_per_sec": 0, 00:17:10.829 "w_mbytes_per_sec": 0 00:17:10.829 }, 00:17:10.829 "claimed": false, 00:17:10.829 "zoned": false, 00:17:10.829 "supported_io_types": { 00:17:10.829 "read": true, 00:17:10.829 "write": true, 00:17:10.829 "unmap": true, 00:17:10.829 "flush": false, 00:17:10.829 "reset": true, 00:17:10.829 "nvme_admin": false, 00:17:10.829 "nvme_io": false, 00:17:10.829 "nvme_io_md": false, 00:17:10.829 "write_zeroes": true, 00:17:10.829 "zcopy": false, 00:17:10.829 "get_zone_info": false, 00:17:10.829 "zone_management": false, 00:17:10.829 "zone_append": false, 00:17:10.829 "compare": false, 00:17:10.829 "compare_and_write": false, 00:17:10.829 "abort": false, 00:17:10.829 "seek_hole": true, 00:17:10.829 "seek_data": true, 00:17:10.829 "copy": false, 00:17:10.829 "nvme_iov_md": false 00:17:10.829 }, 00:17:10.829 "driver_specific": { 00:17:10.829 "lvol": { 00:17:10.829 "lvol_store_uuid": "f4fa9169-668a-4390-9ded-2b70e6cc8a50", 00:17:10.829 "base_bdev": "aio_bdev", 00:17:10.829 "thin_provision": false, 00:17:10.829 "num_allocated_clusters": 38, 00:17:10.829 "snapshot": false, 00:17:10.829 "clone": false, 00:17:10.829 "esnap_clone": false 00:17:10.829 } 00:17:10.829 } 00:17:10.829 } 00:17:10.829 ] 00:17:10.829 08:04:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:17:10.829 08:04:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f4fa9169-668a-4390-9ded-2b70e6cc8a50 00:17:10.829 08:04:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:17:11.088 08:04:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:17:11.088 08:04:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f4fa9169-668a-4390-9ded-2b70e6cc8a50 00:17:11.088 08:04:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:17:11.351 08:04:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:17:11.351 08:04:02 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:11.608 
[2024-07-13 08:04:03.143301] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:17:11.608 08:04:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f4fa9169-668a-4390-9ded-2b70e6cc8a50 00:17:11.608 08:04:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:17:11.608 08:04:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f4fa9169-668a-4390-9ded-2b70e6cc8a50 00:17:11.608 08:04:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:11.608 08:04:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:11.608 08:04:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:11.608 08:04:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:11.608 08:04:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:11.608 08:04:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:11.608 08:04:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:11.608 08:04:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:11.608 08:04:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f4fa9169-668a-4390-9ded-2b70e6cc8a50 00:17:11.865 request: 00:17:11.865 { 00:17:11.865 "uuid": "f4fa9169-668a-4390-9ded-2b70e6cc8a50", 00:17:11.865 "method": "bdev_lvol_get_lvstores", 00:17:11.865 "req_id": 1 00:17:11.865 } 00:17:11.865 Got JSON-RPC error response 00:17:11.865 response: 00:17:11.865 { 00:17:11.865 "code": -19, 00:17:11.865 "message": "No such device" 00:17:11.865 } 00:17:11.865 08:04:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:17:11.865 08:04:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:11.865 08:04:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:11.865 08:04:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:11.865 08:04:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:12.123 aio_bdev 00:17:12.123 08:04:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev b2268717-ddd7-4129-98b2-e3bc618f03c9 00:17:12.123 08:04:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=b2268717-ddd7-4129-98b2-e3bc618f03c9 00:17:12.123 
08:04:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:12.123 08:04:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:17:12.123 08:04:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:12.123 08:04:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:12.123 08:04:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:12.380 08:04:03 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b b2268717-ddd7-4129-98b2-e3bc618f03c9 -t 2000 00:17:12.638 [ 00:17:12.638 { 00:17:12.638 "name": "b2268717-ddd7-4129-98b2-e3bc618f03c9", 00:17:12.638 "aliases": [ 00:17:12.638 "lvs/lvol" 00:17:12.638 ], 00:17:12.638 "product_name": "Logical Volume", 00:17:12.638 "block_size": 4096, 00:17:12.638 "num_blocks": 38912, 00:17:12.638 "uuid": "b2268717-ddd7-4129-98b2-e3bc618f03c9", 00:17:12.638 "assigned_rate_limits": { 00:17:12.638 "rw_ios_per_sec": 0, 00:17:12.638 "rw_mbytes_per_sec": 0, 00:17:12.638 "r_mbytes_per_sec": 0, 00:17:12.638 "w_mbytes_per_sec": 0 00:17:12.638 }, 00:17:12.638 "claimed": false, 00:17:12.638 "zoned": false, 00:17:12.638 "supported_io_types": { 00:17:12.638 "read": true, 00:17:12.638 "write": true, 00:17:12.638 "unmap": true, 00:17:12.638 "flush": false, 00:17:12.638 "reset": true, 00:17:12.638 "nvme_admin": false, 00:17:12.638 "nvme_io": false, 00:17:12.638 "nvme_io_md": false, 00:17:12.638 "write_zeroes": true, 00:17:12.638 "zcopy": false, 00:17:12.638 "get_zone_info": false, 00:17:12.638 "zone_management": false, 00:17:12.638 "zone_append": false, 00:17:12.638 "compare": false, 00:17:12.638 "compare_and_write": false, 00:17:12.638 "abort": false, 00:17:12.638 "seek_hole": true, 00:17:12.638 "seek_data": true, 00:17:12.638 "copy": false, 00:17:12.638 "nvme_iov_md": false 00:17:12.638 }, 00:17:12.638 "driver_specific": { 00:17:12.638 "lvol": { 00:17:12.638 "lvol_store_uuid": "f4fa9169-668a-4390-9ded-2b70e6cc8a50", 00:17:12.638 "base_bdev": "aio_bdev", 00:17:12.638 "thin_provision": false, 00:17:12.638 "num_allocated_clusters": 38, 00:17:12.638 "snapshot": false, 00:17:12.638 "clone": false, 00:17:12.638 "esnap_clone": false 00:17:12.638 } 00:17:12.638 } 00:17:12.638 } 00:17:12.638 ] 00:17:12.638 08:04:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:17:12.638 08:04:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f4fa9169-668a-4390-9ded-2b70e6cc8a50 00:17:12.638 08:04:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:17:12.895 08:04:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:17:12.895 08:04:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f4fa9169-668a-4390-9ded-2b70e6cc8a50 00:17:12.895 08:04:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:17:13.153 08:04:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # 
(( data_clusters == 99 )) 00:17:13.153 08:04:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete b2268717-ddd7-4129-98b2-e3bc618f03c9 00:17:13.411 08:04:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f4fa9169-668a-4390-9ded-2b70e6cc8a50 00:17:13.667 08:04:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:13.925 08:04:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:13.925 00:17:13.925 real 0m19.365s 00:17:13.925 user 0m48.966s 00:17:13.925 sys 0m4.677s 00:17:13.925 08:04:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:13.925 08:04:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:13.925 ************************************ 00:17:13.925 END TEST lvs_grow_dirty 00:17:13.925 ************************************ 00:17:13.925 08:04:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:17:13.925 08:04:05 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:17:13.925 08:04:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:17:13.925 08:04:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:17:13.925 08:04:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:17:13.925 08:04:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:17:13.925 08:04:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:17:13.925 08:04:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:17:13.925 08:04:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:17:13.925 08:04:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:13.925 nvmf_trace.0 00:17:13.925 08:04:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:17:13.925 08:04:05 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:17:13.925 08:04:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:13.925 08:04:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:17:13.925 08:04:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:13.925 08:04:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:17:13.925 08:04:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:13.925 08:04:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:13.925 rmmod nvme_tcp 00:17:13.925 rmmod nvme_fabrics 00:17:14.183 rmmod nvme_keyring 00:17:14.183 08:04:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:14.183 08:04:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:17:14.183 08:04:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:17:14.183 08:04:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 
1940442 ']' 00:17:14.183 08:04:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 1940442 00:17:14.183 08:04:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 1940442 ']' 00:17:14.183 08:04:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 1940442 00:17:14.183 08:04:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:17:14.183 08:04:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:14.183 08:04:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1940442 00:17:14.183 08:04:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:14.183 08:04:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:14.183 08:04:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1940442' 00:17:14.183 killing process with pid 1940442 00:17:14.183 08:04:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 1940442 00:17:14.183 08:04:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 1940442 00:17:14.441 08:04:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:14.441 08:04:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:14.441 08:04:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:14.441 08:04:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:14.441 08:04:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:14.441 08:04:05 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:14.441 08:04:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:14.441 08:04:05 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:16.356 08:04:07 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:16.356 00:17:16.356 real 0m42.236s 00:17:16.356 user 1m11.719s 00:17:16.356 sys 0m8.462s 00:17:16.356 08:04:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:16.356 08:04:07 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:16.356 ************************************ 00:17:16.356 END TEST nvmf_lvs_grow 00:17:16.356 ************************************ 00:17:16.356 08:04:08 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:16.356 08:04:08 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:17:16.356 08:04:08 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:16.356 08:04:08 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:16.356 08:04:08 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:16.356 ************************************ 00:17:16.356 START TEST nvmf_bdev_io_wait 00:17:16.356 ************************************ 00:17:16.356 08:04:08 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:17:16.356 * Looking for test storage... 
00:17:16.356 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:16.356 08:04:08 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:16.356 08:04:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:17:16.356 08:04:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:16.356 08:04:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:16.356 08:04:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:16.356 08:04:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:16.356 08:04:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:16.356 08:04:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:16.356 08:04:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:16.356 08:04:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:16.356 08:04:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:16.356 08:04:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:16.615 08:04:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:16.615 08:04:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:16.615 08:04:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:16.615 08:04:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:16.615 08:04:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:16.615 08:04:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:16.615 08:04:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:16.615 08:04:08 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:16.615 08:04:08 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:16.615 08:04:08 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:16.615 08:04:08 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:16.615 08:04:08 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:16.615 08:04:08 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:16.615 08:04:08 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:17:16.615 08:04:08 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:16.615 08:04:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:17:16.615 08:04:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:16.615 08:04:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:16.615 08:04:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:16.615 08:04:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:16.615 08:04:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:16.615 08:04:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:16.615 08:04:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:16.615 08:04:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:16.615 08:04:08 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:16.615 08:04:08 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:16.615 08:04:08 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:17:16.615 08:04:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:16.615 08:04:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:16.615 08:04:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:16.615 08:04:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:16.615 08:04:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:16.615 08:04:08 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:16.615 08:04:08 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:16.615 08:04:08 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:16.615 08:04:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:16.615 08:04:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:16.615 08:04:08 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:17:16.615 08:04:08 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:18.567 08:04:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:18.568 08:04:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:17:18.568 08:04:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:18.568 08:04:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:18.568 08:04:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:18.568 08:04:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:18.568 08:04:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:18.568 08:04:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:17:18.568 08:04:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:18.568 08:04:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:17:18.568 08:04:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:17:18.568 08:04:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:17:18.568 08:04:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:17:18.568 08:04:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:17:18.568 08:04:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:17:18.568 08:04:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:18.568 08:04:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:18.568 08:04:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:18.568 08:04:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:18.568 08:04:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:18.568 08:04:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:18.568 08:04:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:18.568 08:04:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:18.568 08:04:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:18.568 08:04:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:18.568 08:04:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:18.568 08:04:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:18.568 08:04:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # 
[[ tcp == rdma ]] 00:17:18.568 08:04:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:18.568 08:04:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:18.568 08:04:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:18.568 08:04:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:18.568 08:04:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:18.568 08:04:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:18.568 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:18.568 08:04:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:18.568 08:04:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:18.568 08:04:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:18.568 08:04:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:18.568 08:04:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:18.568 08:04:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:18.568 08:04:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:18.568 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:18.568 08:04:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:18.568 08:04:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:18.568 08:04:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:18.568 08:04:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:18.568 08:04:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:18.568 08:04:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:18.568 08:04:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:18.568 08:04:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:18.568 08:04:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:18.568 08:04:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:18.568 08:04:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:18.568 08:04:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:18.568 08:04:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:18.568 08:04:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:18.568 08:04:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:18.568 08:04:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:18.568 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:18.568 08:04:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:18.568 08:04:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:18.568 08:04:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:18.568 08:04:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:17:18.568 08:04:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:18.568 08:04:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:18.568 08:04:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:18.568 08:04:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:18.568 08:04:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:18.568 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:18.568 08:04:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:18.568 08:04:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:18.568 08:04:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:17:18.568 08:04:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:18.568 08:04:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:18.568 08:04:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:18.568 08:04:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:18.568 08:04:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:18.568 08:04:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:18.568 08:04:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:18.568 08:04:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:18.568 08:04:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:18.568 08:04:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:18.568 08:04:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:18.568 08:04:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:18.568 08:04:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:18.568 08:04:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:18.568 08:04:09 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:18.568 08:04:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:18.568 08:04:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:18.568 08:04:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:18.568 08:04:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:18.568 08:04:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:18.568 08:04:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:18.568 08:04:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:18.568 08:04:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:18.568 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:18.568 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.194 ms 00:17:18.568 00:17:18.568 --- 10.0.0.2 ping statistics --- 00:17:18.568 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:18.568 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:17:18.568 08:04:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:18.568 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:18.568 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:17:18.568 00:17:18.568 --- 10.0.0.1 ping statistics --- 00:17:18.568 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:18.568 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:17:18.568 08:04:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:18.568 08:04:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:17:18.568 08:04:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:18.568 08:04:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:18.568 08:04:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:18.568 08:04:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:18.568 08:04:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:18.568 08:04:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:18.568 08:04:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:18.568 08:04:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:17:18.568 08:04:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:18.568 08:04:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:18.568 08:04:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:18.568 08:04:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=1942964 00:17:18.568 08:04:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:17:18.568 08:04:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 1942964 00:17:18.568 08:04:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 1942964 ']' 00:17:18.568 08:04:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:18.568 08:04:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:18.568 08:04:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:18.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:18.568 08:04:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:18.568 08:04:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:18.568 [2024-07-13 08:04:10.209598] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:17:18.568 [2024-07-13 08:04:10.209700] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:18.568 EAL: No free 2048 kB hugepages reported on node 1 00:17:18.568 [2024-07-13 08:04:10.275012] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:18.827 [2024-07-13 08:04:10.363940] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:18.827 [2024-07-13 08:04:10.363992] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:18.827 [2024-07-13 08:04:10.364016] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:18.827 [2024-07-13 08:04:10.364026] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:18.827 [2024-07-13 08:04:10.364036] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:18.827 [2024-07-13 08:04:10.364246] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:18.827 [2024-07-13 08:04:10.364311] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:18.827 [2024-07-13 08:04:10.364377] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:18.827 [2024-07-13 08:04:10.364379] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:18.827 08:04:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:18.827 08:04:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:17:18.827 08:04:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:18.827 08:04:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:18.827 08:04:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:18.827 08:04:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:18.827 08:04:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:17:18.827 08:04:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.827 08:04:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:18.827 08:04:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.827 08:04:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:17:18.827 08:04:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.827 08:04:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:18.827 08:04:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.827 08:04:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:18.827 08:04:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.827 08:04:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:18.827 [2024-07-13 08:04:10.520604] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:18.827 08:04:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:17:18.827 08:04:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:18.827 08:04:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.827 08:04:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:19.085 Malloc0 00:17:19.085 08:04:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:19.085 08:04:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:19.085 08:04:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:19.085 08:04:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:19.085 08:04:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:19.085 08:04:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:19.085 08:04:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:19.085 08:04:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:19.085 08:04:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:19.085 08:04:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:19.085 08:04:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:19.085 08:04:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:19.085 [2024-07-13 08:04:10.589621] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:19.085 08:04:10 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:19.085 08:04:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1942987 00:17:19.085 08:04:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1942988 00:17:19.085 08:04:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:17:19.085 08:04:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:17:19.085 08:04:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1942990 00:17:19.085 08:04:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:19.085 08:04:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:19.085 08:04:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:17:19.085 08:04:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:17:19.085 08:04:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:19.086 08:04:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:19.086 { 00:17:19.086 "params": { 00:17:19.086 "name": "Nvme$subsystem", 00:17:19.086 "trtype": "$TEST_TRANSPORT", 00:17:19.086 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:19.086 "adrfam": "ipv4", 00:17:19.086 "trsvcid": "$NVMF_PORT", 00:17:19.086 
"subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:19.086 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:19.086 "hdgst": ${hdgst:-false}, 00:17:19.086 "ddgst": ${ddgst:-false} 00:17:19.086 }, 00:17:19.086 "method": "bdev_nvme_attach_controller" 00:17:19.086 } 00:17:19.086 EOF 00:17:19.086 )") 00:17:19.086 08:04:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:19.086 08:04:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:19.086 08:04:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1942993 00:17:19.086 08:04:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:17:19.086 08:04:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:17:19.086 08:04:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:19.086 08:04:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:17:19.086 08:04:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:19.086 { 00:17:19.086 "params": { 00:17:19.086 "name": "Nvme$subsystem", 00:17:19.086 "trtype": "$TEST_TRANSPORT", 00:17:19.086 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:19.086 "adrfam": "ipv4", 00:17:19.086 "trsvcid": "$NVMF_PORT", 00:17:19.086 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:19.086 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:19.086 "hdgst": ${hdgst:-false}, 00:17:19.086 "ddgst": ${ddgst:-false} 00:17:19.086 }, 00:17:19.086 "method": "bdev_nvme_attach_controller" 00:17:19.086 } 00:17:19.086 EOF 00:17:19.086 )") 00:17:19.086 08:04:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:19.086 08:04:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:19.086 08:04:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:19.086 08:04:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:17:19.086 08:04:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:17:19.086 08:04:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:19.086 { 00:17:19.086 "params": { 00:17:19.086 "name": "Nvme$subsystem", 00:17:19.086 "trtype": "$TEST_TRANSPORT", 00:17:19.086 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:19.086 "adrfam": "ipv4", 00:17:19.086 "trsvcid": "$NVMF_PORT", 00:17:19.086 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:19.086 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:19.086 "hdgst": ${hdgst:-false}, 00:17:19.086 "ddgst": ${ddgst:-false} 00:17:19.086 }, 00:17:19.086 "method": "bdev_nvme_attach_controller" 00:17:19.086 } 00:17:19.086 EOF 00:17:19.086 )") 00:17:19.086 08:04:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:19.086 08:04:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:19.086 08:04:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:17:19.086 08:04:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:19.086 08:04:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:19.086 { 00:17:19.086 "params": { 
00:17:19.086 "name": "Nvme$subsystem", 00:17:19.086 "trtype": "$TEST_TRANSPORT", 00:17:19.086 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:19.086 "adrfam": "ipv4", 00:17:19.086 "trsvcid": "$NVMF_PORT", 00:17:19.086 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:19.086 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:19.086 "hdgst": ${hdgst:-false}, 00:17:19.086 "ddgst": ${ddgst:-false} 00:17:19.086 }, 00:17:19.086 "method": "bdev_nvme_attach_controller" 00:17:19.086 } 00:17:19.086 EOF 00:17:19.086 )") 00:17:19.086 08:04:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:17:19.086 08:04:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:17:19.086 08:04:10 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1942987 00:17:19.086 08:04:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:17:19.086 08:04:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:17:19.086 08:04:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:17:19.086 08:04:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:17:19.086 08:04:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:17:19.086 08:04:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:19.086 08:04:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:19.086 "params": { 00:17:19.086 "name": "Nvme1", 00:17:19.086 "trtype": "tcp", 00:17:19.086 "traddr": "10.0.0.2", 00:17:19.086 "adrfam": "ipv4", 00:17:19.086 "trsvcid": "4420", 00:17:19.086 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:19.086 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:19.086 "hdgst": false, 00:17:19.086 "ddgst": false 00:17:19.086 }, 00:17:19.086 "method": "bdev_nvme_attach_controller" 00:17:19.086 }' 00:17:19.086 08:04:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:19.086 08:04:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:19.086 "params": { 00:17:19.086 "name": "Nvme1", 00:17:19.086 "trtype": "tcp", 00:17:19.086 "traddr": "10.0.0.2", 00:17:19.086 "adrfam": "ipv4", 00:17:19.086 "trsvcid": "4420", 00:17:19.086 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:19.086 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:19.086 "hdgst": false, 00:17:19.086 "ddgst": false 00:17:19.086 }, 00:17:19.086 "method": "bdev_nvme_attach_controller" 00:17:19.086 }' 00:17:19.086 08:04:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:19.086 08:04:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:19.086 "params": { 00:17:19.086 "name": "Nvme1", 00:17:19.086 "trtype": "tcp", 00:17:19.086 "traddr": "10.0.0.2", 00:17:19.086 "adrfam": "ipv4", 00:17:19.086 "trsvcid": "4420", 00:17:19.086 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:19.086 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:19.086 "hdgst": false, 00:17:19.086 "ddgst": false 00:17:19.086 }, 00:17:19.086 "method": "bdev_nvme_attach_controller" 00:17:19.086 }' 00:17:19.086 08:04:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:19.086 08:04:10 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:19.086 "params": { 00:17:19.086 "name": "Nvme1", 00:17:19.086 "trtype": "tcp", 00:17:19.086 "traddr": "10.0.0.2", 00:17:19.086 "adrfam": "ipv4", 00:17:19.086 "trsvcid": "4420", 00:17:19.086 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:19.086 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:19.086 "hdgst": false, 00:17:19.086 "ddgst": false 00:17:19.086 }, 00:17:19.086 "method": 
"bdev_nvme_attach_controller" 00:17:19.086 }' 00:17:19.086 [2024-07-13 08:04:10.637128] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:17:19.086 [2024-07-13 08:04:10.637219] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:17:19.086 [2024-07-13 08:04:10.638108] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:17:19.086 [2024-07-13 08:04:10.638109] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:17:19.086 [2024-07-13 08:04:10.638109] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:17:19.086 [2024-07-13 08:04:10.638199] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-07-13 08:04:10.638200] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-07-13 08:04:10.638202] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:17:19.086 --proc-type=auto ] 00:17:19.086 --proc-type=auto ] 00:17:19.086 EAL: No free 2048 kB hugepages reported on node 1 00:17:19.086 EAL: No free 2048 kB hugepages reported on node 1 00:17:19.086 [2024-07-13 08:04:10.817251] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:19.344 EAL: No free 2048 kB hugepages reported on node 1 00:17:19.344 [2024-07-13 08:04:10.892111] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:17:19.344 [2024-07-13 08:04:10.917308] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:19.344 EAL: No free 2048 kB hugepages reported on node 1 00:17:19.344 [2024-07-13 08:04:10.992967] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:17:19.344 [2024-07-13 08:04:11.016625] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:19.602 [2024-07-13 08:04:11.087181] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:19.602 [2024-07-13 08:04:11.090727] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:17:19.602 [2024-07-13 08:04:11.156777] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:17:19.602 Running I/O for 1 seconds... 00:17:19.863 Running I/O for 1 seconds... 00:17:19.863 Running I/O for 1 seconds... 00:17:19.863 Running I/O for 1 seconds... 
00:17:20.800 00:17:20.800 Latency(us) 00:17:20.800 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:20.800 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:17:20.800 Nvme1n1 : 1.01 9038.11 35.31 0.00 0.00 14088.26 9369.22 20097.71 00:17:20.800 =================================================================================================================== 00:17:20.800 Total : 9038.11 35.31 0.00 0.00 14088.26 9369.22 20097.71 00:17:20.800 00:17:20.800 Latency(us) 00:17:20.800 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:20.800 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:17:20.800 Nvme1n1 : 1.01 7959.83 31.09 0.00 0.00 16001.74 6359.42 20874.43 00:17:20.800 =================================================================================================================== 00:17:20.800 Total : 7959.83 31.09 0.00 0.00 16001.74 6359.42 20874.43 00:17:20.800 00:17:20.800 Latency(us) 00:17:20.800 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:20.800 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:17:20.800 Nvme1n1 : 1.01 8819.37 34.45 0.00 0.00 14457.07 2985.53 22039.51 00:17:20.800 =================================================================================================================== 00:17:20.800 Total : 8819.37 34.45 0.00 0.00 14457.07 2985.53 22039.51 00:17:20.800 00:17:20.800 Latency(us) 00:17:20.800 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:20.800 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:17:20.800 Nvme1n1 : 1.00 199175.07 778.03 0.00 0.00 639.97 282.17 855.61 00:17:20.800 =================================================================================================================== 00:17:20.800 Total : 199175.07 778.03 0.00 0.00 639.97 282.17 855.61 00:17:21.059 08:04:12 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1942988 00:17:21.059 08:04:12 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1942990 00:17:21.319 08:04:12 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1942993 00:17:21.319 08:04:12 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:21.319 08:04:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:21.319 08:04:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:21.319 08:04:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:21.319 08:04:12 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:17:21.319 08:04:12 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:17:21.319 08:04:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:21.320 08:04:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:17:21.320 08:04:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:21.320 08:04:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:17:21.320 08:04:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:21.320 08:04:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:21.320 rmmod nvme_tcp 00:17:21.320 rmmod nvme_fabrics 00:17:21.320 rmmod nvme_keyring 00:17:21.320 08:04:12 nvmf_tcp.nvmf_bdev_io_wait 
-- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:21.320 08:04:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:17:21.320 08:04:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:17:21.320 08:04:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 1942964 ']' 00:17:21.320 08:04:12 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 1942964 00:17:21.320 08:04:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 1942964 ']' 00:17:21.320 08:04:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 1942964 00:17:21.320 08:04:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:17:21.320 08:04:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:21.320 08:04:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1942964 00:17:21.320 08:04:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:21.320 08:04:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:21.320 08:04:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1942964' 00:17:21.320 killing process with pid 1942964 00:17:21.320 08:04:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 1942964 00:17:21.320 08:04:12 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 1942964 00:17:21.578 08:04:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:21.578 08:04:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:21.578 08:04:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:21.578 08:04:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:21.578 08:04:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:21.578 08:04:13 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:21.578 08:04:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:21.578 08:04:13 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:23.482 08:04:15 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:23.482 00:17:23.482 real 0m7.117s 00:17:23.482 user 0m16.266s 00:17:23.482 sys 0m3.799s 00:17:23.482 08:04:15 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:23.482 08:04:15 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:23.482 ************************************ 00:17:23.482 END TEST nvmf_bdev_io_wait 00:17:23.482 ************************************ 00:17:23.482 08:04:15 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:23.482 08:04:15 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:17:23.482 08:04:15 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:23.482 08:04:15 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:23.482 08:04:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:23.482 ************************************ 00:17:23.482 START TEST nvmf_queue_depth 00:17:23.482 ************************************ 
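For reference when reading the queue_depth trace below: it brings up the NVMe-oF target with the same short RPC sequence traced in the nvmf_bdev_io_wait test above. A minimal sketch of that sequence, assuming manual replay through SPDK's scripts/rpc.py against the default /var/tmp/spdk.sock (an assumption for illustration; the harness itself issues these via rpc_cmd inside the target's network namespace):
  # Sketch: target-side setup as traced in these tests
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192        # TCP transport, options as passed in the trace
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0           # 64 MB RAM-backed bdev, 512-byte blocks
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420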
00:17:23.482 08:04:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:17:23.741 * Looking for test storage... 00:17:23.741 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:23.741 08:04:15 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:23.741 08:04:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:17:23.741 08:04:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:23.741 08:04:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:23.741 08:04:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:23.741 08:04:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:23.741 08:04:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:23.741 08:04:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:23.741 08:04:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:23.741 08:04:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:23.741 08:04:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:23.741 08:04:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:23.741 08:04:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:23.741 08:04:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:23.741 08:04:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:23.741 08:04:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:23.741 08:04:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:23.741 08:04:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:23.741 08:04:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:23.741 08:04:15 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:23.741 08:04:15 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:23.741 08:04:15 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:23.741 08:04:15 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:23.741 08:04:15 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:23.741 08:04:15 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:23.741 08:04:15 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:17:23.741 08:04:15 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:23.741 08:04:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:17:23.741 08:04:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:23.741 08:04:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:23.741 08:04:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:23.741 08:04:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:23.741 08:04:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:23.741 08:04:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:23.741 08:04:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:23.741 08:04:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:23.741 08:04:15 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:17:23.741 08:04:15 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:17:23.741 08:04:15 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:23.741 08:04:15 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:17:23.741 08:04:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:23.741 08:04:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:23.741 08:04:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:23.741 08:04:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:23.741 08:04:15 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:17:23.741 08:04:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:23.741 08:04:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:23.741 08:04:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:23.741 08:04:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:23.741 08:04:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:23.741 08:04:15 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:17:23.741 08:04:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:25.687 08:04:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:25.687 08:04:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:17:25.687 08:04:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:25.687 08:04:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:25.687 08:04:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:25.687 08:04:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:25.687 08:04:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:25.687 08:04:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:17:25.687 08:04:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:25.687 08:04:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:17:25.687 08:04:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:17:25.687 08:04:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:17:25.687 08:04:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:17:25.687 08:04:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:17:25.687 08:04:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:17:25.687 08:04:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:25.687 08:04:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:25.687 08:04:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:25.687 08:04:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:25.687 08:04:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:25.687 08:04:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:25.687 08:04:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:25.687 08:04:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:25.687 08:04:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:25.687 08:04:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:25.687 08:04:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:25.687 08:04:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:25.687 
08:04:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:25.687 08:04:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:25.687 08:04:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:25.687 08:04:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:25.687 08:04:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:25.687 08:04:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:25.687 08:04:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:25.687 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:25.687 08:04:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:25.687 08:04:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:25.687 08:04:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:25.687 08:04:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:25.687 08:04:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:25.687 08:04:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:25.687 08:04:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:25.687 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:25.687 08:04:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:25.687 08:04:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:25.687 08:04:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:25.687 08:04:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:25.687 08:04:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:25.687 08:04:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:25.687 08:04:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:25.687 08:04:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:25.687 08:04:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:25.687 08:04:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:25.687 08:04:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:25.687 08:04:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:25.687 08:04:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:25.687 08:04:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:25.687 08:04:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:25.687 08:04:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:25.687 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:25.687 08:04:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:25.687 08:04:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:25.687 08:04:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:25.687 08:04:17 nvmf_tcp.nvmf_queue_depth 
-- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:25.687 08:04:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:25.688 08:04:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:25.688 08:04:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:25.688 08:04:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:25.688 08:04:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:25.688 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:25.688 08:04:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:25.688 08:04:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:25.688 08:04:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:17:25.688 08:04:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:25.688 08:04:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:25.688 08:04:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:25.688 08:04:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:25.688 08:04:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:25.688 08:04:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:25.688 08:04:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:25.688 08:04:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:25.688 08:04:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:25.688 08:04:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:25.688 08:04:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:25.688 08:04:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:25.688 08:04:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:25.688 08:04:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:25.688 08:04:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:25.688 08:04:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:25.946 08:04:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:25.946 08:04:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:25.946 08:04:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:25.946 08:04:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:25.946 08:04:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:25.946 08:04:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:25.946 08:04:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:25.946 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:25.946 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.251 ms 00:17:25.946 00:17:25.946 --- 10.0.0.2 ping statistics --- 00:17:25.946 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:25.946 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:17:25.946 08:04:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:25.946 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:25.946 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:17:25.946 00:17:25.946 --- 10.0.0.1 ping statistics --- 00:17:25.946 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:25.946 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:17:25.946 08:04:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:25.946 08:04:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:17:25.946 08:04:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:25.946 08:04:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:25.946 08:04:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:25.946 08:04:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:25.946 08:04:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:25.946 08:04:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:25.946 08:04:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:25.946 08:04:17 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:17:25.946 08:04:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:25.946 08:04:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:25.946 08:04:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:25.946 08:04:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=1945207 00:17:25.946 08:04:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 1945207 00:17:25.946 08:04:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 1945207 ']' 00:17:25.946 08:04:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:25.946 08:04:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:25.946 08:04:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:25.946 08:04:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:25.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:25.946 08:04:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:25.946 08:04:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:25.946 [2024-07-13 08:04:17.577100] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:17:25.946 [2024-07-13 08:04:17.577199] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:25.946 EAL: No free 2048 kB hugepages reported on node 1 00:17:25.946 [2024-07-13 08:04:17.641025] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:26.204 [2024-07-13 08:04:17.726672] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:26.204 [2024-07-13 08:04:17.726738] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:26.204 [2024-07-13 08:04:17.726762] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:26.204 [2024-07-13 08:04:17.726773] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:26.204 [2024-07-13 08:04:17.726782] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:26.204 [2024-07-13 08:04:17.726822] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:26.204 08:04:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:26.204 08:04:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:17:26.204 08:04:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:26.204 08:04:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:26.204 08:04:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:26.204 08:04:17 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:26.204 08:04:17 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:26.204 08:04:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.204 08:04:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:26.204 [2024-07-13 08:04:17.866850] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:26.204 08:04:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.204 08:04:17 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:26.204 08:04:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.204 08:04:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:26.204 Malloc0 00:17:26.204 08:04:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.204 08:04:17 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:26.204 08:04:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.204 08:04:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:26.204 08:04:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.204 08:04:17 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:26.204 08:04:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.204 
08:04:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:26.204 08:04:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.205 08:04:17 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:26.205 08:04:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.205 08:04:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:26.205 [2024-07-13 08:04:17.924690] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:26.205 08:04:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.205 08:04:17 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1945356 00:17:26.205 08:04:17 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:17:26.205 08:04:17 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:26.205 08:04:17 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1945356 /var/tmp/bdevperf.sock 00:17:26.205 08:04:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 1945356 ']' 00:17:26.205 08:04:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:26.205 08:04:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:26.205 08:04:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:26.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:26.205 08:04:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:26.205 08:04:17 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:26.463 [2024-07-13 08:04:17.971548] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:17:26.463 [2024-07-13 08:04:17.971612] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1945356 ] 00:17:26.463 EAL: No free 2048 kB hugepages reported on node 1 00:17:26.463 [2024-07-13 08:04:18.032645] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:26.463 [2024-07-13 08:04:18.122624] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:26.721 08:04:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:26.721 08:04:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:17:26.721 08:04:18 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:26.721 08:04:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.721 08:04:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:26.978 NVMe0n1 00:17:26.978 08:04:18 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.978 08:04:18 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:26.978 Running I/O for 10 seconds... 00:17:39.172 00:17:39.172 Latency(us) 00:17:39.172 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:39.173 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:17:39.173 Verification LBA range: start 0x0 length 0x4000 00:17:39.173 NVMe0n1 : 10.10 8595.90 33.58 0.00 0.00 118613.53 24272.59 72623.60 00:17:39.173 =================================================================================================================== 00:17:39.173 Total : 8595.90 33.58 0.00 0.00 118613.53 24272.59 72623.60 00:17:39.173 0 00:17:39.173 08:04:28 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 1945356 00:17:39.173 08:04:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 1945356 ']' 00:17:39.173 08:04:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 1945356 00:17:39.173 08:04:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:17:39.173 08:04:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:39.173 08:04:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1945356 00:17:39.173 08:04:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:39.173 08:04:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:39.173 08:04:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1945356' 00:17:39.173 killing process with pid 1945356 00:17:39.173 08:04:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 1945356 00:17:39.173 Received shutdown signal, test time was about 10.000000 seconds 00:17:39.173 00:17:39.173 Latency(us) 00:17:39.173 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:39.173 
=================================================================================================================== 00:17:39.173 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:39.173 08:04:28 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 1945356 00:17:39.173 08:04:28 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:17:39.173 08:04:28 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:17:39.173 08:04:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:39.173 08:04:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:17:39.173 08:04:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:39.173 08:04:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:17:39.173 08:04:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:39.173 08:04:28 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:39.173 rmmod nvme_tcp 00:17:39.173 rmmod nvme_fabrics 00:17:39.173 rmmod nvme_keyring 00:17:39.173 08:04:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:39.173 08:04:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:17:39.173 08:04:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:17:39.173 08:04:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 1945207 ']' 00:17:39.173 08:04:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 1945207 00:17:39.173 08:04:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 1945207 ']' 00:17:39.173 08:04:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 1945207 00:17:39.173 08:04:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:17:39.173 08:04:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:39.173 08:04:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1945207 00:17:39.173 08:04:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:39.173 08:04:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:39.173 08:04:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1945207' 00:17:39.173 killing process with pid 1945207 00:17:39.173 08:04:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 1945207 00:17:39.173 08:04:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 1945207 00:17:39.173 08:04:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:39.173 08:04:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:39.173 08:04:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:39.173 08:04:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:39.173 08:04:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:39.173 08:04:29 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:39.173 08:04:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:39.173 08:04:29 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:39.741 08:04:31 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:39.741 00:17:39.741 real 0m16.153s 00:17:39.741 user 0m22.672s 00:17:39.741 sys 0m3.076s 00:17:39.741 08:04:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:39.741 08:04:31 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:39.741 ************************************ 00:17:39.741 END TEST nvmf_queue_depth 00:17:39.741 ************************************ 00:17:39.741 08:04:31 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:39.741 08:04:31 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:17:39.741 08:04:31 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:39.741 08:04:31 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:39.741 08:04:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:39.741 ************************************ 00:17:39.741 START TEST nvmf_target_multipath 00:17:39.741 ************************************ 00:17:39.741 08:04:31 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:17:39.741 * Looking for test storage... 00:17:39.741 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:39.741 08:04:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:39.741 08:04:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:17:39.741 08:04:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:39.741 08:04:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:39.741 08:04:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:39.741 08:04:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:39.741 08:04:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:39.741 08:04:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:39.741 08:04:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:39.741 08:04:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:39.741 08:04:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:39.741 08:04:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:39.741 08:04:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:39.741 08:04:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:39.741 08:04:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:39.741 08:04:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:39.741 08:04:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:39.741 08:04:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:39.741 08:04:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:39.741 08:04:31 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:39.741 08:04:31 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:39.741 08:04:31 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:39.741 08:04:31 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:39.741 08:04:31 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:39.741 08:04:31 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:39.741 08:04:31 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:17:39.741 08:04:31 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:39.741 08:04:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:17:39.741 08:04:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:39.741 08:04:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:39.741 08:04:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:39.741 08:04:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:39.741 08:04:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:17:39.741 08:04:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:39.741 08:04:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:39.741 08:04:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:39.741 08:04:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:39.741 08:04:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:39.741 08:04:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:17:39.741 08:04:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:39.741 08:04:31 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:17:39.741 08:04:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:39.741 08:04:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:39.741 08:04:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:39.741 08:04:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:39.741 08:04:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:39.741 08:04:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:39.741 08:04:31 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:39.741 08:04:31 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:39.741 08:04:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:39.741 08:04:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:39.741 08:04:31 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:17:39.741 08:04:31 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:41.643 08:04:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:41.643 08:04:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:17:41.643 08:04:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:41.643 08:04:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:41.643 08:04:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:41.643 08:04:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:41.643 08:04:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:41.643 08:04:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:17:41.643 08:04:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:41.643 08:04:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:17:41.643 08:04:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:17:41.643 08:04:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:17:41.643 08:04:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:17:41.643 08:04:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:17:41.643 08:04:33 nvmf_tcp.nvmf_target_multipath 
-- nvmf/common.sh@298 -- # local -ga mlx 00:17:41.643 08:04:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:41.643 08:04:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:41.643 08:04:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:41.643 08:04:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:41.643 08:04:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:41.643 08:04:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:41.643 08:04:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:41.643 08:04:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:41.643 08:04:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:41.643 08:04:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:41.643 08:04:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:41.643 08:04:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:41.643 08:04:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:41.643 08:04:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:41.643 08:04:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:41.643 08:04:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:41.643 08:04:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:41.643 08:04:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:41.643 08:04:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:41.643 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:41.643 08:04:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:41.643 08:04:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:41.643 08:04:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:41.643 08:04:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:41.643 08:04:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:41.643 08:04:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:41.643 08:04:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:41.643 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:41.643 08:04:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:41.643 08:04:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:41.643 08:04:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:41.643 08:04:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:41.643 08:04:33 nvmf_tcp.nvmf_target_multipath -- 
nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:41.643 08:04:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:41.643 08:04:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:41.643 08:04:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:41.643 08:04:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:41.643 08:04:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:41.643 08:04:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:41.643 08:04:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:41.643 08:04:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:41.643 08:04:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:41.643 08:04:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:41.643 08:04:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:41.643 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:41.643 08:04:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:41.643 08:04:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:41.643 08:04:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:41.643 08:04:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:41.643 08:04:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:41.643 08:04:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:41.643 08:04:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:41.643 08:04:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:41.643 08:04:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:41.643 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:41.643 08:04:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:41.643 08:04:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:41.643 08:04:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:17:41.643 08:04:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:41.643 08:04:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:41.643 08:04:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:41.643 08:04:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:41.644 08:04:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:41.644 08:04:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:41.644 08:04:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:41.644 08:04:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:41.644 08:04:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@237 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:41.644 08:04:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:41.644 08:04:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:41.644 08:04:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:41.644 08:04:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:41.644 08:04:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:41.644 08:04:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:41.644 08:04:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:41.644 08:04:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:41.644 08:04:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:41.644 08:04:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:41.644 08:04:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:41.903 08:04:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:41.903 08:04:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:41.903 08:04:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:41.903 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:41.903 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.200 ms 00:17:41.903 00:17:41.903 --- 10.0.0.2 ping statistics --- 00:17:41.903 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:41.903 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:17:41.903 08:04:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:41.903 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:41.903 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms 00:17:41.903 00:17:41.903 --- 10.0.0.1 ping statistics --- 00:17:41.903 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:41.903 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:17:41.903 08:04:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:41.903 08:04:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:17:41.903 08:04:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:41.903 08:04:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:41.903 08:04:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:41.903 08:04:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:41.903 08:04:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:41.903 08:04:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:41.903 08:04:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:41.903 08:04:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:17:41.903 08:04:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:17:41.903 only one NIC for nvmf test 00:17:41.903 08:04:33 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:17:41.903 08:04:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:41.903 08:04:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:17:41.903 08:04:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:41.903 08:04:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:17:41.903 08:04:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:41.903 08:04:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:41.903 rmmod nvme_tcp 00:17:41.903 rmmod nvme_fabrics 00:17:41.903 rmmod nvme_keyring 00:17:41.903 08:04:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:41.903 08:04:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:17:41.903 08:04:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:17:41.903 08:04:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:17:41.903 08:04:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:41.903 08:04:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:41.903 08:04:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:41.903 08:04:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:41.903 08:04:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:41.903 08:04:33 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:41.903 08:04:33 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:41.903 08:04:33 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:43.806 08:04:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush 
cvl_0_1 00:17:43.806 08:04:35 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:17:43.806 08:04:35 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:17:43.806 08:04:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:43.806 08:04:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:17:43.806 08:04:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:43.806 08:04:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:17:43.806 08:04:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:43.806 08:04:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:43.806 08:04:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:43.806 08:04:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:17:43.806 08:04:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:17:43.807 08:04:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:17:43.807 08:04:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:43.807 08:04:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:43.807 08:04:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:43.807 08:04:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:43.807 08:04:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:43.807 08:04:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:43.807 08:04:35 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:43.807 08:04:35 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:43.807 08:04:35 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:43.807 00:17:43.807 real 0m4.145s 00:17:43.807 user 0m0.721s 00:17:43.807 sys 0m1.421s 00:17:43.807 08:04:35 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:43.807 08:04:35 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:17:43.807 ************************************ 00:17:43.807 END TEST nvmf_target_multipath 00:17:43.807 ************************************ 00:17:44.066 08:04:35 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:44.066 08:04:35 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:17:44.066 08:04:35 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:44.066 08:04:35 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:44.066 08:04:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:44.066 ************************************ 00:17:44.066 START TEST nvmf_zcopy 00:17:44.066 ************************************ 00:17:44.066 08:04:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:17:44.066 * Looking for test storage... 
00:17:44.066 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:44.066 08:04:35 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:44.066 08:04:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:17:44.066 08:04:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:44.066 08:04:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:44.066 08:04:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:44.066 08:04:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:44.066 08:04:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:44.066 08:04:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:44.066 08:04:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:44.066 08:04:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:44.066 08:04:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:44.066 08:04:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:44.066 08:04:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:44.066 08:04:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:44.066 08:04:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:44.066 08:04:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:44.066 08:04:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:44.066 08:04:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:44.066 08:04:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:44.066 08:04:35 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:44.066 08:04:35 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:44.066 08:04:35 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:44.066 08:04:35 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:44.066 08:04:35 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:17:44.066 08:04:35 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:44.066 08:04:35 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:17:44.066 08:04:35 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:44.066 08:04:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:17:44.066 08:04:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:44.066 08:04:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:44.066 08:04:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:44.066 08:04:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:44.066 08:04:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:44.066 08:04:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:44.066 08:04:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:44.066 08:04:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:44.066 08:04:35 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:17:44.066 08:04:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:44.066 08:04:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:44.066 08:04:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:44.066 08:04:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:44.066 08:04:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:44.066 08:04:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:44.066 08:04:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:44.066 08:04:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:44.066 08:04:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:44.066 08:04:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:44.066 08:04:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:17:44.066 08:04:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:45.967 08:04:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:45.967 08:04:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:17:45.968 08:04:37 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:17:45.968 08:04:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:45.968 08:04:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:45.968 08:04:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:45.968 08:04:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:45.968 08:04:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:17:45.968 08:04:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:45.968 08:04:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:17:45.968 08:04:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:17:45.968 08:04:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:17:45.968 08:04:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:17:45.968 08:04:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:17:45.968 08:04:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:17:45.968 08:04:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:45.968 08:04:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:45.968 08:04:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:45.968 08:04:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:45.968 08:04:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:45.968 08:04:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:45.968 08:04:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:45.968 08:04:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:45.968 08:04:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:45.968 08:04:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:45.968 08:04:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:45.968 08:04:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:45.968 08:04:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:45.968 08:04:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:45.968 08:04:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:45.968 08:04:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:45.968 08:04:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:45.968 08:04:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:45.968 08:04:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:45.968 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:45.968 08:04:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:45.968 08:04:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:45.968 08:04:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:45.968 08:04:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:45.968 08:04:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:45.968 
08:04:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:45.968 08:04:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:45.968 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:45.968 08:04:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:45.968 08:04:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:45.968 08:04:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:45.968 08:04:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:45.968 08:04:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:45.968 08:04:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:45.968 08:04:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:45.968 08:04:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:45.968 08:04:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:45.968 08:04:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:45.968 08:04:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:45.968 08:04:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:45.968 08:04:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:45.968 08:04:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:45.968 08:04:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:45.968 08:04:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:45.968 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:45.968 08:04:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:45.968 08:04:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:45.968 08:04:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:45.968 08:04:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:45.968 08:04:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:45.968 08:04:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:45.968 08:04:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:45.968 08:04:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:45.968 08:04:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:45.968 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:45.968 08:04:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:45.968 08:04:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:45.968 08:04:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:17:45.968 08:04:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:45.968 08:04:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:45.968 08:04:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:45.968 08:04:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:45.968 08:04:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:45.968 08:04:37 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:45.968 08:04:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:45.968 08:04:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:45.968 08:04:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:45.968 08:04:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:45.968 08:04:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:45.968 08:04:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:45.968 08:04:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:45.968 08:04:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:45.968 08:04:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:45.968 08:04:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:45.968 08:04:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:45.968 08:04:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:45.968 08:04:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:45.968 08:04:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:46.226 08:04:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:46.226 08:04:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:46.226 08:04:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:46.226 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:46.226 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.194 ms 00:17:46.226 00:17:46.226 --- 10.0.0.2 ping statistics --- 00:17:46.226 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:46.226 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:17:46.226 08:04:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:46.226 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:46.226 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.136 ms 00:17:46.226 00:17:46.226 --- 10.0.0.1 ping statistics --- 00:17:46.226 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:46.226 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:17:46.226 08:04:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:46.226 08:04:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:17:46.226 08:04:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:46.226 08:04:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:46.226 08:04:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:46.226 08:04:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:46.226 08:04:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:46.226 08:04:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:46.226 08:04:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:46.226 08:04:37 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:17:46.226 08:04:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:46.226 08:04:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:46.226 08:04:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:46.226 08:04:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=1950399 00:17:46.226 08:04:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:46.226 08:04:37 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 1950399 00:17:46.226 08:04:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 1950399 ']' 00:17:46.226 08:04:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:46.226 08:04:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:46.226 08:04:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:46.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:46.226 08:04:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:46.226 08:04:37 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:46.226 [2024-07-13 08:04:37.802960] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:17:46.226 [2024-07-13 08:04:37.803033] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:46.226 EAL: No free 2048 kB hugepages reported on node 1 00:17:46.226 [2024-07-13 08:04:37.872623] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:46.485 [2024-07-13 08:04:37.963739] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:46.485 [2024-07-13 08:04:37.963790] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:46.485 [2024-07-13 08:04:37.963815] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:46.485 [2024-07-13 08:04:37.963828] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:46.485 [2024-07-13 08:04:37.963840] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:46.485 [2024-07-13 08:04:37.963895] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:46.485 08:04:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:46.485 08:04:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:17:46.485 08:04:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:46.485 08:04:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:46.485 08:04:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:46.485 08:04:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:46.485 08:04:38 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:17:46.485 08:04:38 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:17:46.485 08:04:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.485 08:04:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:46.485 [2024-07-13 08:04:38.115333] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:46.485 08:04:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.485 08:04:38 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:46.485 08:04:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.485 08:04:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:46.485 08:04:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.485 08:04:38 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:46.485 08:04:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.485 08:04:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:46.485 [2024-07-13 08:04:38.131518] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:46.485 08:04:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.485 08:04:38 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:46.485 08:04:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.485 08:04:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:46.485 08:04:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.485 08:04:38 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:17:46.485 08:04:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.485 08:04:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:46.485 malloc0 00:17:46.485 08:04:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.485 
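The target-side setup for the zcopy test, traced step by step above and completed just below with the namespace attach, amounts to roughly this RPC sequence; a sketch assembled from the rpc_cmd calls in the log, again with scripts/rpc.py standing in for the harness wrapper:

  scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy            # TCP transport, zero-copy enabled
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0                   # 32 MB malloc bdev, 4096-byte blocks
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1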
08:04:38 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:46.485 08:04:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.485 08:04:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:46.485 08:04:38 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.485 08:04:38 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:17:46.485 08:04:38 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:17:46.485 08:04:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:17:46.485 08:04:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:17:46.485 08:04:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:46.485 08:04:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:46.485 { 00:17:46.485 "params": { 00:17:46.485 "name": "Nvme$subsystem", 00:17:46.485 "trtype": "$TEST_TRANSPORT", 00:17:46.485 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:46.485 "adrfam": "ipv4", 00:17:46.485 "trsvcid": "$NVMF_PORT", 00:17:46.485 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:46.485 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:46.485 "hdgst": ${hdgst:-false}, 00:17:46.485 "ddgst": ${ddgst:-false} 00:17:46.485 }, 00:17:46.485 "method": "bdev_nvme_attach_controller" 00:17:46.485 } 00:17:46.485 EOF 00:17:46.485 )") 00:17:46.485 08:04:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:17:46.485 08:04:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:17:46.485 08:04:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:17:46.485 08:04:38 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:46.485 "params": { 00:17:46.485 "name": "Nvme1", 00:17:46.485 "trtype": "tcp", 00:17:46.485 "traddr": "10.0.0.2", 00:17:46.485 "adrfam": "ipv4", 00:17:46.485 "trsvcid": "4420", 00:17:46.485 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:46.485 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:46.485 "hdgst": false, 00:17:46.485 "ddgst": false 00:17:46.485 }, 00:17:46.485 "method": "bdev_nvme_attach_controller" 00:17:46.485 }' 00:17:46.485 [2024-07-13 08:04:38.215221] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:17:46.485 [2024-07-13 08:04:38.215306] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1950426 ] 00:17:46.743 EAL: No free 2048 kB hugepages reported on node 1 00:17:46.743 [2024-07-13 08:04:38.284244] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:46.743 [2024-07-13 08:04:38.377636] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:47.001 Running I/O for 10 seconds... 
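The JSON blob printed above is how the test hands target coordinates to bdevperf: gen_nvmf_target_json emits a bdev_nvme_attach_controller config, and the wrapper feeds it in over an anonymous file descriptor (/dev/fd/62 here). A standalone equivalent, assuming bash process substitution is the mechanism behind that descriptor, would be:

  build/examples/bdevperf --json <(gen_nvmf_target_json) -t 10 -q 128 -w verify -o 8192

so bdevperf constructs the Nvme1n1 bdev from the config at startup and then runs the 128-deep, 8192-byte verify workload whose results follow.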
00:17:57.022
00:17:57.022 Latency(us)
00:17:57.023 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:57.023 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:17:57.023 Verification LBA range: start 0x0 length 0x1000
00:17:57.023 Nvme1n1 : 10.02 5873.98 45.89 0.00 0.00 21731.14 3276.80 32039.82
00:17:57.023 ===================================================================================================================
00:17:57.023 Total : 5873.98 45.89 0.00 0.00 21731.14 3276.80 32039.82
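The verify run completes at roughly 5.9k IOPS. The MiB/s column follows directly from IOPS times IO size (8192 bytes, the -o value passed to bdevperf), which is a handy consistency check when reading these tables:

    # 5873.98 IOPS * 8192 B/IO, converted to MiB/s (1 MiB = 1048576 B):
    echo '5873.98 * 8192 / 1048576' | bc -l     # ~45.89, matching the MiB/s column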
00:17:57.280 08:04:48 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1951731
00:17:57.280 08:04:48 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:17:57.280 08:04:48 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:17:57.280 08:04:48 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:17:57.280 08:04:48 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:17:57.280 08:04:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=()
00:17:57.280 08:04:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config
00:17:57.280 08:04:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:17:57.280 08:04:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:17:57.280 {
00:17:57.280 "params": {
00:17:57.280 "name": "Nvme$subsystem",
00:17:57.280 "trtype": "$TEST_TRANSPORT",
00:17:57.280 "traddr": "$NVMF_FIRST_TARGET_IP",
00:17:57.280 "adrfam": "ipv4",
00:17:57.280 "trsvcid": "$NVMF_PORT",
00:17:57.280 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:17:57.280 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:17:57.280 "hdgst": ${hdgst:-false},
00:17:57.280 "ddgst": ${ddgst:-false}
00:17:57.280 },
00:17:57.280 "method": "bdev_nvme_attach_controller"
00:17:57.280 }
00:17:57.280 EOF
00:17:57.280 )")
00:17:57.280 08:04:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat
00:17:57.280 [2024-07-13 08:04:48.877662] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:57.280 [2024-07-13 08:04:48.877708] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:57.280 08:04:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq .
00:17:57.280 08:04:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=,
00:17:57.280 08:04:48 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:17:57.280 "params": {
00:17:57.280 "name": "Nvme1",
00:17:57.280 "trtype": "tcp",
00:17:57.280 "traddr": "10.0.0.2",
00:17:57.280 "adrfam": "ipv4",
00:17:57.280 "trsvcid": "4420",
00:17:57.280 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:17:57.280 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:17:57.280 "hdgst": false,
00:17:57.280 "ddgst": false
00:17:57.280 },
00:17:57.280 "method": "bdev_nvme_attach_controller"
00:17:57.280 }'
[... from 08:04:48.885634 through 08:04:51.533016 the target logs the same two-line *ERROR* pair - subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: "Requested NSID 1 already in use" followed by nvmf_rpc.c:1546:nvmf_rpc_ns_paused: "Unable to add namespace" - roughly every 8-10 ms, several hundred times; the repetitions are elided here and only the other output interleaved in that window is kept below ...]
00:17:57.280 [2024-07-13 08:04:48.915435] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization...
00:17:57.280 [2024-07-13 08:04:48.915496] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1951731 ]
00:17:57.280 EAL: No free 2048 kB hugepages reported on node 1
00:17:57.281 [2024-07-13 08:04:48.978450] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:57.538 [2024-07-13 08:04:49.074934] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:17:57.795 Running I/O for 5 seconds...
00:17:59.862 [2024-07-13 08:04:51.544285]
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.862 [2024-07-13 08:04:51.544316] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.862 [2024-07-13 08:04:51.555398] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.862 [2024-07-13 08:04:51.555428] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.862 [2024-07-13 08:04:51.567038] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.862 [2024-07-13 08:04:51.567066] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.862 [2024-07-13 08:04:51.578687] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.862 [2024-07-13 08:04:51.578718] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:59.862 [2024-07-13 08:04:51.591931] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:59.862 [2024-07-13 08:04:51.591959] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.121 [2024-07-13 08:04:51.602295] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.121 [2024-07-13 08:04:51.602326] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.121 [2024-07-13 08:04:51.613576] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.121 [2024-07-13 08:04:51.613606] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.121 [2024-07-13 08:04:51.626806] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.121 [2024-07-13 08:04:51.626837] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.121 [2024-07-13 08:04:51.637860] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.121 [2024-07-13 08:04:51.637904] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.121 [2024-07-13 08:04:51.649055] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.121 [2024-07-13 08:04:51.649082] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.121 [2024-07-13 08:04:51.662076] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.121 [2024-07-13 08:04:51.662104] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.121 [2024-07-13 08:04:51.672517] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.121 [2024-07-13 08:04:51.672547] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.121 [2024-07-13 08:04:51.684076] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.121 [2024-07-13 08:04:51.684103] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.121 [2024-07-13 08:04:51.695168] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.121 [2024-07-13 08:04:51.695196] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.121 [2024-07-13 08:04:51.706343] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.121 [2024-07-13 08:04:51.706375] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.121 [2024-07-13 08:04:51.719119] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.121 [2024-07-13 08:04:51.719147] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.121 [2024-07-13 08:04:51.729812] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.121 [2024-07-13 08:04:51.729843] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.121 [2024-07-13 08:04:51.741538] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.121 [2024-07-13 08:04:51.741569] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.121 [2024-07-13 08:04:51.752900] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.121 [2024-07-13 08:04:51.752928] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.121 [2024-07-13 08:04:51.764075] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.121 [2024-07-13 08:04:51.764103] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.121 [2024-07-13 08:04:51.775309] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.121 [2024-07-13 08:04:51.775336] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.121 [2024-07-13 08:04:51.788574] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.121 [2024-07-13 08:04:51.788605] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.121 [2024-07-13 08:04:51.799528] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.121 [2024-07-13 08:04:51.799559] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.121 [2024-07-13 08:04:51.811169] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.121 [2024-07-13 08:04:51.811197] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.121 [2024-07-13 08:04:51.822596] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.121 [2024-07-13 08:04:51.822627] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.121 [2024-07-13 08:04:51.835863] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.121 [2024-07-13 08:04:51.835898] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.121 [2024-07-13 08:04:51.846821] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.121 [2024-07-13 08:04:51.846852] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.380 [2024-07-13 08:04:51.858280] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.380 [2024-07-13 08:04:51.858310] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.380 [2024-07-13 08:04:51.871715] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.380 [2024-07-13 08:04:51.871746] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.380 [2024-07-13 08:04:51.882407] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.380 [2024-07-13 08:04:51.882438] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.380 [2024-07-13 08:04:51.894695] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.380 [2024-07-13 08:04:51.894722] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.380 [2024-07-13 08:04:51.905935] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.380 [2024-07-13 08:04:51.905963] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.380 [2024-07-13 08:04:51.918699] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.380 [2024-07-13 08:04:51.918730] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.380 [2024-07-13 08:04:51.929045] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.380 [2024-07-13 08:04:51.929072] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.380 [2024-07-13 08:04:51.939975] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.380 [2024-07-13 08:04:51.940003] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.380 [2024-07-13 08:04:51.952518] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.380 [2024-07-13 08:04:51.952549] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.380 [2024-07-13 08:04:51.962559] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.380 [2024-07-13 08:04:51.962590] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.380 [2024-07-13 08:04:51.974095] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.380 [2024-07-13 08:04:51.974126] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.380 [2024-07-13 08:04:51.985084] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.380 [2024-07-13 08:04:51.985111] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.380 [2024-07-13 08:04:51.995804] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.380 [2024-07-13 08:04:51.995832] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.380 [2024-07-13 08:04:52.008761] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.380 [2024-07-13 08:04:52.008792] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.380 [2024-07-13 08:04:52.019307] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.380 [2024-07-13 08:04:52.019338] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.380 [2024-07-13 08:04:52.030912] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.380 [2024-07-13 08:04:52.030941] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.380 [2024-07-13 08:04:52.042200] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.380 [2024-07-13 08:04:52.042229] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.380 [2024-07-13 08:04:52.054041] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.380 [2024-07-13 08:04:52.054069] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.380 [2024-07-13 08:04:52.065813] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.380 [2024-07-13 08:04:52.065845] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.380 [2024-07-13 08:04:52.077302] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.380 [2024-07-13 08:04:52.077330] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.380 [2024-07-13 08:04:52.088327] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.380 [2024-07-13 08:04:52.088359] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.380 [2024-07-13 08:04:52.099814] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.380 [2024-07-13 08:04:52.099845] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.380 [2024-07-13 08:04:52.111516] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.380 [2024-07-13 08:04:52.111547] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.639 [2024-07-13 08:04:52.122512] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.639 [2024-07-13 08:04:52.122544] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.639 [2024-07-13 08:04:52.135854] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.639 [2024-07-13 08:04:52.135892] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.639 [2024-07-13 08:04:52.146665] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.639 [2024-07-13 08:04:52.146704] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.639 [2024-07-13 08:04:52.157534] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.639 [2024-07-13 08:04:52.157564] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.639 [2024-07-13 08:04:52.171101] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.639 [2024-07-13 08:04:52.171130] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.639 [2024-07-13 08:04:52.181469] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.639 [2024-07-13 08:04:52.181500] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.639 [2024-07-13 08:04:52.193090] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.639 [2024-07-13 08:04:52.193118] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.639 [2024-07-13 08:04:52.204554] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.639 [2024-07-13 08:04:52.204584] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.639 [2024-07-13 08:04:52.217903] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.639 [2024-07-13 08:04:52.217932] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.639 [2024-07-13 08:04:52.228351] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.639 [2024-07-13 08:04:52.228382] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.639 [2024-07-13 08:04:52.239480] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.639 [2024-07-13 08:04:52.239511] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.639 [2024-07-13 08:04:52.252369] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.639 [2024-07-13 08:04:52.252398] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.639 [2024-07-13 08:04:52.262571] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.639 [2024-07-13 08:04:52.262601] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.639 [2024-07-13 08:04:52.273756] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.639 [2024-07-13 08:04:52.273786] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.639 [2024-07-13 08:04:52.284831] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.639 [2024-07-13 08:04:52.284859] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.640 [2024-07-13 08:04:52.296098] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.640 [2024-07-13 08:04:52.296126] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.640 [2024-07-13 08:04:52.308980] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.640 [2024-07-13 08:04:52.309009] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.640 [2024-07-13 08:04:52.319007] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.640 [2024-07-13 08:04:52.319035] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.640 [2024-07-13 08:04:52.330325] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.640 [2024-07-13 08:04:52.330357] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.640 [2024-07-13 08:04:52.341751] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.640 [2024-07-13 08:04:52.341781] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.640 [2024-07-13 08:04:52.352899] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.640 [2024-07-13 08:04:52.352943] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.640 [2024-07-13 08:04:52.363973] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.640 [2024-07-13 08:04:52.364001] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.898 [2024-07-13 08:04:52.374827] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.898 [2024-07-13 08:04:52.374858] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.898 [2024-07-13 08:04:52.386099] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.898 [2024-07-13 08:04:52.386127] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.898 [2024-07-13 08:04:52.397526] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.898 [2024-07-13 08:04:52.397557] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.898 [2024-07-13 08:04:52.408826] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.898 [2024-07-13 08:04:52.408857] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.898 [2024-07-13 08:04:52.420152] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.898 [2024-07-13 08:04:52.420180] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.898 [2024-07-13 08:04:52.431330] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.898 [2024-07-13 08:04:52.431362] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.898 [2024-07-13 08:04:52.442234] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.898 [2024-07-13 08:04:52.442262] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.898 [2024-07-13 08:04:52.455269] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.898 [2024-07-13 08:04:52.455297] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.898 [2024-07-13 08:04:52.465480] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.898 [2024-07-13 08:04:52.465511] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.898 [2024-07-13 08:04:52.476784] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.898 [2024-07-13 08:04:52.476815] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.898 [2024-07-13 08:04:52.488093] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.898 [2024-07-13 08:04:52.488121] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.898 [2024-07-13 08:04:52.498961] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.898 [2024-07-13 08:04:52.498996] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.898 [2024-07-13 08:04:52.510313] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.898 [2024-07-13 08:04:52.510344] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.898 [2024-07-13 08:04:52.521459] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.898 [2024-07-13 08:04:52.521490] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.898 [2024-07-13 08:04:52.532730] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.898 [2024-07-13 08:04:52.532761] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.898 [2024-07-13 08:04:52.544222] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.898 [2024-07-13 08:04:52.544251] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.898 [2024-07-13 08:04:52.554927] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.898 [2024-07-13 08:04:52.554955] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.898 [2024-07-13 08:04:52.565762] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.898 [2024-07-13 08:04:52.565789] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.898 [2024-07-13 08:04:52.576416] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.898 [2024-07-13 08:04:52.576447] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.898 [2024-07-13 08:04:52.587672] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.898 [2024-07-13 08:04:52.587702] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.898 [2024-07-13 08:04:52.600562] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.898 [2024-07-13 08:04:52.600593] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.898 [2024-07-13 08:04:52.610946] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.898 [2024-07-13 08:04:52.610974] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:00.898 [2024-07-13 08:04:52.621946] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:00.898 [2024-07-13 08:04:52.621974] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.157 [2024-07-13 08:04:52.633479] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.157 [2024-07-13 08:04:52.633510] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.157 [2024-07-13 08:04:52.644975] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.157 [2024-07-13 08:04:52.645002] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.157 [2024-07-13 08:04:52.655861] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.157 [2024-07-13 08:04:52.655896] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.157 [2024-07-13 08:04:52.667127] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.157 [2024-07-13 08:04:52.667154] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.157 [2024-07-13 08:04:52.678486] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.157 [2024-07-13 08:04:52.678517] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.157 [2024-07-13 08:04:52.689648] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.157 [2024-07-13 08:04:52.689678] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.157 [2024-07-13 08:04:52.700858] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.157 [2024-07-13 08:04:52.700893] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.157 [2024-07-13 08:04:52.711563] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.157 [2024-07-13 08:04:52.711602] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.157 [2024-07-13 08:04:52.722275] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.157 [2024-07-13 08:04:52.722303] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.157 [2024-07-13 08:04:52.735413] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.157 [2024-07-13 08:04:52.735444] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.157 [2024-07-13 08:04:52.745997] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.157 [2024-07-13 08:04:52.746024] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.157 [2024-07-13 08:04:52.756998] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.157 [2024-07-13 08:04:52.757026] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.157 [2024-07-13 08:04:52.769799] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.157 [2024-07-13 08:04:52.769830] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.157 [2024-07-13 08:04:52.780106] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.157 [2024-07-13 08:04:52.780133] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.157 [2024-07-13 08:04:52.791910] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.157 [2024-07-13 08:04:52.791938] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.157 [2024-07-13 08:04:52.802842] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.157 [2024-07-13 08:04:52.802877] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.157 [2024-07-13 08:04:52.815976] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.157 [2024-07-13 08:04:52.816003] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.157 [2024-07-13 08:04:52.826881] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.157 [2024-07-13 08:04:52.826908] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.157 [2024-07-13 08:04:52.837876] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.157 [2024-07-13 08:04:52.837903] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.157 [2024-07-13 08:04:52.849219] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.157 [2024-07-13 08:04:52.849249] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.157 [2024-07-13 08:04:52.860428] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.157 [2024-07-13 08:04:52.860459] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.157 [2024-07-13 08:04:52.873563] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.157 [2024-07-13 08:04:52.873594] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.157 [2024-07-13 08:04:52.883837] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.157 [2024-07-13 08:04:52.883876] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.414 [2024-07-13 08:04:52.895105] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.414 [2024-07-13 08:04:52.895132] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.414 [2024-07-13 08:04:52.906650] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.414 [2024-07-13 08:04:52.906681] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.414 [2024-07-13 08:04:52.919555] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.414 [2024-07-13 08:04:52.919586] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.414 [2024-07-13 08:04:52.929636] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.414 [2024-07-13 08:04:52.929674] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.414 [2024-07-13 08:04:52.940837] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.414 [2024-07-13 08:04:52.940873] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.414 [2024-07-13 08:04:52.953923] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.414 [2024-07-13 08:04:52.953951] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.414 [2024-07-13 08:04:52.964287] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.414 [2024-07-13 08:04:52.964331] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.414 [2024-07-13 08:04:52.975892] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.414 [2024-07-13 08:04:52.975919] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.415 [2024-07-13 08:04:52.987497] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.415 [2024-07-13 08:04:52.987528] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.415 [2024-07-13 08:04:52.998955] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.415 [2024-07-13 08:04:52.998983] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.415 [2024-07-13 08:04:53.009925] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.415 [2024-07-13 08:04:53.009952] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.415 [2024-07-13 08:04:53.022937] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.415 [2024-07-13 08:04:53.022965] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.415 [2024-07-13 08:04:53.033108] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.415 [2024-07-13 08:04:53.033135] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.415 [2024-07-13 08:04:53.045093] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.415 [2024-07-13 08:04:53.045121] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.415 [2024-07-13 08:04:53.056118] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.415 [2024-07-13 08:04:53.056145] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.415 [2024-07-13 08:04:53.066890] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.415 [2024-07-13 08:04:53.066917] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.415 [2024-07-13 08:04:53.078096] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.415 [2024-07-13 08:04:53.078124] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.415 [2024-07-13 08:04:53.091026] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.415 [2024-07-13 08:04:53.091053] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.415 [2024-07-13 08:04:53.100968] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.415 [2024-07-13 08:04:53.100995] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.415 [2024-07-13 08:04:53.112261] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.415 [2024-07-13 08:04:53.112292] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.415 [2024-07-13 08:04:53.125388] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.415 [2024-07-13 08:04:53.125416] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.415 [2024-07-13 08:04:53.135689] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.415 [2024-07-13 08:04:53.135720] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.415 [2024-07-13 08:04:53.146847] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.415 [2024-07-13 08:04:53.146891] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.672 [2024-07-13 08:04:53.159808] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.672 [2024-07-13 08:04:53.159836] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.672 [2024-07-13 08:04:53.170470] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.672 [2024-07-13 08:04:53.170501] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.672 [2024-07-13 08:04:53.182153] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.672 [2024-07-13 08:04:53.182181] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.672 [2024-07-13 08:04:53.193213] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.672 [2024-07-13 08:04:53.193241] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.672 [2024-07-13 08:04:53.204115] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.672 [2024-07-13 08:04:53.204144] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.672 [2024-07-13 08:04:53.215260] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.672 [2024-07-13 08:04:53.215288] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.672 [2024-07-13 08:04:53.226392] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.672 [2024-07-13 08:04:53.226425] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.672 [2024-07-13 08:04:53.237393] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.672 [2024-07-13 08:04:53.237424] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.672 [2024-07-13 08:04:53.250360] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.672 [2024-07-13 08:04:53.250391] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.672 [2024-07-13 08:04:53.260516] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.672 [2024-07-13 08:04:53.260544] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.672 [2024-07-13 08:04:53.271211] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.672 [2024-07-13 08:04:53.271239] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.672 [2024-07-13 08:04:53.284039] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.672 [2024-07-13 08:04:53.284067] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.672 [2024-07-13 08:04:53.294488] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.672 [2024-07-13 08:04:53.294516] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.672 [2024-07-13 08:04:53.305170] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.672 [2024-07-13 08:04:53.305199] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.672 [2024-07-13 08:04:53.315657] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.672 [2024-07-13 08:04:53.315685] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.672 [2024-07-13 08:04:53.326675] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.672 [2024-07-13 08:04:53.326703] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.672 [2024-07-13 08:04:53.337609] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.672 [2024-07-13 08:04:53.337637] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.672 [2024-07-13 08:04:53.348278] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.672 [2024-07-13 08:04:53.348306] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.672 [2024-07-13 08:04:53.359246] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.672 [2024-07-13 08:04:53.359282] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.672 [2024-07-13 08:04:53.371986] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.672 [2024-07-13 08:04:53.372013] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.672 [2024-07-13 08:04:53.382091] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.672 [2024-07-13 08:04:53.382119] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.672 [2024-07-13 08:04:53.392693] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.672 [2024-07-13 08:04:53.392721] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.672 [2024-07-13 08:04:53.403681] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.672 [2024-07-13 08:04:53.403708] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.930 [2024-07-13 08:04:53.415256] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.930 [2024-07-13 08:04:53.415283] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.930 [2024-07-13 08:04:53.426451] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.930 [2024-07-13 08:04:53.426479] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.930 [2024-07-13 08:04:53.437003] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.930 [2024-07-13 08:04:53.437031] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.930 [2024-07-13 08:04:53.447668] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.930 [2024-07-13 08:04:53.447696] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.930 [2024-07-13 08:04:53.458494] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.930 [2024-07-13 08:04:53.458522] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.930 [2024-07-13 08:04:53.469141] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.930 [2024-07-13 08:04:53.469168] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.930 [2024-07-13 08:04:53.479967] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.930 [2024-07-13 08:04:53.479995] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.930 [2024-07-13 08:04:53.492850] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.930 [2024-07-13 08:04:53.492888] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.931 [2024-07-13 08:04:53.503302] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.931 [2024-07-13 08:04:53.503329] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.931 [2024-07-13 08:04:53.514248] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.931 [2024-07-13 08:04:53.514275] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.931 [2024-07-13 08:04:53.526970] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.931 [2024-07-13 08:04:53.527004] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.931 [2024-07-13 08:04:53.536769] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.931 [2024-07-13 08:04:53.536796] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.931 [2024-07-13 08:04:53.547562] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.931 [2024-07-13 08:04:53.547590] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.931 [2024-07-13 08:04:53.560074] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.931 [2024-07-13 08:04:53.560101] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.931 [2024-07-13 08:04:53.570398] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.931 [2024-07-13 08:04:53.570426] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.931 [2024-07-13 08:04:53.581447] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.931 [2024-07-13 08:04:53.581475] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.931 [2024-07-13 08:04:53.594111] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.931 [2024-07-13 08:04:53.594139] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.931 [2024-07-13 08:04:53.604230] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.931 [2024-07-13 08:04:53.604258] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.931 [2024-07-13 08:04:53.615116] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.931 [2024-07-13 08:04:53.615143] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.931 [2024-07-13 08:04:53.627806] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.931 [2024-07-13 08:04:53.627834] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.931 [2024-07-13 08:04:53.638077] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.931 [2024-07-13 08:04:53.638105] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.931 [2024-07-13 08:04:53.649207] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.931 [2024-07-13 08:04:53.649235] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:01.931 [2024-07-13 08:04:53.661805] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:01.931 [2024-07-13 08:04:53.661833] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.189 [2024-07-13 08:04:53.671907] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.189 [2024-07-13 08:04:53.671935] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.189 [2024-07-13 08:04:53.682897] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.190 [2024-07-13 08:04:53.682924] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.190 [2024-07-13 08:04:53.695694] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.190 [2024-07-13 08:04:53.695722] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.190 [2024-07-13 08:04:53.705677] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.190 [2024-07-13 08:04:53.705704] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.190 [2024-07-13 08:04:53.716342] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.190 [2024-07-13 08:04:53.716370] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.190 [2024-07-13 08:04:53.727054] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.190 [2024-07-13 08:04:53.727082] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.190 [2024-07-13 08:04:53.738133] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.190 [2024-07-13 08:04:53.738161] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.190 [2024-07-13 08:04:53.755722] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.190 [2024-07-13 08:04:53.755752] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.190 [2024-07-13 08:04:53.766202] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.190 [2024-07-13 08:04:53.766230] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.190 [2024-07-13 08:04:53.777205] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.190 [2024-07-13 08:04:53.777233] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.190 [2024-07-13 08:04:53.790189] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.190 [2024-07-13 08:04:53.790217] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.190 [2024-07-13 08:04:53.800630] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.190 [2024-07-13 08:04:53.800657] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.190 [2024-07-13 08:04:53.811297] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.190 [2024-07-13 08:04:53.811325] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.190 [2024-07-13 08:04:53.822327] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.190 [2024-07-13 08:04:53.822355] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.190 [2024-07-13 08:04:53.832900] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.190 [2024-07-13 08:04:53.832927] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.190 [2024-07-13 08:04:53.843315] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.190 [2024-07-13 08:04:53.843342] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.190 [2024-07-13 08:04:53.853992] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.450 [2024-07-13 08:04:54.176873] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.709 [2024-07-13 08:04:54.186953] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.709 [2024-07-13 08:04:54.186980] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.709 [2024-07-13 08:04:54.197778] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.709 [2024-07-13 08:04:54.197813] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.709 [2024-07-13 08:04:54.210685] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.709 [2024-07-13 08:04:54.210717] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.709 [2024-07-13 08:04:54.221575] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.709 [2024-07-13 08:04:54.221607] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.709 [2024-07-13 08:04:54.232287] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.709 [2024-07-13 08:04:54.232314] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.709 [2024-07-13 08:04:54.244768] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.709 [2024-07-13 08:04:54.244795] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.709 [2024-07-13 08:04:54.254378] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.709 [2024-07-13 08:04:54.254406] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.709 [2024-07-13 08:04:54.265532] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.709 [2024-07-13 08:04:54.265560] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.709 [2024-07-13 08:04:54.275847] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.709 [2024-07-13 08:04:54.275885] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.709 [2024-07-13 08:04:54.286320] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.709 [2024-07-13 08:04:54.286349] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.709 [2024-07-13 08:04:54.297049] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.709 [2024-07-13 08:04:54.297076] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.709 [2024-07-13 08:04:54.309799] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.709 [2024-07-13 08:04:54.309827] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.709 [2024-07-13 08:04:54.320103] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.709 [2024-07-13 08:04:54.320130] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.709 [2024-07-13 08:04:54.326396] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.709 [2024-07-13 08:04:54.326423] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.709 00:18:02.709 Latency(us) 00:18:02.709 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:02.709 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:18:02.709 Nvme1n1 : 5.01 11638.97 90.93 0.00 0.00 10982.97 4563.25 19320.98 00:18:02.709 =================================================================================================================== 00:18:02.709 Total : 11638.97 90.93 0.00 0.00 10982.97 4563.25 19320.98 00:18:02.709 [2024-07-13 08:04:54.334326] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.710 [2024-07-13 08:04:54.334352] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.710 [2024-07-13 08:04:54.342378] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.710 [2024-07-13 08:04:54.342402] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.710 [2024-07-13 08:04:54.350459] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.710 [2024-07-13 08:04:54.350505] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.710 [2024-07-13 08:04:54.358479] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.710 [2024-07-13 08:04:54.358536] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.710 [2024-07-13 08:04:54.366491] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.710 [2024-07-13 08:04:54.366535] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.710 [2024-07-13 08:04:54.374517] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.710 [2024-07-13 08:04:54.374560] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.710 [2024-07-13 08:04:54.382540] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.710 [2024-07-13 08:04:54.382585] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.710 [2024-07-13 08:04:54.390568] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.710 [2024-07-13 08:04:54.390611] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.710 [2024-07-13 08:04:54.398585] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.710 [2024-07-13 08:04:54.398627] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.710 [2024-07-13 08:04:54.406610] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.710 [2024-07-13 08:04:54.406656] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.710 [2024-07-13 08:04:54.414627] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.710 [2024-07-13 08:04:54.414671] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.710 [2024-07-13 08:04:54.422655] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:02.710 [2024-07-13 08:04:54.422701] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.710 [2024-07-13 08:04:54.430682] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:02.969 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1951731) - No such process 00:18:02.969 08:04:54 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1951731 00:18:02.969 08:04:54 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:02.969 08:04:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.969 08:04:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:02.969 08:04:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.969 08:04:54 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:18:02.969 08:04:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.969 08:04:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:02.969 delay0 00:18:02.969 08:04:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.969 08:04:54 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:18:02.969 08:04:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.969 08:04:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:02.969 08:04:54 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.969 08:04:54 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:18:02.969 EAL: No free 2048 kB hugepages reported on node 1 00:18:02.969 [2024-07-13 08:04:54.697990] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:18:09.534 Initializing NVMe Controllers 00:18:09.534 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:09.534 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:09.534 Initialization complete. Launching workers. 
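The abort stage above boils down to the following sequence (a sketch, condensed from the rpc_cmd calls logged above and assuming the in-tree scripts/rpc.py client talking to the default /var/tmp/spdk.sock; the 1000000 us latencies on delay0 keep commands in flight long enough for the abort example to have something to cancel):

  scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
  build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'

The per-namespace abort counters it produces are reported below.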
00:18:09.534 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 808 00:18:09.534 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 1082, failed to submit 46 00:18:09.534 success 933, unsuccess 149, failed 0 00:18:09.534 08:05:00 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:18:09.534 08:05:00 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:18:09.534 08:05:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:09.534 08:05:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:18:09.534 08:05:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:09.534 08:05:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:18:09.534 08:05:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:09.534 08:05:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:09.534 rmmod nvme_tcp 00:18:09.534 rmmod nvme_fabrics 00:18:09.534 rmmod nvme_keyring 00:18:09.534 08:05:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:09.534 08:05:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:18:09.534 08:05:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:18:09.534 08:05:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 1950399 ']' 00:18:09.534 08:05:00 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 1950399 00:18:09.534 08:05:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 1950399 ']' 00:18:09.534 08:05:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 1950399 00:18:09.534 08:05:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:18:09.534 08:05:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:09.534 08:05:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1950399 00:18:09.534 08:05:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:09.534 08:05:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:09.534 08:05:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1950399' 00:18:09.534 killing process with pid 1950399 00:18:09.534 08:05:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 1950399 00:18:09.534 08:05:00 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 1950399 00:18:09.534 08:05:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:09.534 08:05:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:09.534 08:05:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:09.534 08:05:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:09.534 08:05:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:09.534 08:05:01 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:09.534 08:05:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:09.534 08:05:01 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:12.094 08:05:03 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:12.094 00:18:12.094 real 0m27.711s 00:18:12.094 user 0m40.509s 00:18:12.094 sys 0m8.484s 00:18:12.094 08:05:03 nvmf_tcp.nvmf_zcopy -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:18:12.094 08:05:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:12.094 ************************************ 00:18:12.094 END TEST nvmf_zcopy 00:18:12.094 ************************************ 00:18:12.094 08:05:03 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:12.094 08:05:03 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:18:12.094 08:05:03 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:12.094 08:05:03 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:12.094 08:05:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:12.094 ************************************ 00:18:12.094 START TEST nvmf_nmic 00:18:12.094 ************************************ 00:18:12.094 08:05:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:18:12.094 * Looking for test storage... 00:18:12.094 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:12.094 08:05:03 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:12.094 08:05:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:18:12.094 08:05:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:12.094 08:05:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:12.094 08:05:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:12.094 08:05:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:12.094 08:05:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:12.094 08:05:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:12.094 08:05:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:12.094 08:05:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:12.094 08:05:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:12.094 08:05:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:12.094 08:05:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:12.094 08:05:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:12.094 08:05:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:12.094 08:05:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:12.094 08:05:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:12.094 08:05:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:12.094 08:05:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:12.094 08:05:03 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:12.094 08:05:03 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:12.094 08:05:03 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:12.094 08:05:03 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:12.094 08:05:03 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:12.094 08:05:03 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:12.094 08:05:03 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:18:12.094 08:05:03 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:12.094 08:05:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:18:12.094 08:05:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:12.094 08:05:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:12.094 08:05:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:12.094 08:05:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:12.094 08:05:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:12.094 08:05:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:12.094 08:05:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:12.094 08:05:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:12.094 08:05:03 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:12.094 08:05:03 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:12.094 08:05:03 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:18:12.094 08:05:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:12.094 08:05:03 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:12.094 08:05:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:12.094 08:05:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:12.094 08:05:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:12.094 08:05:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:12.094 08:05:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:12.094 08:05:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:12.094 08:05:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:12.094 08:05:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:12.094 08:05:03 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:18:12.094 08:05:03 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:13.997 08:05:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:13.998 08:05:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:18:13.998 08:05:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:13.998 08:05:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:13.998 08:05:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:13.998 08:05:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:13.998 08:05:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:13.998 08:05:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:18:13.998 08:05:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:13.998 08:05:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:18:13.998 08:05:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:18:13.998 08:05:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:18:13.998 08:05:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:18:13.998 08:05:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:18:13.998 08:05:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:18:13.998 08:05:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:13.998 08:05:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:13.998 08:05:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:13.998 08:05:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:13.998 08:05:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:13.998 08:05:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:13.998 08:05:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:13.998 08:05:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:13.998 08:05:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:13.998 08:05:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:13.998 08:05:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:13.998 08:05:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:18:13.998 08:05:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:13.998 08:05:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:13.998 08:05:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:13.998 08:05:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:13.998 08:05:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:13.998 08:05:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:13.998 08:05:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:13.998 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:13.998 08:05:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:13.998 08:05:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:13.998 08:05:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:13.998 08:05:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:13.998 08:05:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:13.998 08:05:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:13.998 08:05:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:13.998 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:13.998 08:05:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:13.998 08:05:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:13.998 08:05:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:13.998 08:05:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:13.998 08:05:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:13.998 08:05:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:13.998 08:05:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:13.998 08:05:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:13.998 08:05:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:13.998 08:05:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:13.998 08:05:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:13.998 08:05:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:13.998 08:05:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:13.998 08:05:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:13.998 08:05:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:13.998 08:05:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:13.998 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:13.998 08:05:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:13.998 08:05:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:13.998 08:05:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:13.998 08:05:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:13.998 08:05:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:13.998 08:05:05 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:18:13.998 08:05:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:13.998 08:05:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:13.998 08:05:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:13.998 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:13.998 08:05:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:13.998 08:05:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:13.998 08:05:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:18:13.998 08:05:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:13.998 08:05:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:13.998 08:05:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:13.998 08:05:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:13.998 08:05:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:13.998 08:05:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:13.998 08:05:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:13.998 08:05:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:13.998 08:05:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:13.998 08:05:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:13.998 08:05:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:13.998 08:05:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:13.998 08:05:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:13.998 08:05:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:13.998 08:05:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:13.998 08:05:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:13.998 08:05:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:13.998 08:05:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:13.998 08:05:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:13.998 08:05:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:13.998 08:05:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:13.998 08:05:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:13.998 08:05:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:13.998 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:13.998 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.141 ms 00:18:13.998 00:18:13.998 --- 10.0.0.2 ping statistics --- 00:18:13.998 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:13.998 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:18:13.998 08:05:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:13.998 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:13.998 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms 00:18:13.998 00:18:13.998 --- 10.0.0.1 ping statistics --- 00:18:13.998 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:13.998 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:18:13.998 08:05:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:13.998 08:05:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:18:13.998 08:05:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:13.998 08:05:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:13.998 08:05:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:13.998 08:05:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:13.998 08:05:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:13.998 08:05:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:13.998 08:05:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:13.998 08:05:05 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:18:13.998 08:05:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:13.998 08:05:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:13.998 08:05:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:13.998 08:05:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=1954990 00:18:13.998 08:05:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:13.998 08:05:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 1954990 00:18:13.998 08:05:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 1954990 ']' 00:18:13.998 08:05:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:13.998 08:05:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:13.998 08:05:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:13.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:13.998 08:05:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:13.998 08:05:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:13.998 [2024-07-13 08:05:05.612885] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:18:13.998 [2024-07-13 08:05:05.612986] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:13.998 EAL: No free 2048 kB hugepages reported on node 1 00:18:13.998 [2024-07-13 08:05:05.684276] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:14.257 [2024-07-13 08:05:05.782359] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:14.257 [2024-07-13 08:05:05.782431] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:14.257 [2024-07-13 08:05:05.782448] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:14.257 [2024-07-13 08:05:05.782462] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:14.257 [2024-07-13 08:05:05.782475] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:14.257 [2024-07-13 08:05:05.782570] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:14.257 [2024-07-13 08:05:05.785900] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:14.257 [2024-07-13 08:05:05.785946] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:14.257 [2024-07-13 08:05:05.785941] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:18:14.257 08:05:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:14.257 08:05:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:18:14.257 08:05:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:14.257 08:05:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:14.257 08:05:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:14.257 08:05:05 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:14.257 08:05:05 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:14.257 08:05:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:14.257 08:05:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:14.257 [2024-07-13 08:05:05.937703] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:14.257 08:05:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:14.257 08:05:05 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:14.257 08:05:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:14.257 08:05:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:14.257 Malloc0 00:18:14.257 08:05:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:14.257 08:05:05 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:14.257 08:05:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:14.257 08:05:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:14.257 08:05:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:14.257 08:05:05 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:14.257 08:05:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:14.257 08:05:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:14.257 08:05:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:14.257 08:05:05 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:14.257 08:05:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:14.257 08:05:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:14.257 [2024-07-13 08:05:05.989000] tcp.c: 967:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:14.515 08:05:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:14.515 08:05:05 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:18:14.515 test case1: single bdev can't be used in multiple subsystems 00:18:14.515 08:05:05 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:18:14.515 08:05:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:14.515 08:05:05 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:14.515 08:05:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:14.515 08:05:06 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:18:14.515 08:05:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:14.515 08:05:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:14.515 08:05:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:14.515 08:05:06 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:18:14.515 08:05:06 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:18:14.515 08:05:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:14.515 08:05:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:14.515 [2024-07-13 08:05:06.012845] bdev.c:8078:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:18:14.515 [2024-07-13 08:05:06.012898] subsystem.c:2083:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:18:14.515 [2024-07-13 08:05:06.012928] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:14.515 request: 00:18:14.515 { 00:18:14.515 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:18:14.515 "namespace": { 00:18:14.515 "bdev_name": "Malloc0", 00:18:14.515 "no_auto_visible": false 00:18:14.515 }, 00:18:14.515 "method": "nvmf_subsystem_add_ns", 00:18:14.515 "req_id": 1 00:18:14.515 } 00:18:14.515 Got JSON-RPC error response 00:18:14.515 response: 00:18:14.515 { 00:18:14.515 "code": -32602, 00:18:14.515 "message": "Invalid parameters" 00:18:14.515 } 00:18:14.515 08:05:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:18:14.515 08:05:06 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:18:14.515 08:05:06 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:18:14.515 08:05:06 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:18:14.515 Adding namespace failed - expected result. 
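Outside the harness, test case 1 reduces to the following (a sketch assuming a running nvmf_tgt reachable over the default /var/tmp/spdk.sock and the in-tree scripts/rpc.py; the second add_ns is rejected exactly as in the bdev_open error above, because Malloc0 is already claimed exclusive_write by cnode1):

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # first claim of Malloc0 succeeds
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0   # fails: bdev already claimed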
00:18:14.515 08:05:06 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:18:14.515 test case2: host connect to nvmf target in multiple paths 00:18:14.515 08:05:06 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:14.515 08:05:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:14.515 08:05:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:14.515 [2024-07-13 08:05:06.020982] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:14.515 08:05:06 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:14.515 08:05:06 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:15.080 08:05:06 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:18:16.012 08:05:07 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:18:16.012 08:05:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:18:16.012 08:05:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:18:16.012 08:05:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:18:16.012 08:05:07 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:18:17.907 08:05:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:18:17.907 08:05:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:18:17.907 08:05:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:18:17.907 08:05:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:18:17.907 08:05:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:18:17.907 08:05:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:18:17.907 08:05:09 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:18:17.907 [global] 00:18:17.907 thread=1 00:18:17.907 invalidate=1 00:18:17.907 rw=write 00:18:17.907 time_based=1 00:18:17.907 runtime=1 00:18:17.907 ioengine=libaio 00:18:17.907 direct=1 00:18:17.907 bs=4096 00:18:17.907 iodepth=1 00:18:17.907 norandommap=0 00:18:17.907 numjobs=1 00:18:17.907 00:18:17.907 verify_dump=1 00:18:17.907 verify_backlog=512 00:18:17.907 verify_state_save=0 00:18:17.907 do_verify=1 00:18:17.907 verify=crc32c-intel 00:18:17.907 [job0] 00:18:17.907 filename=/dev/nvme0n1 00:18:17.907 Could not set queue depth (nvme0n1) 00:18:17.907 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:17.907 fio-3.35 00:18:17.907 Starting 1 thread 00:18:19.276 00:18:19.276 job0: (groupid=0, jobs=1): err= 0: pid=1955626: Sat Jul 13 08:05:10 2024 00:18:19.276 read: IOPS=26, BW=107KiB/s (110kB/s)(108KiB/1006msec) 00:18:19.276 slat (nsec): min=14836, max=42578, avg=25581.00, stdev=8857.97 00:18:19.276 
clat (usec): min=289, max=41373, avg=31935.85, stdev=17211.98 00:18:19.276 lat (usec): min=306, max=41393, avg=31961.44, stdev=17214.36 00:18:19.276 clat percentiles (usec): 00:18:19.276 | 1.00th=[ 289], 5.00th=[ 318], 10.00th=[ 318], 20.00th=[ 396], 00:18:19.276 | 30.00th=[40633], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:18:19.276 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:18:19.276 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:18:19.276 | 99.99th=[41157] 00:18:19.276 write: IOPS=508, BW=2036KiB/s (2085kB/s)(2048KiB/1006msec); 0 zone resets 00:18:19.276 slat (usec): min=6, max=32249, avg=81.48, stdev=1424.43 00:18:19.276 clat (usec): min=158, max=344, avg=193.88, stdev=23.00 00:18:19.276 lat (usec): min=171, max=32541, avg=275.36, stdev=1428.99 00:18:19.276 clat percentiles (usec): 00:18:19.276 | 1.00th=[ 167], 5.00th=[ 172], 10.00th=[ 174], 20.00th=[ 180], 00:18:19.276 | 30.00th=[ 184], 40.00th=[ 186], 50.00th=[ 190], 60.00th=[ 192], 00:18:19.276 | 70.00th=[ 196], 80.00th=[ 198], 90.00th=[ 225], 95.00th=[ 249], 00:18:19.276 | 99.00th=[ 269], 99.50th=[ 277], 99.90th=[ 347], 99.95th=[ 347], 00:18:19.276 | 99.99th=[ 347] 00:18:19.276 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:18:19.276 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:19.276 lat (usec) : 250=90.91%, 500=5.19% 00:18:19.276 lat (msec) : 50=3.90% 00:18:19.276 cpu : usr=0.40%, sys=0.90%, ctx=542, majf=0, minf=2 00:18:19.276 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:19.276 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:19.276 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:19.276 issued rwts: total=27,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:19.276 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:19.276 00:18:19.276 Run status group 0 (all jobs): 00:18:19.276 READ: bw=107KiB/s (110kB/s), 107KiB/s-107KiB/s (110kB/s-110kB/s), io=108KiB (111kB), run=1006-1006msec 00:18:19.276 WRITE: bw=2036KiB/s (2085kB/s), 2036KiB/s-2036KiB/s (2085kB/s-2085kB/s), io=2048KiB (2097kB), run=1006-1006msec 00:18:19.276 00:18:19.276 Disk stats (read/write): 00:18:19.276 nvme0n1: ios=48/512, merge=0/0, ticks=1704/99, in_queue=1803, util=98.90% 00:18:19.276 08:05:10 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:19.276 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:18:19.276 08:05:10 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:19.276 08:05:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:18:19.276 08:05:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:18:19.276 08:05:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:19.276 08:05:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:18:19.276 08:05:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:19.276 08:05:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:18:19.276 08:05:10 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:18:19.276 08:05:10 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:18:19.276 08:05:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:19.276 08:05:10 
nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:18:19.276 08:05:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:19.276 08:05:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:18:19.276 08:05:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:19.276 08:05:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:19.276 rmmod nvme_tcp 00:18:19.276 rmmod nvme_fabrics 00:18:19.276 rmmod nvme_keyring 00:18:19.276 08:05:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:19.276 08:05:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:18:19.276 08:05:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:18:19.276 08:05:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 1954990 ']' 00:18:19.276 08:05:10 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 1954990 00:18:19.276 08:05:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 1954990 ']' 00:18:19.276 08:05:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 1954990 00:18:19.276 08:05:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:18:19.276 08:05:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:19.276 08:05:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1954990 00:18:19.276 08:05:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:19.276 08:05:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:19.276 08:05:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1954990' 00:18:19.276 killing process with pid 1954990 00:18:19.276 08:05:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 1954990 00:18:19.276 08:05:10 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 1954990 00:18:19.533 08:05:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:19.533 08:05:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:19.533 08:05:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:19.533 08:05:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:19.533 08:05:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:19.533 08:05:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:19.533 08:05:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:19.533 08:05:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:22.081 08:05:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:22.081 00:18:22.081 real 0m9.901s 00:18:22.081 user 0m22.564s 00:18:22.081 sys 0m2.302s 00:18:22.081 08:05:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:22.081 08:05:13 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:22.081 ************************************ 00:18:22.081 END TEST nvmf_nmic 00:18:22.081 ************************************ 00:18:22.081 08:05:13 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:22.081 08:05:13 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:18:22.081 08:05:13 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 
']' 00:18:22.081 08:05:13 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:22.081 08:05:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:22.081 ************************************ 00:18:22.081 START TEST nvmf_fio_target 00:18:22.081 ************************************ 00:18:22.081 08:05:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:18:22.081 * Looking for test storage... 00:18:22.081 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:22.081 08:05:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:22.081 08:05:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:18:22.081 08:05:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:22.081 08:05:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:22.081 08:05:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:22.081 08:05:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:22.081 08:05:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:22.081 08:05:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:22.081 08:05:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:22.081 08:05:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:22.081 08:05:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:22.081 08:05:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:22.081 08:05:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:22.081 08:05:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:22.081 08:05:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:22.081 08:05:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:22.081 08:05:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:22.081 08:05:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:22.081 08:05:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:22.081 08:05:13 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:22.081 08:05:13 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:22.081 08:05:13 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:22.081 08:05:13 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:[the same /opt/golangci, /opt/protoc and /opt/go toolchain triple repeated; duplicates elided]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:22.081 08:05:13 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=[same value with /opt/go rotated to the front; duplicate elided] 00:18:22.081 08:05:13 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=[same value with /opt/protoc rotated to the front; duplicate elided] 00:18:22.081 08:05:13 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:18:22.081 08:05:13 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo [final PATH value, identical to the @4 value; duplicate elided] 00:18:22.081 08:05:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:18:22.081 08:05:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:22.081 08:05:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:22.081 08:05:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:22.081 08:05:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:22.081 08:05:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:22.081 08:05:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:22.081 08:05:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:22.081 08:05:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:22.081 08:05:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:22.081 08:05:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:22.081 08:05:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- #
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:22.081 08:05:13 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:18:22.081 08:05:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:22.081 08:05:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:22.081 08:05:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:22.081 08:05:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:22.081 08:05:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:22.081 08:05:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:22.081 08:05:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:22.081 08:05:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:22.081 08:05:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:22.081 08:05:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:22.081 08:05:13 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:18:22.081 08:05:13 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.983 08:05:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:23.983 08:05:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:18:23.983 08:05:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:23.983 08:05:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:23.983 08:05:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:23.983 08:05:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:23.983 08:05:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:23.983 08:05:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:18:23.983 08:05:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:23.983 08:05:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:18:23.983 08:05:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:18:23.983 08:05:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:18:23.983 08:05:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:18:23.983 08:05:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:18:23.983 08:05:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:18:23.983 08:05:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:23.983 08:05:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:23.983 08:05:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:23.983 08:05:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:23.983 08:05:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:23.983 08:05:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:23.983 08:05:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:23.983 08:05:15 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:23.983 08:05:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:23.983 08:05:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:23.983 08:05:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:23.983 08:05:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:23.983 08:05:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:23.983 08:05:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:23.983 08:05:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:23.983 08:05:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:23.983 08:05:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:23.983 08:05:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:23.983 08:05:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:23.983 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:23.983 08:05:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:23.983 08:05:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:23.983 08:05:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:23.983 08:05:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:23.983 08:05:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:23.983 08:05:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:23.983 08:05:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:23.983 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:23.983 08:05:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:23.983 08:05:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:23.983 08:05:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:23.983 08:05:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:23.983 08:05:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:23.983 08:05:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:23.983 08:05:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:23.983 08:05:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:23.983 08:05:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:23.983 08:05:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:23.983 08:05:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:23.983 08:05:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:23.983 08:05:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:23.983 08:05:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:23.983 08:05:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:23.983 08:05:15 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:23.983 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:23.983 08:05:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:23.983 08:05:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:23.983 08:05:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:23.983 08:05:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:23.983 08:05:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:23.983 08:05:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:23.983 08:05:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:23.983 08:05:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:23.983 08:05:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:23.983 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:23.983 08:05:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:23.983 08:05:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:23.983 08:05:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:18:23.983 08:05:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:23.983 08:05:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:23.983 08:05:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:23.983 08:05:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:23.983 08:05:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:23.983 08:05:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:23.983 08:05:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:23.983 08:05:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:23.983 08:05:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:23.983 08:05:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:23.983 08:05:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:23.983 08:05:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:23.983 08:05:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:23.983 08:05:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:23.983 08:05:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:23.983 08:05:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:23.983 08:05:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:23.983 08:05:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:23.983 08:05:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:23.983 08:05:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:18:23.983 08:05:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:23.983 08:05:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:23.983 08:05:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:23.983 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:23.984 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.209 ms 00:18:23.984 00:18:23.984 --- 10.0.0.2 ping statistics --- 00:18:23.984 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:23.984 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:18:23.984 08:05:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:23.984 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:23.984 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms 00:18:23.984 00:18:23.984 --- 10.0.0.1 ping statistics --- 00:18:23.984 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:23.984 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:18:23.984 08:05:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:23.984 08:05:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:18:23.984 08:05:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:23.984 08:05:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:23.984 08:05:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:23.984 08:05:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:23.984 08:05:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:23.984 08:05:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:23.984 08:05:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:23.984 08:05:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:18:23.984 08:05:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:23.984 08:05:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:23.984 08:05:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.984 08:05:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=1957703 00:18:23.984 08:05:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 1957703 00:18:23.984 08:05:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 1957703 ']' 00:18:23.984 08:05:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:23.984 08:05:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:23.984 08:05:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:23.984 08:05:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:23.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
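(The netns plumbing nvmftestinit traced above reduces to the short topology script below. This is a condensed sketch, not the literal nvmf/common.sh code: the namespace name, the cvl_0_0/cvl_0_1 interface names, the 10.0.0.0/24 addresses and the iptables rule are taken verbatim from this run's trace; the flush steps, error handling and exact ordering are simplified.)

    # Move the target-side ice port into its own network namespace; the
    # initiator port stays in the root namespace, so NVMe/TCP traffic
    # really crosses the two physical ports instead of loopback.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # admit NVMe/TCP on the initiator port
    ping -c 1 10.0.0.2                                                 # sanity: initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # sanity: target -> initiator

(nvmf_tgt itself is then launched inside the namespace, via ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0xF as traced just above, which is why the EAL and reactor messages that follow come from the namespaced process.)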
00:18:23.984 08:05:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:23.984 08:05:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.984 [2024-07-13 08:05:15.657473] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:18:23.984 [2024-07-13 08:05:15.657561] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:23.984 EAL: No free 2048 kB hugepages reported on node 1 00:18:24.242 [2024-07-13 08:05:15.727745] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:24.242 [2024-07-13 08:05:15.823414] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:24.242 [2024-07-13 08:05:15.823470] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:24.242 [2024-07-13 08:05:15.823497] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:24.242 [2024-07-13 08:05:15.823510] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:24.242 [2024-07-13 08:05:15.823522] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:24.242 [2024-07-13 08:05:15.823604] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:24.242 [2024-07-13 08:05:15.823658] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:24.242 [2024-07-13 08:05:15.823713] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:18:24.242 [2024-07-13 08:05:15.823715] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:24.242 08:05:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:24.242 08:05:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:18:24.242 08:05:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:24.242 08:05:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:24.242 08:05:15 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.242 08:05:15 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:24.242 08:05:15 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:24.499 [2024-07-13 08:05:16.195308] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:24.499 08:05:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:24.757 08:05:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:18:24.757 08:05:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:25.324 08:05:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:18:25.324 08:05:16 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:25.324 08:05:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 
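(Condensed, the target-side provisioning that fio.sh drives through rpc.py in this stretch of the log, partly above and continuing just below, is the sequence sketched here. The rpc.py path is shortened, the loop stands in for seven individually traced calls, and the bdev creates are regrouped for readability; every flag and argument is the value visible in the trace.)

    rpc.py nvmf_create_transport -t tcp -o -u 8192        # TCP transport; -u 8192 sets the io-unit size, flags as traced
    for _ in 1 2 3 4 5 6 7; do
        rpc.py bdev_malloc_create 64 512                  # Malloc0 .. Malloc6: 64 MiB each, 512 B blocks
    done
    rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
    rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0

(The initiator then attaches all four namespaces with the single nvme connect --hostnqn=... -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 traced below; that is what waitforserial later counts as 4 SPDKISFASTANDAWESOME devices, nvme0n1..nvme0n4, used as the fio job filenames.)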
00:18:25.324 08:05:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:25.583 08:05:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:18:25.583 08:05:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:18:25.848 08:05:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:26.106 08:05:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:18:26.106 08:05:17 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:26.364 08:05:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:18:26.364 08:05:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:26.622 08:05:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:18:26.622 08:05:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:18:26.880 08:05:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:27.137 08:05:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:18:27.137 08:05:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:27.395 08:05:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:18:27.395 08:05:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:27.652 08:05:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:27.909 [2024-07-13 08:05:19.513025] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:27.909 08:05:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:18:28.167 08:05:19 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:18:28.425 08:05:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:28.990 08:05:20 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:18:28.990 08:05:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:18:28.990 08:05:20 nvmf_tcp.nvmf_fio_target -- 
common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:18:28.990 08:05:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:18:28.990 08:05:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:18:28.990 08:05:20 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:18:30.889 08:05:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:18:30.889 08:05:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:18:30.889 08:05:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:18:30.889 08:05:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:18:30.889 08:05:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:18:30.889 08:05:22 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:18:30.889 08:05:22 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:18:31.147 [global] 00:18:31.147 thread=1 00:18:31.147 invalidate=1 00:18:31.147 rw=write 00:18:31.147 time_based=1 00:18:31.147 runtime=1 00:18:31.147 ioengine=libaio 00:18:31.147 direct=1 00:18:31.147 bs=4096 00:18:31.147 iodepth=1 00:18:31.147 norandommap=0 00:18:31.147 numjobs=1 00:18:31.147 00:18:31.147 verify_dump=1 00:18:31.147 verify_backlog=512 00:18:31.147 verify_state_save=0 00:18:31.147 do_verify=1 00:18:31.147 verify=crc32c-intel 00:18:31.147 [job0] 00:18:31.147 filename=/dev/nvme0n1 00:18:31.147 [job1] 00:18:31.147 filename=/dev/nvme0n2 00:18:31.147 [job2] 00:18:31.147 filename=/dev/nvme0n3 00:18:31.147 [job3] 00:18:31.147 filename=/dev/nvme0n4 00:18:31.147 Could not set queue depth (nvme0n1) 00:18:31.147 Could not set queue depth (nvme0n2) 00:18:31.147 Could not set queue depth (nvme0n3) 00:18:31.147 Could not set queue depth (nvme0n4) 00:18:31.147 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:31.147 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:31.147 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:31.147 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:31.147 fio-3.35 00:18:31.147 Starting 4 threads 00:18:32.520 00:18:32.520 job0: (groupid=0, jobs=1): err= 0: pid=1958713: Sat Jul 13 08:05:24 2024 00:18:32.520 read: IOPS=535, BW=2144KiB/s (2195kB/s)(2148KiB/1002msec) 00:18:32.520 slat (nsec): min=5267, max=54918, avg=22236.49, stdev=11215.93 00:18:32.520 clat (usec): min=266, max=42018, avg=1373.32, stdev=6323.59 00:18:32.520 lat (usec): min=273, max=42034, avg=1395.56, stdev=6322.50 00:18:32.520 clat percentiles (usec): 00:18:32.520 | 1.00th=[ 289], 5.00th=[ 322], 10.00th=[ 326], 20.00th=[ 334], 00:18:32.520 | 30.00th=[ 347], 40.00th=[ 359], 50.00th=[ 383], 60.00th=[ 392], 00:18:32.520 | 70.00th=[ 400], 80.00th=[ 404], 90.00th=[ 437], 95.00th=[ 519], 00:18:32.520 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:18:32.520 | 99.99th=[42206] 00:18:32.520 write: IOPS=1021, BW=4088KiB/s (4186kB/s)(4096KiB/1002msec); 0 zone resets 00:18:32.520 slat (nsec): min=6716, max=72650, avg=13989.31, stdev=7746.84 00:18:32.520 clat 
(usec): min=169, max=1248, avg=225.44, stdev=60.32 00:18:32.520 lat (usec): min=178, max=1256, avg=239.43, stdev=62.29 00:18:32.520 clat percentiles (usec): 00:18:32.520 | 1.00th=[ 176], 5.00th=[ 180], 10.00th=[ 184], 20.00th=[ 190], 00:18:32.520 | 30.00th=[ 194], 40.00th=[ 200], 50.00th=[ 208], 60.00th=[ 221], 00:18:32.520 | 70.00th=[ 235], 80.00th=[ 255], 90.00th=[ 281], 95.00th=[ 326], 00:18:32.520 | 99.00th=[ 392], 99.50th=[ 408], 99.90th=[ 799], 99.95th=[ 1254], 00:18:32.520 | 99.99th=[ 1254] 00:18:32.520 bw ( KiB/s): min= 8192, max= 8192, per=44.67%, avg=8192.00, stdev= 0.00, samples=1 00:18:32.520 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:18:32.520 lat (usec) : 250=50.99%, 500=46.57%, 750=1.47%, 1000=0.06% 00:18:32.520 lat (msec) : 2=0.06%, 50=0.83% 00:18:32.520 cpu : usr=1.50%, sys=2.50%, ctx=1562, majf=0, minf=1 00:18:32.520 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:32.520 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:32.520 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:32.520 issued rwts: total=537,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:32.520 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:32.520 job1: (groupid=0, jobs=1): err= 0: pid=1958730: Sat Jul 13 08:05:24 2024 00:18:32.520 read: IOPS=1654, BW=6617KiB/s (6776kB/s)(6624KiB/1001msec) 00:18:32.520 slat (nsec): min=4522, max=62104, avg=14390.56, stdev=7804.71 00:18:32.520 clat (usec): min=249, max=1620, avg=317.51, stdev=56.44 00:18:32.520 lat (usec): min=257, max=1625, avg=331.90, stdev=56.18 00:18:32.520 clat percentiles (usec): 00:18:32.520 | 1.00th=[ 255], 5.00th=[ 265], 10.00th=[ 269], 20.00th=[ 277], 00:18:32.520 | 30.00th=[ 285], 40.00th=[ 289], 50.00th=[ 302], 60.00th=[ 318], 00:18:32.520 | 70.00th=[ 334], 80.00th=[ 375], 90.00th=[ 383], 95.00th=[ 396], 00:18:32.520 | 99.00th=[ 445], 99.50th=[ 453], 99.90th=[ 553], 99.95th=[ 1614], 00:18:32.520 | 99.99th=[ 1614] 00:18:32.520 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:18:32.520 slat (nsec): min=5933, max=50374, avg=11275.45, stdev=5423.17 00:18:32.520 clat (usec): min=160, max=444, avg=202.40, stdev=24.73 00:18:32.520 lat (usec): min=167, max=451, avg=213.68, stdev=25.73 00:18:32.520 clat percentiles (usec): 00:18:32.520 | 1.00th=[ 169], 5.00th=[ 176], 10.00th=[ 178], 20.00th=[ 182], 00:18:32.520 | 30.00th=[ 188], 40.00th=[ 192], 50.00th=[ 196], 60.00th=[ 202], 00:18:32.520 | 70.00th=[ 212], 80.00th=[ 223], 90.00th=[ 235], 95.00th=[ 247], 00:18:32.520 | 99.00th=[ 273], 99.50th=[ 281], 99.90th=[ 379], 99.95th=[ 392], 00:18:32.520 | 99.99th=[ 445] 00:18:32.520 bw ( KiB/s): min= 8192, max= 8192, per=44.67%, avg=8192.00, stdev= 0.00, samples=1 00:18:32.520 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:18:32.520 lat (usec) : 250=53.08%, 500=46.87%, 750=0.03% 00:18:32.520 lat (msec) : 2=0.03% 00:18:32.520 cpu : usr=2.60%, sys=4.80%, ctx=3706, majf=0, minf=2 00:18:32.520 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:32.520 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:32.520 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:32.520 issued rwts: total=1656,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:32.520 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:32.520 job2: (groupid=0, jobs=1): err= 0: pid=1958770: Sat Jul 13 08:05:24 2024 00:18:32.520 read: IOPS=937, 
BW=3748KiB/s (3838kB/s)(3752KiB/1001msec) 00:18:32.520 slat (nsec): min=4868, max=41555, avg=11591.27, stdev=5040.02 00:18:32.520 clat (usec): min=256, max=42360, avg=760.49, stdev=3999.69 00:18:32.520 lat (usec): min=266, max=42374, avg=772.08, stdev=4000.69 00:18:32.520 clat percentiles (usec): 00:18:32.520 | 1.00th=[ 265], 5.00th=[ 273], 10.00th=[ 281], 20.00th=[ 297], 00:18:32.520 | 30.00th=[ 310], 40.00th=[ 318], 50.00th=[ 334], 60.00th=[ 379], 00:18:32.520 | 70.00th=[ 383], 80.00th=[ 412], 90.00th=[ 494], 95.00th=[ 502], 00:18:32.520 | 99.00th=[10159], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:18:32.520 | 99.99th=[42206] 00:18:32.520 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:18:32.520 slat (nsec): min=6145, max=74666, avg=13485.00, stdev=10058.81 00:18:32.520 clat (usec): min=165, max=2362, avg=249.39, stdev=100.80 00:18:32.520 lat (usec): min=172, max=2402, avg=262.88, stdev=105.52 00:18:32.520 clat percentiles (usec): 00:18:32.520 | 1.00th=[ 172], 5.00th=[ 180], 10.00th=[ 184], 20.00th=[ 190], 00:18:32.520 | 30.00th=[ 196], 40.00th=[ 206], 50.00th=[ 229], 60.00th=[ 249], 00:18:32.520 | 70.00th=[ 260], 80.00th=[ 281], 90.00th=[ 363], 95.00th=[ 404], 00:18:32.520 | 99.00th=[ 490], 99.50th=[ 570], 99.90th=[ 930], 99.95th=[ 2376], 00:18:32.520 | 99.99th=[ 2376] 00:18:32.520 bw ( KiB/s): min= 4096, max= 4096, per=22.33%, avg=4096.00, stdev= 0.00, samples=1 00:18:32.520 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:32.520 lat (usec) : 250=32.21%, 500=64.42%, 750=2.75%, 1000=0.05% 00:18:32.520 lat (msec) : 4=0.05%, 20=0.05%, 50=0.46% 00:18:32.520 cpu : usr=1.60%, sys=2.20%, ctx=1964, majf=0, minf=1 00:18:32.520 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:32.520 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:32.520 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:32.520 issued rwts: total=938,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:32.520 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:32.520 job3: (groupid=0, jobs=1): err= 0: pid=1958772: Sat Jul 13 08:05:24 2024 00:18:32.520 read: IOPS=97, BW=390KiB/s (399kB/s)(392KiB/1005msec) 00:18:32.520 slat (nsec): min=5382, max=33245, avg=14136.49, stdev=5873.45 00:18:32.520 clat (usec): min=279, max=41112, avg=8624.92, stdev=16465.81 00:18:32.520 lat (usec): min=286, max=41125, avg=8639.05, stdev=16467.23 00:18:32.520 clat percentiles (usec): 00:18:32.520 | 1.00th=[ 281], 5.00th=[ 289], 10.00th=[ 289], 20.00th=[ 297], 00:18:32.520 | 30.00th=[ 318], 40.00th=[ 322], 50.00th=[ 326], 60.00th=[ 334], 00:18:32.520 | 70.00th=[ 347], 80.00th=[40633], 90.00th=[41157], 95.00th=[41157], 00:18:32.520 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:18:32.520 | 99.99th=[41157] 00:18:32.520 write: IOPS=509, BW=2038KiB/s (2087kB/s)(2048KiB/1005msec); 0 zone resets 00:18:32.520 slat (nsec): min=7001, max=61394, avg=17337.16, stdev=8301.58 00:18:32.520 clat (usec): min=236, max=765, avg=287.01, stdev=55.61 00:18:32.520 lat (usec): min=247, max=804, avg=304.35, stdev=58.78 00:18:32.520 clat percentiles (usec): 00:18:32.520 | 1.00th=[ 241], 5.00th=[ 247], 10.00th=[ 251], 20.00th=[ 258], 00:18:32.520 | 30.00th=[ 262], 40.00th=[ 265], 50.00th=[ 269], 60.00th=[ 273], 00:18:32.520 | 70.00th=[ 285], 80.00th=[ 306], 90.00th=[ 347], 95.00th=[ 396], 00:18:32.520 | 99.00th=[ 529], 99.50th=[ 611], 99.90th=[ 766], 99.95th=[ 766], 00:18:32.520 | 99.99th=[ 766] 
00:18:32.520 bw ( KiB/s): min= 4096, max= 4096, per=22.33%, avg=4096.00, stdev= 0.00, samples=1 00:18:32.520 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:32.520 lat (usec) : 250=7.21%, 500=88.20%, 750=1.15%, 1000=0.16% 00:18:32.520 lat (msec) : 50=3.28% 00:18:32.520 cpu : usr=0.40%, sys=1.10%, ctx=610, majf=0, minf=1 00:18:32.520 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:32.520 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:32.520 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:32.520 issued rwts: total=98,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:32.520 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:32.520 00:18:32.520 Run status group 0 (all jobs): 00:18:32.520 READ: bw=12.5MiB/s (13.2MB/s), 390KiB/s-6617KiB/s (399kB/s-6776kB/s), io=12.6MiB (13.2MB), run=1001-1005msec 00:18:32.520 WRITE: bw=17.9MiB/s (18.8MB/s), 2038KiB/s-8184KiB/s (2087kB/s-8380kB/s), io=18.0MiB (18.9MB), run=1001-1005msec 00:18:32.520 00:18:32.520 Disk stats (read/write): 00:18:32.520 nvme0n1: ios=574/1024, merge=0/0, ticks=727/217, in_queue=944, util=85.07% 00:18:32.520 nvme0n2: ios=1546/1536, merge=0/0, ticks=1384/299, in_queue=1683, util=89.00% 00:18:32.520 nvme0n3: ios=569/976, merge=0/0, ticks=1089/232, in_queue=1321, util=93.07% 00:18:32.520 nvme0n4: ios=151/512, merge=0/0, ticks=767/140, in_queue=907, util=96.08% 00:18:32.520 08:05:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:18:32.520 [global] 00:18:32.520 thread=1 00:18:32.520 invalidate=1 00:18:32.520 rw=randwrite 00:18:32.520 time_based=1 00:18:32.520 runtime=1 00:18:32.520 ioengine=libaio 00:18:32.520 direct=1 00:18:32.520 bs=4096 00:18:32.520 iodepth=1 00:18:32.520 norandommap=0 00:18:32.520 numjobs=1 00:18:32.520 00:18:32.520 verify_dump=1 00:18:32.520 verify_backlog=512 00:18:32.520 verify_state_save=0 00:18:32.520 do_verify=1 00:18:32.520 verify=crc32c-intel 00:18:32.520 [job0] 00:18:32.520 filename=/dev/nvme0n1 00:18:32.520 [job1] 00:18:32.520 filename=/dev/nvme0n2 00:18:32.521 [job2] 00:18:32.521 filename=/dev/nvme0n3 00:18:32.521 [job3] 00:18:32.521 filename=/dev/nvme0n4 00:18:32.521 Could not set queue depth (nvme0n1) 00:18:32.521 Could not set queue depth (nvme0n2) 00:18:32.521 Could not set queue depth (nvme0n3) 00:18:32.521 Could not set queue depth (nvme0n4) 00:18:32.778 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:32.778 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:32.778 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:32.778 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:32.778 fio-3.35 00:18:32.778 Starting 4 threads 00:18:34.151 00:18:34.151 job0: (groupid=0, jobs=1): err= 0: pid=1958998: Sat Jul 13 08:05:25 2024 00:18:34.151 read: IOPS=1532, BW=6130KiB/s (6277kB/s)(6136KiB/1001msec) 00:18:34.151 slat (nsec): min=6774, max=55456, avg=12366.48, stdev=6155.60 00:18:34.151 clat (usec): min=267, max=766, avg=370.79, stdev=68.00 00:18:34.151 lat (usec): min=276, max=776, avg=383.16, stdev=69.86 00:18:34.151 clat percentiles (usec): 00:18:34.151 | 1.00th=[ 277], 5.00th=[ 285], 10.00th=[ 293], 20.00th=[ 322], 
00:18:34.151 | 30.00th=[ 343], 40.00th=[ 351], 50.00th=[ 359], 60.00th=[ 363], 00:18:34.151 | 70.00th=[ 375], 80.00th=[ 412], 90.00th=[ 469], 95.00th=[ 510], 00:18:34.151 | 99.00th=[ 603], 99.50th=[ 635], 99.90th=[ 668], 99.95th=[ 766], 00:18:34.151 | 99.99th=[ 766] 00:18:34.151 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:18:34.151 slat (nsec): min=8253, max=49735, avg=15741.24, stdev=6707.40 00:18:34.151 clat (usec): min=179, max=669, avg=244.30, stdev=75.54 00:18:34.151 lat (usec): min=187, max=688, avg=260.04, stdev=77.11 00:18:34.151 clat percentiles (usec): 00:18:34.151 | 1.00th=[ 184], 5.00th=[ 188], 10.00th=[ 192], 20.00th=[ 198], 00:18:34.151 | 30.00th=[ 212], 40.00th=[ 219], 50.00th=[ 225], 60.00th=[ 229], 00:18:34.151 | 70.00th=[ 237], 80.00th=[ 260], 90.00th=[ 302], 95.00th=[ 445], 00:18:34.151 | 99.00th=[ 562], 99.50th=[ 619], 99.90th=[ 660], 99.95th=[ 668], 00:18:34.151 | 99.99th=[ 668] 00:18:34.151 bw ( KiB/s): min= 8064, max= 8064, per=57.99%, avg=8064.00, stdev= 0.00, samples=1 00:18:34.151 iops : min= 2016, max= 2016, avg=2016.00, stdev= 0.00, samples=1 00:18:34.151 lat (usec) : 250=38.66%, 500=56.74%, 750=4.56%, 1000=0.03% 00:18:34.151 cpu : usr=3.00%, sys=6.40%, ctx=3070, majf=0, minf=2 00:18:34.151 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:34.151 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:34.151 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:34.151 issued rwts: total=1534,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:34.151 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:34.151 job1: (groupid=0, jobs=1): err= 0: pid=1958999: Sat Jul 13 08:05:25 2024 00:18:34.151 read: IOPS=514, BW=2059KiB/s (2108kB/s)(2092KiB/1016msec) 00:18:34.151 slat (nsec): min=9156, max=52986, avg=18740.06, stdev=4487.29 00:18:34.151 clat (usec): min=328, max=41096, avg=1273.02, stdev=5552.28 00:18:34.151 lat (usec): min=345, max=41105, avg=1291.76, stdev=5551.77 00:18:34.151 clat percentiles (usec): 00:18:34.151 | 1.00th=[ 392], 5.00th=[ 412], 10.00th=[ 424], 20.00th=[ 461], 00:18:34.151 | 30.00th=[ 474], 40.00th=[ 486], 50.00th=[ 494], 60.00th=[ 506], 00:18:34.151 | 70.00th=[ 519], 80.00th=[ 545], 90.00th=[ 578], 95.00th=[ 619], 00:18:34.151 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:18:34.151 | 99.99th=[41157] 00:18:34.151 write: IOPS=1007, BW=4031KiB/s (4128kB/s)(4096KiB/1016msec); 0 zone resets 00:18:34.151 slat (nsec): min=8468, max=71042, avg=18738.67, stdev=9185.65 00:18:34.151 clat (usec): min=164, max=1615, avg=305.40, stdev=128.82 00:18:34.151 lat (usec): min=174, max=1628, avg=324.14, stdev=131.50 00:18:34.151 clat percentiles (usec): 00:18:34.151 | 1.00th=[ 172], 5.00th=[ 180], 10.00th=[ 184], 20.00th=[ 194], 00:18:34.151 | 30.00th=[ 204], 40.00th=[ 245], 50.00th=[ 277], 60.00th=[ 306], 00:18:34.151 | 70.00th=[ 355], 80.00th=[ 408], 90.00th=[ 469], 95.00th=[ 519], 00:18:34.151 | 99.00th=[ 660], 99.50th=[ 693], 99.90th=[ 1106], 99.95th=[ 1614], 00:18:34.151 | 99.99th=[ 1614] 00:18:34.151 bw ( KiB/s): min= 3608, max= 4584, per=29.46%, avg=4096.00, stdev=690.14, samples=2 00:18:34.151 iops : min= 902, max= 1146, avg=1024.00, stdev=172.53, samples=2 00:18:34.151 lat (usec) : 250=27.34%, 500=52.88%, 750=18.81%, 1000=0.06% 00:18:34.151 lat (msec) : 2=0.26%, 50=0.65% 00:18:34.151 cpu : usr=2.36%, sys=3.35%, ctx=1548, majf=0, minf=1 00:18:34.151 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
>=64=0.0% 00:18:34.151 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:34.151 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:34.151 issued rwts: total=523,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:34.151 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:34.151 job2: (groupid=0, jobs=1): err= 0: pid=1959000: Sat Jul 13 08:05:25 2024 00:18:34.151 read: IOPS=19, BW=78.6KiB/s (80.5kB/s)(80.0KiB/1018msec) 00:18:34.151 slat (nsec): min=15747, max=35403, avg=24396.75, stdev=8595.11 00:18:34.151 clat (usec): min=40778, max=42050, avg=41355.75, stdev=521.84 00:18:34.151 lat (usec): min=40797, max=42070, avg=41380.15, stdev=518.04 00:18:34.151 clat percentiles (usec): 00:18:34.151 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:18:34.151 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:18:34.151 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:18:34.151 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:18:34.151 | 99.99th=[42206] 00:18:34.151 write: IOPS=502, BW=2012KiB/s (2060kB/s)(2048KiB/1018msec); 0 zone resets 00:18:34.151 slat (nsec): min=9252, max=59995, avg=18314.64, stdev=8904.28 00:18:34.151 clat (usec): min=192, max=1567, avg=347.78, stdev=139.51 00:18:34.151 lat (usec): min=204, max=1583, avg=366.10, stdev=139.35 00:18:34.151 clat percentiles (usec): 00:18:34.151 | 1.00th=[ 202], 5.00th=[ 212], 10.00th=[ 223], 20.00th=[ 231], 00:18:34.151 | 30.00th=[ 247], 40.00th=[ 265], 50.00th=[ 306], 60.00th=[ 371], 00:18:34.151 | 70.00th=[ 416], 80.00th=[ 465], 90.00th=[ 515], 95.00th=[ 537], 00:18:34.151 | 99.00th=[ 652], 99.50th=[ 1123], 99.90th=[ 1565], 99.95th=[ 1565], 00:18:34.151 | 99.99th=[ 1565] 00:18:34.151 bw ( KiB/s): min= 4096, max= 4096, per=29.46%, avg=4096.00, stdev= 0.00, samples=1 00:18:34.151 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:34.151 lat (usec) : 250=31.20%, 500=51.32%, 750=13.16% 00:18:34.151 lat (msec) : 2=0.56%, 50=3.76% 00:18:34.151 cpu : usr=0.79%, sys=0.98%, ctx=533, majf=0, minf=1 00:18:34.151 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:34.151 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:34.151 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:34.151 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:34.151 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:34.151 job3: (groupid=0, jobs=1): err= 0: pid=1959001: Sat Jul 13 08:05:25 2024 00:18:34.151 read: IOPS=21, BW=85.4KiB/s (87.4kB/s)(88.0KiB/1031msec) 00:18:34.151 slat (nsec): min=7779, max=32867, avg=22310.14, stdev=9039.59 00:18:34.151 clat (usec): min=40888, max=42004, avg=41226.57, stdev=439.50 00:18:34.151 lat (usec): min=40921, max=42019, avg=41248.88, stdev=434.50 00:18:34.151 clat percentiles (usec): 00:18:34.151 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:18:34.151 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:18:34.151 | 70.00th=[41157], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:18:34.151 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:18:34.151 | 99.99th=[42206] 00:18:34.151 write: IOPS=496, BW=1986KiB/s (2034kB/s)(2048KiB/1031msec); 0 zone resets 00:18:34.151 slat (nsec): min=7326, max=39126, avg=12286.46, stdev=4951.99 00:18:34.152 clat (usec): min=177, max=335, avg=225.15, stdev=25.45 
00:18:34.152 lat (usec): min=186, max=365, avg=237.44, stdev=26.33 00:18:34.152 clat percentiles (usec): 00:18:34.152 | 1.00th=[ 184], 5.00th=[ 196], 10.00th=[ 200], 20.00th=[ 208], 00:18:34.152 | 30.00th=[ 210], 40.00th=[ 215], 50.00th=[ 219], 60.00th=[ 223], 00:18:34.152 | 70.00th=[ 231], 80.00th=[ 241], 90.00th=[ 273], 95.00th=[ 277], 00:18:34.152 | 99.00th=[ 289], 99.50th=[ 302], 99.90th=[ 334], 99.95th=[ 334], 00:18:34.152 | 99.99th=[ 334] 00:18:34.152 bw ( KiB/s): min= 4096, max= 4096, per=29.46%, avg=4096.00, stdev= 0.00, samples=1 00:18:34.152 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:34.152 lat (usec) : 250=80.90%, 500=14.98% 00:18:34.152 lat (msec) : 50=4.12% 00:18:34.152 cpu : usr=0.39%, sys=0.58%, ctx=536, majf=0, minf=1 00:18:34.152 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:34.152 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:34.152 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:34.152 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:34.152 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:34.152 00:18:34.152 Run status group 0 (all jobs): 00:18:34.152 READ: bw=8144KiB/s (8339kB/s), 78.6KiB/s-6130KiB/s (80.5kB/s-6277kB/s), io=8396KiB (8598kB), run=1001-1031msec 00:18:34.152 WRITE: bw=13.6MiB/s (14.2MB/s), 1986KiB/s-6138KiB/s (2034kB/s-6285kB/s), io=14.0MiB (14.7MB), run=1001-1031msec 00:18:34.152 00:18:34.152 Disk stats (read/write): 00:18:34.152 nvme0n1: ios=1076/1536, merge=0/0, ticks=393/349, in_queue=742, util=83.07% 00:18:34.152 nvme0n2: ios=542/1024, merge=0/0, ticks=1407/304, in_queue=1711, util=94.35% 00:18:34.152 nvme0n3: ios=61/512, merge=0/0, ticks=803/165, in_queue=968, util=98.16% 00:18:34.152 nvme0n4: ios=40/512, merge=0/0, ticks=1569/113, in_queue=1682, util=98.46% 00:18:34.152 08:05:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:18:34.152 [global] 00:18:34.152 thread=1 00:18:34.152 invalidate=1 00:18:34.152 rw=write 00:18:34.152 time_based=1 00:18:34.152 runtime=1 00:18:34.152 ioengine=libaio 00:18:34.152 direct=1 00:18:34.152 bs=4096 00:18:34.152 iodepth=128 00:18:34.152 norandommap=0 00:18:34.152 numjobs=1 00:18:34.152 00:18:34.152 verify_dump=1 00:18:34.152 verify_backlog=512 00:18:34.152 verify_state_save=0 00:18:34.152 do_verify=1 00:18:34.152 verify=crc32c-intel 00:18:34.152 [job0] 00:18:34.152 filename=/dev/nvme0n1 00:18:34.152 [job1] 00:18:34.152 filename=/dev/nvme0n2 00:18:34.152 [job2] 00:18:34.152 filename=/dev/nvme0n3 00:18:34.152 [job3] 00:18:34.152 filename=/dev/nvme0n4 00:18:34.152 Could not set queue depth (nvme0n1) 00:18:34.152 Could not set queue depth (nvme0n2) 00:18:34.152 Could not set queue depth (nvme0n3) 00:18:34.152 Could not set queue depth (nvme0n4) 00:18:34.152 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:34.152 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:34.152 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:34.152 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:34.152 fio-3.35 00:18:34.152 Starting 4 threads 00:18:35.526 00:18:35.526 job0: (groupid=0, jobs=1): err= 0: pid=1959234: Sat Jul 13 
08:05:27 2024 00:18:35.526 read: IOPS=2843, BW=11.1MiB/s (11.6MB/s)(11.2MiB/1004msec) 00:18:35.526 slat (usec): min=3, max=10867, avg=120.33, stdev=709.86 00:18:35.526 clat (usec): min=3419, max=69402, avg=16526.31, stdev=6398.43 00:18:35.526 lat (usec): min=7355, max=72070, avg=16646.65, stdev=6433.56 00:18:35.526 clat percentiles (usec): 00:18:35.526 | 1.00th=[ 7832], 5.00th=[11731], 10.00th=[12780], 20.00th=[13304], 00:18:35.526 | 30.00th=[13960], 40.00th=[14222], 50.00th=[14353], 60.00th=[14615], 00:18:35.526 | 70.00th=[15401], 80.00th=[18220], 90.00th=[22414], 95.00th=[30016], 00:18:35.526 | 99.00th=[39060], 99.50th=[51119], 99.90th=[69731], 99.95th=[69731], 00:18:35.527 | 99.99th=[69731] 00:18:35.527 write: IOPS=3059, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1004msec); 0 zone resets 00:18:35.527 slat (usec): min=4, max=22583, avg=202.60, stdev=1139.81 00:18:35.527 clat (usec): min=8896, max=91356, avg=26030.67, stdev=14959.34 00:18:35.527 lat (usec): min=8902, max=91377, avg=26233.27, stdev=15056.58 00:18:35.527 clat percentiles (usec): 00:18:35.527 | 1.00th=[11076], 5.00th=[12387], 10.00th=[12649], 20.00th=[13435], 00:18:35.527 | 30.00th=[17695], 40.00th=[21103], 50.00th=[21365], 60.00th=[22938], 00:18:35.527 | 70.00th=[28443], 80.00th=[30802], 90.00th=[49021], 95.00th=[57410], 00:18:35.527 | 99.00th=[82314], 99.50th=[84411], 99.90th=[91751], 99.95th=[91751], 00:18:35.527 | 99.99th=[91751] 00:18:35.527 bw ( KiB/s): min=12288, max=12288, per=19.09%, avg=12288.00, stdev= 0.00, samples=2 00:18:35.527 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=2 00:18:35.527 lat (msec) : 4=0.02%, 10=1.21%, 20=57.25%, 50=36.43%, 100=5.10% 00:18:35.527 cpu : usr=4.69%, sys=6.68%, ctx=313, majf=0, minf=1 00:18:35.527 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:18:35.527 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:35.527 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:35.527 issued rwts: total=2855,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:35.527 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:35.527 job1: (groupid=0, jobs=1): err= 0: pid=1959236: Sat Jul 13 08:05:27 2024 00:18:35.527 read: IOPS=3041, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1010msec) 00:18:35.527 slat (usec): min=3, max=15052, avg=140.62, stdev=923.72 00:18:35.527 clat (usec): min=5747, max=64159, avg=16279.61, stdev=7813.89 00:18:35.527 lat (usec): min=5755, max=64177, avg=16420.23, stdev=7903.01 00:18:35.527 clat percentiles (usec): 00:18:35.527 | 1.00th=[ 7046], 5.00th=[10290], 10.00th=[10945], 20.00th=[11469], 00:18:35.527 | 30.00th=[12387], 40.00th=[13042], 50.00th=[14222], 60.00th=[15401], 00:18:35.527 | 70.00th=[15926], 80.00th=[18744], 90.00th=[23725], 95.00th=[31065], 00:18:35.527 | 99.00th=[52167], 99.50th=[57934], 99.90th=[64226], 99.95th=[64226], 00:18:35.527 | 99.99th=[64226] 00:18:35.527 write: IOPS=3451, BW=13.5MiB/s (14.1MB/s)(13.6MiB/1010msec); 0 zone resets 00:18:35.527 slat (usec): min=4, max=9465, avg=152.24, stdev=667.87 00:18:35.527 clat (usec): min=2898, max=64180, avg=22393.47, stdev=12437.02 00:18:35.527 lat (usec): min=2908, max=64200, avg=22545.72, stdev=12515.89 00:18:35.527 clat percentiles (usec): 00:18:35.527 | 1.00th=[ 4752], 5.00th=[ 7177], 10.00th=[ 7504], 20.00th=[11076], 00:18:35.527 | 30.00th=[12125], 40.00th=[16188], 50.00th=[21103], 60.00th=[21627], 00:18:35.527 | 70.00th=[28443], 80.00th=[34866], 90.00th=[40633], 95.00th=[46400], 00:18:35.527 | 99.00th=[52691], 99.50th=[52691], 
99.90th=[53216], 99.95th=[64226], 00:18:35.527 | 99.99th=[64226] 00:18:35.527 bw ( KiB/s): min=12336, max=14536, per=20.88%, avg=13436.00, stdev=1555.63, samples=2 00:18:35.527 iops : min= 3084, max= 3634, avg=3359.00, stdev=388.91, samples=2 00:18:35.527 lat (msec) : 4=0.27%, 10=9.62%, 20=53.25%, 50=34.69%, 100=2.17% 00:18:35.527 cpu : usr=5.45%, sys=6.64%, ctx=369, majf=0, minf=1 00:18:35.527 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:18:35.527 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:35.527 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:35.527 issued rwts: total=3072,3486,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:35.527 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:35.527 job2: (groupid=0, jobs=1): err= 0: pid=1959237: Sat Jul 13 08:05:27 2024 00:18:35.527 read: IOPS=5028, BW=19.6MiB/s (20.6MB/s)(19.8MiB/1009msec) 00:18:35.527 slat (usec): min=3, max=11467, avg=98.90, stdev=691.90 00:18:35.527 clat (usec): min=3318, max=23175, avg=13181.71, stdev=3000.93 00:18:35.527 lat (usec): min=5081, max=23183, avg=13280.61, stdev=3035.65 00:18:35.527 clat percentiles (usec): 00:18:35.527 | 1.00th=[ 7963], 5.00th=[ 9503], 10.00th=[10028], 20.00th=[11469], 00:18:35.527 | 30.00th=[11731], 40.00th=[11994], 50.00th=[12256], 60.00th=[12911], 00:18:35.527 | 70.00th=[13566], 80.00th=[15533], 90.00th=[17695], 95.00th=[19530], 00:18:35.527 | 99.00th=[22414], 99.50th=[22414], 99.90th=[23200], 99.95th=[23200], 00:18:35.527 | 99.99th=[23200] 00:18:35.527 write: IOPS=5074, BW=19.8MiB/s (20.8MB/s)(20.0MiB/1009msec); 0 zone resets 00:18:35.527 slat (usec): min=4, max=33547, avg=86.62, stdev=668.07 00:18:35.527 clat (usec): min=2774, max=43948, avg=11499.79, stdev=4105.29 00:18:35.527 lat (usec): min=2782, max=43980, avg=11586.41, stdev=4134.91 00:18:35.527 clat percentiles (usec): 00:18:35.527 | 1.00th=[ 4359], 5.00th=[ 6521], 10.00th=[ 7373], 20.00th=[ 7701], 00:18:35.527 | 30.00th=[ 8848], 40.00th=[11731], 50.00th=[12256], 60.00th=[12518], 00:18:35.527 | 70.00th=[12649], 80.00th=[12911], 90.00th=[15926], 95.00th=[16188], 00:18:35.527 | 99.00th=[35914], 99.50th=[35914], 99.90th=[36439], 99.95th=[36439], 00:18:35.527 | 99.99th=[43779] 00:18:35.527 bw ( KiB/s): min=20480, max=20480, per=31.82%, avg=20480.00, stdev= 0.00, samples=2 00:18:35.527 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:18:35.527 lat (msec) : 4=0.29%, 10=20.99%, 20=76.01%, 50=2.71% 00:18:35.527 cpu : usr=8.23%, sys=11.11%, ctx=459, majf=0, minf=1 00:18:35.527 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:18:35.527 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:35.527 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:35.527 issued rwts: total=5074,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:35.527 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:35.527 job3: (groupid=0, jobs=1): err= 0: pid=1959238: Sat Jul 13 08:05:27 2024 00:18:35.527 read: IOPS=4832, BW=18.9MiB/s (19.8MB/s)(19.7MiB/1044msec) 00:18:35.527 slat (usec): min=3, max=5614, avg=92.29, stdev=500.73 00:18:35.527 clat (usec): min=7771, max=54986, avg=13265.97, stdev=6092.28 00:18:35.527 lat (usec): min=8251, max=54992, avg=13358.26, stdev=6101.87 00:18:35.527 clat percentiles (usec): 00:18:35.527 | 1.00th=[ 9241], 5.00th=[10028], 10.00th=[10945], 20.00th=[11731], 00:18:35.527 | 30.00th=[11863], 40.00th=[11994], 50.00th=[12256], 
60.00th=[12387], 00:18:35.527 | 70.00th=[12780], 80.00th=[13042], 90.00th=[14222], 95.00th=[15401], 00:18:35.527 | 99.00th=[50070], 99.50th=[54264], 99.90th=[54789], 99.95th=[54789], 00:18:35.527 | 99.99th=[54789] 00:18:35.527 write: IOPS=4904, BW=19.2MiB/s (20.1MB/s)(20.0MiB/1044msec); 0 zone resets 00:18:35.527 slat (usec): min=4, max=32083, avg=93.00, stdev=682.26 00:18:35.527 clat (usec): min=6624, max=42072, avg=12356.01, stdev=3500.89 00:18:35.527 lat (usec): min=6960, max=42096, avg=12449.02, stdev=3507.73 00:18:35.527 clat percentiles (usec): 00:18:35.527 | 1.00th=[ 7570], 5.00th=[ 9372], 10.00th=[10683], 20.00th=[11600], 00:18:35.527 | 30.00th=[11863], 40.00th=[11994], 50.00th=[12125], 60.00th=[12256], 00:18:35.527 | 70.00th=[12518], 80.00th=[12649], 90.00th=[12911], 95.00th=[14484], 00:18:35.527 | 99.00th=[41157], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:18:35.527 | 99.99th=[42206] 00:18:35.527 bw ( KiB/s): min=20480, max=20480, per=31.82%, avg=20480.00, stdev= 0.00, samples=2 00:18:35.527 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:18:35.527 lat (msec) : 10=5.34%, 20=92.77%, 50=1.40%, 100=0.49% 00:18:35.527 cpu : usr=7.29%, sys=11.41%, ctx=361, majf=0, minf=1 00:18:35.527 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:18:35.527 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:35.527 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:35.527 issued rwts: total=5045,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:35.527 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:35.527 00:18:35.527 Run status group 0 (all jobs): 00:18:35.527 READ: bw=60.0MiB/s (63.0MB/s), 11.1MiB/s-19.6MiB/s (11.6MB/s-20.6MB/s), io=62.7MiB (65.7MB), run=1004-1044msec 00:18:35.527 WRITE: bw=62.9MiB/s (65.9MB/s), 12.0MiB/s-19.8MiB/s (12.5MB/s-20.8MB/s), io=65.6MiB (68.8MB), run=1004-1044msec 00:18:35.527 00:18:35.527 Disk stats (read/write): 00:18:35.527 nvme0n1: ios=2605/2575, merge=0/0, ticks=20260/31542, in_queue=51802, util=87.47% 00:18:35.527 nvme0n2: ios=2610/2839, merge=0/0, ticks=41192/61047, in_queue=102239, util=91.57% 00:18:35.527 nvme0n3: ios=4145/4439, merge=0/0, ticks=51768/46904, in_queue=98672, util=94.78% 00:18:35.527 nvme0n4: ios=4145/4480, merge=0/0, ticks=24061/23994, in_queue=48055, util=95.58% 00:18:35.527 08:05:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:18:35.528 [global] 00:18:35.528 thread=1 00:18:35.528 invalidate=1 00:18:35.528 rw=randwrite 00:18:35.528 time_based=1 00:18:35.528 runtime=1 00:18:35.528 ioengine=libaio 00:18:35.528 direct=1 00:18:35.528 bs=4096 00:18:35.528 iodepth=128 00:18:35.528 norandommap=0 00:18:35.528 numjobs=1 00:18:35.528 00:18:35.528 verify_dump=1 00:18:35.528 verify_backlog=512 00:18:35.528 verify_state_save=0 00:18:35.528 do_verify=1 00:18:35.528 verify=crc32c-intel 00:18:35.528 [job0] 00:18:35.528 filename=/dev/nvme0n1 00:18:35.528 [job1] 00:18:35.528 filename=/dev/nvme0n2 00:18:35.528 [job2] 00:18:35.528 filename=/dev/nvme0n3 00:18:35.528 [job3] 00:18:35.528 filename=/dev/nvme0n4 00:18:35.528 Could not set queue depth (nvme0n1) 00:18:35.528 Could not set queue depth (nvme0n2) 00:18:35.528 Could not set queue depth (nvme0n3) 00:18:35.528 Could not set queue depth (nvme0n4) 00:18:35.785 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 
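The [global]/[job] file echoed above fully determines this verify pass. As a minimal standalone sketch (assuming the same nvme0n1 namespace exposed by the connected subsystem), the equivalent run outside the fio-wrapper would look like this:

# Sketch: reproduce the randwrite verify pass by hand; fio-wrapper normally
# generates this job file from its -p/-i/-d/-t/-r flags.
cat > verify.fio <<'EOF'
[global]
ioengine=libaio
direct=1
rw=randwrite
bs=4096
iodepth=128
time_based=1
runtime=1
numjobs=1
verify=crc32c-intel
do_verify=1
verify_dump=1
verify_backlog=512

[job0]
filename=/dev/nvme0n1
EOF
fio verify.fio

Note that iodepth=128 is only effective because direct=1 is set: with buffered I/O, libaio falls back to effectively synchronous submission.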
00:18:35.785 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:35.785 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:35.785 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:35.785 fio-3.35 00:18:35.785 Starting 4 threads 00:18:37.158 00:18:37.158 job0: (groupid=0, jobs=1): err= 0: pid=1959464: Sat Jul 13 08:05:28 2024 00:18:37.158 read: IOPS=2527, BW=9.87MiB/s (10.4MB/s)(10.0MiB/1013msec) 00:18:37.158 slat (usec): min=3, max=38755, avg=158.01, stdev=1470.47 00:18:37.158 clat (usec): min=6289, max=79579, avg=22023.88, stdev=11971.03 00:18:37.159 lat (usec): min=6311, max=79620, avg=22181.89, stdev=12078.92 00:18:37.159 clat percentiles (usec): 00:18:37.159 | 1.00th=[ 8586], 5.00th=[ 9372], 10.00th=[10028], 20.00th=[15401], 00:18:37.159 | 30.00th=[16712], 40.00th=[18220], 50.00th=[18744], 60.00th=[19530], 00:18:37.159 | 70.00th=[20317], 80.00th=[26870], 90.00th=[34866], 95.00th=[51119], 00:18:37.159 | 99.00th=[69731], 99.50th=[69731], 99.90th=[69731], 99.95th=[69731], 00:18:37.159 | 99.99th=[79168] 00:18:37.159 write: IOPS=2691, BW=10.5MiB/s (11.0MB/s)(10.6MiB/1013msec); 0 zone resets 00:18:37.159 slat (usec): min=4, max=15261, avg=188.62, stdev=1017.19 00:18:37.159 clat (usec): min=1215, max=95891, avg=26458.57, stdev=21516.24 00:18:37.159 lat (usec): min=1243, max=95912, avg=26647.20, stdev=21657.59 00:18:37.159 clat percentiles (usec): 00:18:37.159 | 1.00th=[ 6915], 5.00th=[ 9241], 10.00th=[11469], 20.00th=[13173], 00:18:37.159 | 30.00th=[15926], 40.00th=[17433], 50.00th=[18482], 60.00th=[19006], 00:18:37.159 | 70.00th=[23725], 80.00th=[31589], 90.00th=[58983], 95.00th=[84411], 00:18:37.159 | 99.00th=[93848], 99.50th=[94897], 99.90th=[95945], 99.95th=[95945], 00:18:37.159 | 99.99th=[95945] 00:18:37.159 bw ( KiB/s): min=10075, max=10696, per=17.46%, avg=10385.50, stdev=439.11, samples=2 00:18:37.159 iops : min= 2518, max= 2674, avg=2596.00, stdev=110.31, samples=2 00:18:37.159 lat (msec) : 2=0.02%, 10=7.64%, 20=56.73%, 50=26.11%, 100=9.50% 00:18:37.159 cpu : usr=4.84%, sys=6.82%, ctx=265, majf=0, minf=13 00:18:37.159 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:18:37.159 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:37.159 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:37.159 issued rwts: total=2560,2726,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:37.159 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:37.159 job1: (groupid=0, jobs=1): err= 0: pid=1959465: Sat Jul 13 08:05:28 2024 00:18:37.159 read: IOPS=4123, BW=16.1MiB/s (16.9MB/s)(16.2MiB/1008msec) 00:18:37.159 slat (usec): min=2, max=17590, avg=96.60, stdev=736.56 00:18:37.159 clat (usec): min=3755, max=58777, avg=13365.75, stdev=7414.80 00:18:37.159 lat (usec): min=3796, max=58791, avg=13462.34, stdev=7474.95 00:18:37.159 clat percentiles (usec): 00:18:37.159 | 1.00th=[ 7177], 5.00th=[ 8356], 10.00th=[ 8717], 20.00th=[ 9372], 00:18:37.159 | 30.00th=[ 9896], 40.00th=[10683], 50.00th=[11207], 60.00th=[11863], 00:18:37.159 | 70.00th=[13173], 80.00th=[15533], 90.00th=[19792], 95.00th=[25297], 00:18:37.159 | 99.00th=[47973], 99.50th=[56886], 99.90th=[58983], 99.95th=[58983], 00:18:37.159 | 99.99th=[58983] 00:18:37.159 write: IOPS=4571, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1008msec); 0 zone resets 00:18:37.159 slat (usec): min=3, max=30259, 
avg=108.21, stdev=950.91 00:18:37.159 clat (usec): min=2311, max=87180, avg=15397.93, stdev=13246.62 00:18:37.159 lat (usec): min=2319, max=87213, avg=15506.14, stdev=13325.32 00:18:37.159 clat percentiles (usec): 00:18:37.159 | 1.00th=[ 5014], 5.00th=[ 5800], 10.00th=[ 6063], 20.00th=[ 6849], 00:18:37.159 | 30.00th=[ 8455], 40.00th=[10028], 50.00th=[10945], 60.00th=[12518], 00:18:37.159 | 70.00th=[13042], 80.00th=[18482], 90.00th=[33817], 95.00th=[44827], 00:18:37.159 | 99.00th=[69731], 99.50th=[74974], 99.90th=[87557], 99.95th=[87557], 00:18:37.159 | 99.99th=[87557] 00:18:37.159 bw ( KiB/s): min=14952, max=21368, per=30.53%, avg=18160.00, stdev=4536.80, samples=2 00:18:37.159 iops : min= 3738, max= 5342, avg=4540.00, stdev=1134.20, samples=2 00:18:37.159 lat (msec) : 4=0.18%, 10=35.53%, 20=52.38%, 50=9.03%, 100=2.88% 00:18:37.159 cpu : usr=7.15%, sys=11.42%, ctx=252, majf=0, minf=17 00:18:37.159 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:18:37.159 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:37.159 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:37.159 issued rwts: total=4156,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:37.159 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:37.159 job2: (groupid=0, jobs=1): err= 0: pid=1959466: Sat Jul 13 08:05:28 2024 00:18:37.159 read: IOPS=4580, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1006msec) 00:18:37.159 slat (usec): min=2, max=20100, avg=100.37, stdev=803.27 00:18:37.159 clat (usec): min=6539, max=53409, avg=14548.29, stdev=5912.78 00:18:37.159 lat (usec): min=6543, max=53419, avg=14648.66, stdev=5954.15 00:18:37.159 clat percentiles (usec): 00:18:37.159 | 1.00th=[ 7898], 5.00th=[10683], 10.00th=[11600], 20.00th=[11863], 00:18:37.159 | 30.00th=[12125], 40.00th=[12387], 50.00th=[12518], 60.00th=[13042], 00:18:37.159 | 70.00th=[13435], 80.00th=[14484], 90.00th=[20055], 95.00th=[29492], 00:18:37.159 | 99.00th=[40633], 99.50th=[40633], 99.90th=[40633], 99.95th=[42730], 00:18:37.159 | 99.99th=[53216] 00:18:37.159 write: IOPS=4655, BW=18.2MiB/s (19.1MB/s)(18.3MiB/1006msec); 0 zone resets 00:18:37.159 slat (usec): min=3, max=12232, avg=83.38, stdev=622.64 00:18:37.159 clat (usec): min=388, max=75631, avg=12864.69, stdev=4552.22 00:18:37.159 lat (usec): min=4723, max=75642, avg=12948.07, stdev=4552.85 00:18:37.159 clat percentiles (usec): 00:18:37.159 | 1.00th=[ 6063], 5.00th=[ 7111], 10.00th=[ 8586], 20.00th=[10945], 00:18:37.159 | 30.00th=[11994], 40.00th=[12256], 50.00th=[12649], 60.00th=[13173], 00:18:37.159 | 70.00th=[13566], 80.00th=[14091], 90.00th=[16712], 95.00th=[17433], 00:18:37.159 | 99.00th=[24511], 99.50th=[37487], 99.90th=[67634], 99.95th=[76022], 00:18:37.159 | 99.99th=[76022] 00:18:37.159 bw ( KiB/s): min=16910, max=19944, per=30.98%, avg=18427.00, stdev=2145.36, samples=2 00:18:37.159 iops : min= 4227, max= 4986, avg=4606.50, stdev=536.69, samples=2 00:18:37.159 lat (usec) : 500=0.01% 00:18:37.159 lat (msec) : 10=9.77%, 20=84.06%, 50=5.94%, 100=0.22% 00:18:37.159 cpu : usr=4.18%, sys=6.57%, ctx=251, majf=0, minf=7 00:18:37.159 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:18:37.159 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:37.159 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:37.159 issued rwts: total=4608,4683,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:37.159 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:37.159 job3: 
(groupid=0, jobs=1): err= 0: pid=1959467: Sat Jul 13 08:05:28 2024 00:18:37.159 read: IOPS=2937, BW=11.5MiB/s (12.0MB/s)(12.0MiB/1049msec) 00:18:37.159 slat (usec): min=3, max=13963, avg=115.22, stdev=862.68 00:18:37.159 clat (usec): min=4436, max=51638, avg=15139.05, stdev=5705.10 00:18:37.159 lat (usec): min=4443, max=52990, avg=15254.26, stdev=5781.71 00:18:37.159 clat percentiles (usec): 00:18:37.159 | 1.00th=[ 8848], 5.00th=[ 9634], 10.00th=[10159], 20.00th=[10945], 00:18:37.159 | 30.00th=[11207], 40.00th=[11600], 50.00th=[13698], 60.00th=[14615], 00:18:37.159 | 70.00th=[16450], 80.00th=[19792], 90.00th=[23462], 95.00th=[26346], 00:18:37.159 | 99.00th=[32637], 99.50th=[32900], 99.90th=[51643], 99.95th=[51643], 00:18:37.159 | 99.99th=[51643] 00:18:37.159 write: IOPS=3416, BW=13.3MiB/s (14.0MB/s)(14.0MiB/1049msec); 0 zone resets 00:18:37.159 slat (usec): min=4, max=52293, avg=168.06, stdev=1350.75 00:18:37.159 clat (usec): min=1177, max=207457, avg=24081.60, stdev=34930.35 00:18:37.159 lat (usec): min=1193, max=207481, avg=24249.66, stdev=35153.93 00:18:37.159 clat percentiles (msec): 00:18:37.159 | 1.00th=[ 5], 5.00th=[ 7], 10.00th=[ 8], 20.00th=[ 9], 00:18:37.159 | 30.00th=[ 12], 40.00th=[ 12], 50.00th=[ 13], 60.00th=[ 14], 00:18:37.159 | 70.00th=[ 16], 80.00th=[ 18], 90.00th=[ 63], 95.00th=[ 107], 00:18:37.159 | 99.00th=[ 186], 99.50th=[ 192], 99.90th=[ 207], 99.95th=[ 207], 00:18:37.159 | 99.99th=[ 207] 00:18:37.159 bw ( KiB/s): min=10656, max=17037, per=23.27%, avg=13846.50, stdev=4512.05, samples=2 00:18:37.159 iops : min= 2664, max= 4259, avg=3461.50, stdev=1127.84, samples=2 00:18:37.159 lat (msec) : 2=0.15%, 4=0.11%, 10=16.52%, 20=65.24%, 50=11.03% 00:18:37.159 lat (msec) : 100=4.10%, 250=2.87% 00:18:37.159 cpu : usr=5.82%, sys=6.49%, ctx=277, majf=0, minf=15 00:18:37.159 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:18:37.159 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:37.159 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:37.159 issued rwts: total=3081,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:37.159 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:37.159 00:18:37.159 Run status group 0 (all jobs): 00:18:37.159 READ: bw=53.6MiB/s (56.2MB/s), 9.87MiB/s-17.9MiB/s (10.4MB/s-18.8MB/s), io=56.3MiB (59.0MB), run=1006-1049msec 00:18:37.159 WRITE: bw=58.1MiB/s (60.9MB/s), 10.5MiB/s-18.2MiB/s (11.0MB/s-19.1MB/s), io=60.9MiB (63.9MB), run=1006-1049msec 00:18:37.159 00:18:37.159 Disk stats (read/write): 00:18:37.159 nvme0n1: ios=2039/2072, merge=0/0, ticks=38980/57714, in_queue=96694, util=97.90% 00:18:37.159 nvme0n2: ios=3240/3584, merge=0/0, ticks=39996/50144, in_queue=90140, util=99.39% 00:18:37.159 nvme0n3: ios=3634/4066, merge=0/0, ticks=37308/31612, in_queue=68920, util=99.06% 00:18:37.159 nvme0n4: ios=3101/3414, merge=0/0, ticks=44966/51933, in_queue=96899, util=98.11% 00:18:37.159 08:05:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:18:37.159 08:05:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1959635 00:18:37.159 08:05:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:18:37.159 08:05:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:18:37.159 [global] 00:18:37.159 thread=1 00:18:37.159 invalidate=1 00:18:37.159 rw=read 00:18:37.159 time_based=1 00:18:37.159 runtime=10 00:18:37.159 ioengine=libaio 00:18:37.159 
direct=1 00:18:37.159 bs=4096 00:18:37.159 iodepth=1 00:18:37.159 norandommap=1 00:18:37.159 numjobs=1 00:18:37.159 00:18:37.159 [job0] 00:18:37.159 filename=/dev/nvme0n1 00:18:37.159 [job1] 00:18:37.159 filename=/dev/nvme0n2 00:18:37.159 [job2] 00:18:37.159 filename=/dev/nvme0n3 00:18:37.159 [job3] 00:18:37.159 filename=/dev/nvme0n4 00:18:37.159 Could not set queue depth (nvme0n1) 00:18:37.159 Could not set queue depth (nvme0n2) 00:18:37.159 Could not set queue depth (nvme0n3) 00:18:37.159 Could not set queue depth (nvme0n4) 00:18:37.159 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:37.159 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:37.159 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:37.159 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:37.159 fio-3.35 00:18:37.159 Starting 4 threads 00:18:40.437 08:05:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:18:40.437 08:05:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:18:40.437 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=11620352, buflen=4096 00:18:40.437 fio: pid=1959818, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:40.437 08:05:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:40.437 08:05:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:18:40.437 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=29741056, buflen=4096 00:18:40.437 fio: pid=1959817, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:40.732 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=22126592, buflen=4096 00:18:40.732 fio: pid=1959815, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:40.732 08:05:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:40.732 08:05:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:18:40.989 08:05:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:40.989 08:05:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:18:40.989 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=17195008, buflen=4096 00:18:40.989 fio: pid=1959816, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:40.989 00:18:40.989 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1959815: Sat Jul 13 08:05:32 2024 00:18:40.989 read: IOPS=1584, BW=6339KiB/s (6491kB/s)(21.1MiB/3409msec) 00:18:40.989 slat (usec): min=4, max=29668, avg=19.87, stdev=474.60 00:18:40.989 clat (usec): min=241, max=42077, avg=604.86, stdev=3435.35 00:18:40.989 lat (usec): min=248, max=42100, avg=624.73, stdev=3467.94 00:18:40.989 
clat percentiles (usec): 00:18:40.989 | 1.00th=[ 249], 5.00th=[ 260], 10.00th=[ 265], 20.00th=[ 289], 00:18:40.989 | 30.00th=[ 306], 40.00th=[ 310], 50.00th=[ 314], 60.00th=[ 318], 00:18:40.989 | 70.00th=[ 322], 80.00th=[ 326], 90.00th=[ 347], 95.00th=[ 416], 00:18:40.989 | 99.00th=[ 611], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:18:40.989 | 99.99th=[42206] 00:18:40.989 bw ( KiB/s): min= 104, max=12312, per=29.33%, avg=6244.00, stdev=4864.21, samples=6 00:18:40.989 iops : min= 26, max= 3078, avg=1561.00, stdev=1216.05, samples=6 00:18:40.989 lat (usec) : 250=1.31%, 500=96.46%, 750=1.46%, 1000=0.04% 00:18:40.989 lat (msec) : 50=0.70% 00:18:40.989 cpu : usr=0.85%, sys=1.94%, ctx=5411, majf=0, minf=1 00:18:40.989 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:40.989 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:40.989 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:40.989 issued rwts: total=5403,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:40.989 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:40.989 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1959816: Sat Jul 13 08:05:32 2024 00:18:40.989 read: IOPS=1134, BW=4537KiB/s (4646kB/s)(16.4MiB/3701msec) 00:18:40.989 slat (usec): min=4, max=8797, avg=19.32, stdev=193.29 00:18:40.989 clat (usec): min=240, max=43261, avg=853.19, stdev=4512.55 00:18:40.989 lat (usec): min=248, max=49906, avg=870.44, stdev=4537.15 00:18:40.989 clat percentiles (usec): 00:18:40.989 | 1.00th=[ 253], 5.00th=[ 269], 10.00th=[ 281], 20.00th=[ 293], 00:18:40.989 | 30.00th=[ 310], 40.00th=[ 326], 50.00th=[ 338], 60.00th=[ 355], 00:18:40.989 | 70.00th=[ 371], 80.00th=[ 392], 90.00th=[ 424], 95.00th=[ 490], 00:18:40.989 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:18:40.989 | 99.99th=[43254] 00:18:40.989 bw ( KiB/s): min= 96, max=11184, per=22.51%, avg=4792.57, stdev=5525.96, samples=7 00:18:40.989 iops : min= 24, max= 2796, avg=1198.14, stdev=1381.49, samples=7 00:18:40.989 lat (usec) : 250=0.69%, 500=95.09%, 750=2.76%, 1000=0.07% 00:18:40.989 lat (msec) : 2=0.10%, 4=0.02%, 50=1.24% 00:18:40.989 cpu : usr=0.84%, sys=2.08%, ctx=4207, majf=0, minf=1 00:18:40.989 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:40.989 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:40.989 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:40.989 issued rwts: total=4199,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:40.989 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:40.989 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1959817: Sat Jul 13 08:05:32 2024 00:18:40.989 read: IOPS=2294, BW=9177KiB/s (9397kB/s)(28.4MiB/3165msec) 00:18:40.989 slat (usec): min=4, max=15616, avg=18.92, stdev=198.76 00:18:40.989 clat (usec): min=246, max=41833, avg=410.00, stdev=1192.77 00:18:40.989 lat (usec): min=251, max=41843, avg=428.93, stdev=1209.14 00:18:40.989 clat percentiles (usec): 00:18:40.989 | 1.00th=[ 265], 5.00th=[ 277], 10.00th=[ 289], 20.00th=[ 302], 00:18:40.989 | 30.00th=[ 322], 40.00th=[ 343], 50.00th=[ 367], 60.00th=[ 379], 00:18:40.989 | 70.00th=[ 396], 80.00th=[ 429], 90.00th=[ 490], 95.00th=[ 515], 00:18:40.989 | 99.00th=[ 611], 99.50th=[ 652], 99.90th=[ 3458], 99.95th=[41157], 00:18:40.989 | 99.99th=[41681] 00:18:40.989 bw ( KiB/s): min= 
7144, max=11160, per=45.19%, avg=9620.00, stdev=1633.93, samples=6 00:18:40.989 iops : min= 1786, max= 2790, avg=2405.00, stdev=408.48, samples=6 00:18:40.989 lat (usec) : 250=0.04%, 500=92.83%, 750=6.98%, 1000=0.01% 00:18:40.989 lat (msec) : 4=0.03%, 20=0.01%, 50=0.08% 00:18:40.989 cpu : usr=1.74%, sys=4.58%, ctx=7265, majf=0, minf=1 00:18:40.989 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:40.989 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:40.989 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:40.989 issued rwts: total=7262,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:40.989 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:40.989 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1959818: Sat Jul 13 08:05:32 2024 00:18:40.989 read: IOPS=973, BW=3892KiB/s (3985kB/s)(11.1MiB/2916msec) 00:18:40.989 slat (nsec): min=4855, max=67413, avg=19046.27, stdev=11913.79 00:18:40.989 clat (usec): min=300, max=42004, avg=996.26, stdev=4838.11 00:18:40.989 lat (usec): min=307, max=42018, avg=1015.31, stdev=4837.33 00:18:40.989 clat percentiles (usec): 00:18:40.989 | 1.00th=[ 310], 5.00th=[ 322], 10.00th=[ 330], 20.00th=[ 347], 00:18:40.989 | 30.00th=[ 371], 40.00th=[ 388], 50.00th=[ 404], 60.00th=[ 424], 00:18:40.989 | 70.00th=[ 453], 80.00th=[ 482], 90.00th=[ 537], 95.00th=[ 570], 00:18:40.989 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:18:40.989 | 99.99th=[42206] 00:18:40.989 bw ( KiB/s): min= 96, max= 8432, per=21.25%, avg=4523.20, stdev=4091.80, samples=5 00:18:40.989 iops : min= 24, max= 2108, avg=1130.80, stdev=1022.95, samples=5 00:18:40.989 lat (usec) : 500=84.32%, 750=14.13% 00:18:40.989 lat (msec) : 2=0.07%, 10=0.04%, 50=1.41% 00:18:40.989 cpu : usr=0.65%, sys=2.23%, ctx=2840, majf=0, minf=1 00:18:40.989 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:40.989 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:40.989 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:40.989 issued rwts: total=2838,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:40.989 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:40.989 00:18:40.989 Run status group 0 (all jobs): 00:18:40.989 READ: bw=20.8MiB/s (21.8MB/s), 3892KiB/s-9177KiB/s (3985kB/s-9397kB/s), io=76.9MiB (80.7MB), run=2916-3701msec 00:18:40.989 00:18:40.989 Disk stats (read/write): 00:18:40.989 nvme0n1: ios=5318/0, merge=0/0, ticks=3848/0, in_queue=3848, util=97.54% 00:18:40.989 nvme0n2: ios=4236/0, merge=0/0, ticks=4461/0, in_queue=4461, util=99.79% 00:18:40.989 nvme0n3: ios=7260/0, merge=0/0, ticks=2887/0, in_queue=2887, util=96.10% 00:18:40.989 nvme0n4: ios=2886/0, merge=0/0, ticks=2936/0, in_queue=2936, util=99.90% 00:18:41.247 08:05:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:41.247 08:05:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:18:41.503 08:05:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:41.503 08:05:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:18:41.760 08:05:33 nvmf_tcp.nvmf_fio_target -- 
target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:41.760 08:05:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:18:42.016 08:05:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:42.017 08:05:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:18:42.279 08:05:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:18:42.279 08:05:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 1959635 00:18:42.279 08:05:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:18:42.279 08:05:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:42.279 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:42.279 08:05:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:42.279 08:05:33 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:18:42.279 08:05:33 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:18:42.279 08:05:33 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:42.279 08:05:33 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:18:42.279 08:05:33 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:42.279 08:05:33 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:18:42.279 08:05:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:18:42.279 08:05:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:18:42.279 nvmf hotplug test: fio failed as expected 00:18:42.279 08:05:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:42.536 08:05:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:18:42.536 08:05:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:18:42.536 08:05:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:18:42.536 08:05:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:18:42.536 08:05:34 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:18:42.536 08:05:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:42.536 08:05:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:18:42.536 08:05:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:42.536 08:05:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:18:42.536 08:05:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:42.536 08:05:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:42.536 rmmod nvme_tcp 00:18:42.536 rmmod nvme_fabrics 00:18:42.536 rmmod nvme_keyring 00:18:42.536 08:05:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:42.536 08:05:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:18:42.536 08:05:34 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:18:42.536 08:05:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 1957703 ']' 00:18:42.536 08:05:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 1957703 00:18:42.536 08:05:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 1957703 ']' 00:18:42.536 08:05:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 1957703 00:18:42.536 08:05:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:18:42.536 08:05:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:42.536 08:05:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1957703 00:18:42.794 08:05:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:42.794 08:05:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:42.794 08:05:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1957703' 00:18:42.794 killing process with pid 1957703 00:18:42.794 08:05:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 1957703 00:18:42.794 08:05:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 1957703 00:18:43.052 08:05:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:43.052 08:05:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:43.052 08:05:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:43.052 08:05:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:43.052 08:05:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:43.052 08:05:34 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:43.052 08:05:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:43.052 08:05:34 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:44.952 08:05:36 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:44.952 00:18:44.952 real 0m23.277s 00:18:44.952 user 1m19.926s 00:18:44.952 sys 0m7.435s 00:18:44.952 08:05:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:44.952 08:05:36 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.952 ************************************ 00:18:44.952 END TEST nvmf_fio_target 00:18:44.952 ************************************ 00:18:44.952 08:05:36 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:44.952 08:05:36 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:18:44.952 08:05:36 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:44.952 08:05:36 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:44.952 08:05:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:44.952 ************************************ 00:18:44.952 START TEST nvmf_bdevio 00:18:44.952 ************************************ 00:18:44.952 08:05:36 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:18:44.952 * Looking for test storage... 
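In outline, the nvmf_bdevio test that starts here does the following (a paraphrase assembled from the RPC calls and notices visible further down, not the literal target/bdevio.sh):

# Sketch of the bdevio flow; rpc.py is spdk/scripts/rpc.py run against the target.
nvmftestinit                                   # find the e810 ports, build the netns topology, assign 10.0.0.1/10.0.0.2
nvmfappstart -m 0x78                           # launch nvmf_tgt pinned to cores 3-6
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 -b Malloc0    # 64 MiB ramdisk with 512-byte blocks
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
bdevio --json <(gen_nvmf_target_json)          # run the blockdev test suite against Nvme1n1 over NVMe/TCP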
00:18:44.952 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:44.952 08:05:36 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:44.952 08:05:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:18:45.210 08:05:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:45.210 08:05:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:45.210 08:05:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:45.210 08:05:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:45.210 08:05:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:45.210 08:05:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:45.210 08:05:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:45.210 08:05:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:45.211 08:05:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:45.211 08:05:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:45.211 08:05:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:45.211 08:05:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:45.211 08:05:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:45.211 08:05:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:45.211 08:05:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:45.211 08:05:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:45.211 08:05:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:45.211 08:05:36 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:45.211 08:05:36 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:45.211 08:05:36 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:45.211 08:05:36 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:45.211 08:05:36 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:45.211 08:05:36 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:45.211 08:05:36 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:18:45.211 08:05:36 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:45.211 08:05:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:18:45.211 08:05:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:45.211 08:05:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:45.211 08:05:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:45.211 08:05:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:45.211 08:05:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:45.211 08:05:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:45.211 08:05:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:45.211 08:05:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:45.211 08:05:36 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:45.211 08:05:36 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:45.211 08:05:36 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:18:45.211 08:05:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:45.211 08:05:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:45.211 08:05:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:45.211 08:05:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:45.211 08:05:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:45.211 08:05:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:45.211 08:05:36 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:18:45.211 08:05:36 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:45.211 08:05:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:45.211 08:05:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:45.211 08:05:36 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:18:45.211 08:05:36 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:47.111 08:05:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:47.111 08:05:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:18:47.111 08:05:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:47.111 08:05:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:47.111 08:05:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:47.111 08:05:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:47.111 08:05:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:47.111 08:05:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:18:47.111 08:05:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:47.111 08:05:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:18:47.111 08:05:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:18:47.111 08:05:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:18:47.111 08:05:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:18:47.111 08:05:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:18:47.111 08:05:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:18:47.111 08:05:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:47.111 08:05:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:47.111 08:05:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:47.111 08:05:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:47.111 08:05:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:47.112 08:05:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:47.112 08:05:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:47.112 08:05:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:47.112 08:05:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:47.112 08:05:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:47.112 08:05:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:47.112 08:05:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:47.112 08:05:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:47.112 08:05:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:47.112 08:05:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:47.112 08:05:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:47.112 08:05:38 nvmf_tcp.nvmf_bdevio -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:47.112 08:05:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:47.112 08:05:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:47.112 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:47.112 08:05:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:47.112 08:05:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:47.112 08:05:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:47.112 08:05:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:47.112 08:05:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:47.112 08:05:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:47.112 08:05:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:47.112 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:47.112 08:05:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:47.112 08:05:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:47.112 08:05:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:47.112 08:05:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:47.112 08:05:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:47.112 08:05:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:47.112 08:05:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:47.112 08:05:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:47.112 08:05:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:47.112 08:05:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:47.112 08:05:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:47.112 08:05:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:47.112 08:05:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:47.112 08:05:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:47.112 08:05:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:47.112 08:05:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:47.112 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:47.112 08:05:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:47.112 08:05:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:47.112 08:05:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:47.112 08:05:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:47.112 08:05:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:47.112 08:05:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:47.112 08:05:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:47.112 08:05:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:47.112 08:05:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:47.112 
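The two ice ports found above are evidently cabled to each other (the cross-namespace pings below succeed), so the test splits them across network namespaces to get real NVMe/TCP traffic on a single host. Condensed, the nvmf_tcp_init steps that follow are:

# cvl_0_0 moves into its own namespace and becomes the target side;
# cvl_0_1 stays in the default namespace as the initiator side.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT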
Found net devices under 0000:0a:00.1: cvl_0_1 00:18:47.112 08:05:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:47.112 08:05:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:47.112 08:05:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:18:47.112 08:05:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:47.112 08:05:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:47.112 08:05:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:47.112 08:05:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:47.112 08:05:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:47.112 08:05:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:47.112 08:05:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:47.112 08:05:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:47.112 08:05:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:47.112 08:05:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:47.112 08:05:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:47.112 08:05:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:47.112 08:05:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:47.112 08:05:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:47.112 08:05:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:47.112 08:05:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:47.112 08:05:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:47.112 08:05:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:47.112 08:05:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:47.112 08:05:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:47.112 08:05:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:47.112 08:05:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:47.112 08:05:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:47.112 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:47.112 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.215 ms 00:18:47.112 00:18:47.112 --- 10.0.0.2 ping statistics --- 00:18:47.112 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:47.112 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:18:47.112 08:05:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:47.112 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:47.112 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:18:47.112 00:18:47.112 --- 10.0.0.1 ping statistics --- 00:18:47.112 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:47.112 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:18:47.112 08:05:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:47.112 08:05:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:18:47.112 08:05:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:47.112 08:05:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:47.112 08:05:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:47.112 08:05:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:47.112 08:05:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:47.112 08:05:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:47.112 08:05:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:47.370 08:05:38 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:18:47.370 08:05:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:47.370 08:05:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:47.370 08:05:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:47.370 08:05:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=1962433 00:18:47.370 08:05:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:18:47.370 08:05:38 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 1962433 00:18:47.371 08:05:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 1962433 ']' 00:18:47.371 08:05:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:47.371 08:05:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:47.371 08:05:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:47.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:47.371 08:05:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:47.371 08:05:38 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:47.371 [2024-07-13 08:05:38.909839] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:18:47.371 [2024-07-13 08:05:38.909931] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:47.371 EAL: No free 2048 kB hugepages reported on node 1 00:18:47.371 [2024-07-13 08:05:38.983082] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:47.371 [2024-07-13 08:05:39.079662] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:47.371 [2024-07-13 08:05:39.079715] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
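The -m 0x78 mask handed to nvmf_tgt above decodes to CPU cores 3 through 6, which is why exactly four reactor threads report in below. A quick way to read any such mask:

# 0x78 = 0b0111_1000, so bits 3,4,5,6 are set: one reactor per core 3-6.
for i in $(seq 0 7); do (( (0x78 >> i) & 1 )) && echo "core $i"; done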
00:18:47.371 [2024-07-13 08:05:39.079732] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:47.371 [2024-07-13 08:05:39.079745] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:47.371 [2024-07-13 08:05:39.079757] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:47.371 [2024-07-13 08:05:39.079840] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:18:47.371 [2024-07-13 08:05:39.079895] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:18:47.371 [2024-07-13 08:05:39.079948] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:18:47.371 [2024-07-13 08:05:39.079951] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:18:48.306 08:05:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:48.306 08:05:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:18:48.306 08:05:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:48.306 08:05:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:48.306 08:05:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:48.306 08:05:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:48.306 08:05:39 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:48.306 08:05:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.306 08:05:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:48.306 [2024-07-13 08:05:39.901948] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:48.306 08:05:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.306 08:05:39 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:48.306 08:05:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.306 08:05:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:48.306 Malloc0 00:18:48.306 08:05:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.306 08:05:39 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:48.306 08:05:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.306 08:05:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:48.306 08:05:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.306 08:05:39 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:48.306 08:05:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.306 08:05:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:48.306 08:05:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.306 08:05:39 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:48.306 08:05:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.306 08:05:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
00:18:48.306 [2024-07-13 08:05:39.955207] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:48.306 08:05:39 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.306 08:05:39 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:18:48.306 08:05:39 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:18:48.306 08:05:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:18:48.306 08:05:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:18:48.306 08:05:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:48.306 08:05:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:48.306 { 00:18:48.306 "params": { 00:18:48.306 "name": "Nvme$subsystem", 00:18:48.306 "trtype": "$TEST_TRANSPORT", 00:18:48.306 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:48.306 "adrfam": "ipv4", 00:18:48.306 "trsvcid": "$NVMF_PORT", 00:18:48.306 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:48.306 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:48.306 "hdgst": ${hdgst:-false}, 00:18:48.306 "ddgst": ${ddgst:-false} 00:18:48.306 }, 00:18:48.307 "method": "bdev_nvme_attach_controller" 00:18:48.307 } 00:18:48.307 EOF 00:18:48.307 )") 00:18:48.307 08:05:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:18:48.307 08:05:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:18:48.307 08:05:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:18:48.307 08:05:39 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:48.307 "params": { 00:18:48.307 "name": "Nvme1", 00:18:48.307 "trtype": "tcp", 00:18:48.307 "traddr": "10.0.0.2", 00:18:48.307 "adrfam": "ipv4", 00:18:48.307 "trsvcid": "4420", 00:18:48.307 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:48.307 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:48.307 "hdgst": false, 00:18:48.307 "ddgst": false 00:18:48.307 }, 00:18:48.307 "method": "bdev_nvme_attach_controller" 00:18:48.307 }' 00:18:48.307 [2024-07-13 08:05:40.001610] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
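The rendered bdev_nvme_attach_controller parameters above are how bdevio's userspace initiator reaches the target. For orientation only, a kernel initiator would connect to the same subsystem along these lines (nvme-cli is not used anywhere in this test):

# Illustrative kernel-side equivalent of the JSON attach parameters above.
nvme connect -t tcp -a 10.0.0.2 -s 4420 \
     -n nqn.2016-06.io.spdk:cnode1 --hostnqn=nqn.2016-06.io.spdk:host1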
00:18:48.307 [2024-07-13 08:05:40.001695] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1962587 ] 00:18:48.307 EAL: No free 2048 kB hugepages reported on node 1 00:18:48.565 [2024-07-13 08:05:40.066493] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:48.565 [2024-07-13 08:05:40.155435] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:48.565 [2024-07-13 08:05:40.155483] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:48.565 [2024-07-13 08:05:40.155486] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:48.823 I/O targets: 00:18:48.823 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:18:48.823 00:18:48.823 00:18:48.823 CUnit - A unit testing framework for C - Version 2.1-3 00:18:48.823 http://cunit.sourceforge.net/ 00:18:48.823 00:18:48.823 00:18:48.823 Suite: bdevio tests on: Nvme1n1 00:18:48.823 Test: blockdev write read block ...passed 00:18:48.823 Test: blockdev write zeroes read block ...passed 00:18:48.823 Test: blockdev write zeroes read no split ...passed 00:18:49.080 Test: blockdev write zeroes read split ...passed 00:18:49.080 Test: blockdev write zeroes read split partial ...passed 00:18:49.080 Test: blockdev reset ...[2024-07-13 08:05:40.653774] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:49.080 [2024-07-13 08:05:40.653890] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e77a60 (9): Bad file descriptor 00:18:49.080 [2024-07-13 08:05:40.797801] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:18:49.080 passed 00:18:49.339 Test: blockdev write read 8 blocks ...passed 00:18:49.339 Test: blockdev write read size > 128k ...passed 00:18:49.339 Test: blockdev write read invalid size ...passed 00:18:49.339 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:49.339 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:49.339 Test: blockdev write read max offset ...passed 00:18:49.339 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:49.339 Test: blockdev writev readv 8 blocks ...passed 00:18:49.339 Test: blockdev writev readv 30 x 1block ...passed 00:18:49.597 Test: blockdev writev readv block ...passed 00:18:49.597 Test: blockdev writev readv size > 128k ...passed 00:18:49.597 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:49.597 Test: blockdev comparev and writev ...[2024-07-13 08:05:41.095803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:49.597 [2024-07-13 08:05:41.095840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:49.597 [2024-07-13 08:05:41.095873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:49.597 [2024-07-13 08:05:41.095893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:49.597 [2024-07-13 08:05:41.096312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:49.597 [2024-07-13 08:05:41.096338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:49.597 [2024-07-13 08:05:41.096360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:49.597 [2024-07-13 08:05:41.096376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:49.597 [2024-07-13 08:05:41.096765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:49.597 [2024-07-13 08:05:41.096788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:49.597 [2024-07-13 08:05:41.096811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:49.597 [2024-07-13 08:05:41.096827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:49.597 [2024-07-13 08:05:41.097232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:49.597 [2024-07-13 08:05:41.097256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:49.597 [2024-07-13 08:05:41.097277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:49.597 [2024-07-13 08:05:41.097294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:49.597 passed 00:18:49.597 Test: blockdev nvme passthru rw ...passed 00:18:49.597 Test: blockdev nvme passthru vendor specific ...[2024-07-13 08:05:41.181310] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:49.597 [2024-07-13 08:05:41.181385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:49.597 [2024-07-13 08:05:41.181643] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:49.597 [2024-07-13 08:05:41.181668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:49.597 [2024-07-13 08:05:41.181842] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:49.597 [2024-07-13 08:05:41.181873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:49.597 [2024-07-13 08:05:41.182054] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:49.597 [2024-07-13 08:05:41.182077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:49.597 passed 00:18:49.597 Test: blockdev nvme admin passthru ...passed 00:18:49.597 Test: blockdev copy ...passed 00:18:49.597 00:18:49.597 Run Summary: Type Total Ran Passed Failed Inactive 00:18:49.597 suites 1 1 n/a 0 0 00:18:49.597 tests 23 23 23 0 0 00:18:49.597 asserts 152 152 152 0 n/a 00:18:49.597 00:18:49.597 Elapsed time = 1.575 seconds 00:18:49.855 08:05:41 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:49.855 08:05:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.855 08:05:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:49.855 08:05:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.855 08:05:41 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:18:49.855 08:05:41 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:18:49.855 08:05:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:49.855 08:05:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:18:49.855 08:05:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:49.855 08:05:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:18:49.855 08:05:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:49.855 08:05:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:49.855 rmmod nvme_tcp 00:18:49.855 rmmod nvme_fabrics 00:18:49.855 rmmod nvme_keyring 00:18:49.855 08:05:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:49.855 08:05:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:18:49.855 08:05:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:18:49.855 08:05:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 1962433 ']' 00:18:49.855 08:05:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 1962433 00:18:49.855 08:05:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 
1962433 ']' 00:18:49.855 08:05:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 1962433 00:18:49.855 08:05:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:18:49.855 08:05:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:49.855 08:05:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1962433 00:18:49.855 08:05:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:18:49.855 08:05:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:18:49.855 08:05:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1962433' 00:18:49.855 killing process with pid 1962433 00:18:49.855 08:05:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 1962433 00:18:49.855 08:05:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 1962433 00:18:50.113 08:05:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:50.113 08:05:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:50.113 08:05:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:50.114 08:05:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:50.114 08:05:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:50.114 08:05:41 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:50.114 08:05:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:50.114 08:05:41 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:52.643 08:05:43 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:52.643 00:18:52.643 real 0m7.195s 00:18:52.643 user 0m14.474s 00:18:52.643 sys 0m2.133s 00:18:52.643 08:05:43 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:52.643 08:05:43 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:52.643 ************************************ 00:18:52.643 END TEST nvmf_bdevio 00:18:52.643 ************************************ 00:18:52.643 08:05:43 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:52.643 08:05:43 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:18:52.643 08:05:43 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:52.643 08:05:43 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:52.643 08:05:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:52.643 ************************************ 00:18:52.643 START TEST nvmf_auth_target 00:18:52.643 ************************************ 00:18:52.643 08:05:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:18:52.643 * Looking for test storage... 
00:18:52.643 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:52.643 08:05:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:52.643 08:05:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:18:52.643 08:05:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:52.643 08:05:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:52.643 08:05:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:52.643 08:05:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:52.643 08:05:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:52.643 08:05:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:52.643 08:05:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:52.643 08:05:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:52.643 08:05:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:52.643 08:05:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:52.643 08:05:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:52.643 08:05:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:52.643 08:05:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:52.643 08:05:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:52.643 08:05:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:52.643 08:05:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:52.643 08:05:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:52.643 08:05:43 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:52.643 08:05:43 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:52.643 08:05:43 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:52.643 08:05:43 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:52.643 08:05:43 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:52.643 08:05:43 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:52.643 08:05:43 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:18:52.643 08:05:43 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:52.643 08:05:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:18:52.643 08:05:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:52.643 08:05:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:52.643 08:05:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:52.643 08:05:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:52.643 08:05:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:52.643 08:05:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:52.643 08:05:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:52.643 08:05:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:52.643 08:05:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:18:52.643 08:05:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:18:52.643 08:05:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:18:52.643 08:05:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:52.643 08:05:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:18:52.643 08:05:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:18:52.643 08:05:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:18:52.643 08:05:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:18:52.643 08:05:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:52.643 08:05:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:52.643 08:05:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:52.643 08:05:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:52.643 08:05:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:52.643 08:05:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:52.643 08:05:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:52.643 08:05:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:52.643 08:05:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:52.643 08:05:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:52.644 08:05:43 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:18:52.644 08:05:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.545 08:05:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:54.545 08:05:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:18:54.545 08:05:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:54.545 08:05:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:54.545 08:05:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:54.545 08:05:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:54.545 08:05:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:54.545 08:05:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:18:54.545 08:05:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:54.545 08:05:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:18:54.545 08:05:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:18:54.545 08:05:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:18:54.545 08:05:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:18:54.545 08:05:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:18:54.545 08:05:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:18:54.545 08:05:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:54.545 08:05:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:54.545 08:05:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:54.545 08:05:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:54.545 08:05:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:54.545 08:05:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:54.545 08:05:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:54.545 08:05:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:54.545 08:05:45 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:54.545 08:05:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:54.545 08:05:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:54.545 08:05:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:54.545 08:05:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:54.545 08:05:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:54.545 08:05:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:54.545 08:05:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:54.545 08:05:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:54.545 08:05:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:54.545 08:05:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:54.545 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:54.545 08:05:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:54.545 08:05:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:54.545 08:05:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:54.545 08:05:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:54.545 08:05:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:54.545 08:05:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:54.545 08:05:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:54.545 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:54.545 08:05:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:54.545 08:05:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:54.545 08:05:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:54.546 08:05:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:54.546 08:05:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:54.546 08:05:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:54.546 08:05:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:54.546 08:05:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:54.546 08:05:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:54.546 08:05:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:54.546 08:05:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:54.546 08:05:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:54.546 08:05:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:54.546 08:05:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:54.546 08:05:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:54.546 08:05:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: 
cvl_0_0' 00:18:54.546 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:54.546 08:05:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:54.546 08:05:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:54.546 08:05:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:54.546 08:05:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:54.546 08:05:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:54.546 08:05:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:54.546 08:05:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:54.546 08:05:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:54.546 08:05:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:54.546 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:54.546 08:05:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:54.546 08:05:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:54.546 08:05:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:18:54.546 08:05:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:54.546 08:05:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:54.546 08:05:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:54.546 08:05:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:54.546 08:05:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:54.546 08:05:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:54.546 08:05:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:54.546 08:05:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:54.546 08:05:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:54.546 08:05:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:54.546 08:05:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:54.546 08:05:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:54.546 08:05:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:54.546 08:05:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:54.546 08:05:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:54.546 08:05:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:54.546 08:05:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:54.546 08:05:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:54.546 08:05:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:54.546 08:05:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:54.546 08:05:45 nvmf_tcp.nvmf_auth_target 
-- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:54.546 08:05:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:54.546 08:05:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:54.546 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:54.546 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.169 ms 00:18:54.546 00:18:54.546 --- 10.0.0.2 ping statistics --- 00:18:54.546 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:54.546 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:18:54.546 08:05:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:54.546 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:54.546 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.129 ms 00:18:54.546 00:18:54.546 --- 10.0.0.1 ping statistics --- 00:18:54.546 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:54.546 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:18:54.546 08:05:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:54.546 08:05:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:18:54.546 08:05:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:54.546 08:05:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:54.546 08:05:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:54.546 08:05:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:54.546 08:05:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:54.546 08:05:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:54.546 08:05:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:54.546 08:05:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:18:54.546 08:05:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:54.546 08:05:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:54.546 08:05:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.546 08:05:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=1964658 00:18:54.546 08:05:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:18:54.546 08:05:45 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 1964658 00:18:54.546 08:05:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1964658 ']' 00:18:54.546 08:05:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:54.546 08:05:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:54.546 08:05:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
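The nvmf_tcp_init sequence above is the whole network fixture for the auth test: the second ice port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, the first (cvl_0_0) moves into the cvl_0_0_ns_spdk namespace as the target at 10.0.0.2, and the two pings prove reachability in both directions before any NVMe traffic flows. Condensed to just the commands the log executes (run as root):

# Condensed from the nvmf_tcp_init calls above.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port leaves root ns
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator
# nvmf_tgt itself is then launched under "ip netns exec cvl_0_0_ns_spdk",
# which is why NVMF_APP gets prefixed with NVMF_TARGET_NS_CMD above.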
00:18:54.546 08:05:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:54.546 08:05:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.805 08:05:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:54.805 08:05:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:18:54.805 08:05:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:54.805 08:05:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:54.805 08:05:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.805 08:05:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:54.805 08:05:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=1964680 00:18:54.805 08:05:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:18:54.805 08:05:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:54.805 08:05:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:18:54.805 08:05:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:54.805 08:05:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:54.805 08:05:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:54.805 08:05:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:18:54.805 08:05:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:18:54.805 08:05:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:54.805 08:05:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=ce11f2c1062ae894415abd569e613c359f9a206f55d56b77 00:18:54.805 08:05:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:18:54.805 08:05:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.1V9 00:18:54.805 08:05:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key ce11f2c1062ae894415abd569e613c359f9a206f55d56b77 0 00:18:54.805 08:05:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 ce11f2c1062ae894415abd569e613c359f9a206f55d56b77 0 00:18:54.805 08:05:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:54.805 08:05:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:54.805 08:05:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=ce11f2c1062ae894415abd569e613c359f9a206f55d56b77 00:18:54.805 08:05:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:18:54.805 08:05:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:54.805 08:05:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.1V9 00:18:54.805 08:05:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.1V9 00:18:54.805 08:05:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.1V9 00:18:54.805 08:05:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:18:54.805 08:05:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file 
key 00:18:54.805 08:05:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:54.805 08:05:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:54.806 08:05:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:18:54.806 08:05:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:18:54.806 08:05:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:54.806 08:05:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=0ab20047d57c9d329b2c76d00ca3a0aa1a9896c05b82baeba983e7fde2fe335a 00:18:54.806 08:05:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:18:54.806 08:05:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.gn3 00:18:54.806 08:05:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 0ab20047d57c9d329b2c76d00ca3a0aa1a9896c05b82baeba983e7fde2fe335a 3 00:18:54.806 08:05:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 0ab20047d57c9d329b2c76d00ca3a0aa1a9896c05b82baeba983e7fde2fe335a 3 00:18:54.806 08:05:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:54.806 08:05:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:54.806 08:05:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=0ab20047d57c9d329b2c76d00ca3a0aa1a9896c05b82baeba983e7fde2fe335a 00:18:54.806 08:05:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:18:54.806 08:05:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:54.806 08:05:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.gn3 00:18:54.806 08:05:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.gn3 00:18:54.806 08:05:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.gn3 00:18:54.806 08:05:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:18:54.806 08:05:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:54.806 08:05:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:54.806 08:05:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:54.806 08:05:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:18:54.806 08:05:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:18:54.806 08:05:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:54.806 08:05:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=a4950aee2ee4cb54f8a9ed5729182bf4 00:18:54.806 08:05:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:18:54.806 08:05:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.0Cx 00:18:54.806 08:05:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key a4950aee2ee4cb54f8a9ed5729182bf4 1 00:18:54.806 08:05:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 a4950aee2ee4cb54f8a9ed5729182bf4 1 00:18:54.806 08:05:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:54.806 08:05:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:54.806 08:05:46 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@704 -- # key=a4950aee2ee4cb54f8a9ed5729182bf4 00:18:54.806 08:05:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:18:54.806 08:05:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:54.806 08:05:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.0Cx 00:18:54.806 08:05:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.0Cx 00:18:54.806 08:05:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.0Cx 00:18:54.806 08:05:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:18:54.806 08:05:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:54.806 08:05:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:54.806 08:05:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:54.806 08:05:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:18:54.806 08:05:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:18:54.806 08:05:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:54.806 08:05:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=975f9e5c0f65cc6fa00f858c752fab414acf9ae5d41ffa8b 00:18:54.806 08:05:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:18:54.806 08:05:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.GXs 00:18:54.806 08:05:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 975f9e5c0f65cc6fa00f858c752fab414acf9ae5d41ffa8b 2 00:18:54.806 08:05:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 975f9e5c0f65cc6fa00f858c752fab414acf9ae5d41ffa8b 2 00:18:54.806 08:05:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:54.806 08:05:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:54.806 08:05:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=975f9e5c0f65cc6fa00f858c752fab414acf9ae5d41ffa8b 00:18:54.806 08:05:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:18:54.806 08:05:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:54.806 08:05:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.GXs 00:18:54.806 08:05:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.GXs 00:18:54.806 08:05:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.GXs 00:18:54.806 08:05:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:18:54.806 08:05:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:54.806 08:05:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:54.806 08:05:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:54.806 08:05:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:18:54.806 08:05:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:18:54.806 08:05:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:54.806 08:05:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=39c8c6a208d19e47e5e7a2cfee878f717b76f01d1d1ac8e2 00:18:55.096 
08:05:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:18:55.096 08:05:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.h3D 00:18:55.096 08:05:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 39c8c6a208d19e47e5e7a2cfee878f717b76f01d1d1ac8e2 2 00:18:55.096 08:05:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 39c8c6a208d19e47e5e7a2cfee878f717b76f01d1d1ac8e2 2 00:18:55.096 08:05:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:55.096 08:05:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:55.096 08:05:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=39c8c6a208d19e47e5e7a2cfee878f717b76f01d1d1ac8e2 00:18:55.096 08:05:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:18:55.096 08:05:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:55.096 08:05:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.h3D 00:18:55.096 08:05:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.h3D 00:18:55.096 08:05:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.h3D 00:18:55.096 08:05:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:18:55.097 08:05:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:55.097 08:05:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:55.097 08:05:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:55.097 08:05:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:18:55.097 08:05:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:18:55.097 08:05:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:55.097 08:05:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=a4249edf09069822ed360247123b3a6c 00:18:55.097 08:05:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:18:55.097 08:05:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.8GK 00:18:55.097 08:05:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key a4249edf09069822ed360247123b3a6c 1 00:18:55.097 08:05:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 a4249edf09069822ed360247123b3a6c 1 00:18:55.097 08:05:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:55.097 08:05:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:55.097 08:05:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=a4249edf09069822ed360247123b3a6c 00:18:55.097 08:05:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:18:55.097 08:05:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:55.097 08:05:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.8GK 00:18:55.097 08:05:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.8GK 00:18:55.097 08:05:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.8GK 00:18:55.097 08:05:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:18:55.097 08:05:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local 
digest len file key 00:18:55.097 08:05:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:55.097 08:05:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:55.097 08:05:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:18:55.097 08:05:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:18:55.097 08:05:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:55.097 08:05:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=a8d2c4689a3e1cdf1fca9908d3e190c042eafe0a2420750a6326c27440613978 00:18:55.097 08:05:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:18:55.097 08:05:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.aBO 00:18:55.097 08:05:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key a8d2c4689a3e1cdf1fca9908d3e190c042eafe0a2420750a6326c27440613978 3 00:18:55.097 08:05:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 a8d2c4689a3e1cdf1fca9908d3e190c042eafe0a2420750a6326c27440613978 3 00:18:55.097 08:05:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:55.097 08:05:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:55.097 08:05:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=a8d2c4689a3e1cdf1fca9908d3e190c042eafe0a2420750a6326c27440613978 00:18:55.097 08:05:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:18:55.097 08:05:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:55.097 08:05:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.aBO 00:18:55.097 08:05:46 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.aBO 00:18:55.097 08:05:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.aBO 00:18:55.097 08:05:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:18:55.097 08:05:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 1964658 00:18:55.097 08:05:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1964658 ']' 00:18:55.097 08:05:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:55.097 08:05:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:55.097 08:05:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:55.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
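Each gen_dhchap_key <digest> <len> call above is the same three steps: draw len/2 random bytes as a hex string with xxd, frame that string as a DHHC-1 secret (digest index 0-3 per the ['null']=0 ... ['sha512']=3 map in the log) via the inline python, and store it mode 0600 under /tmp. A sketch of one such key follows; the framing detail is an assumption, inferred from the DHHC-1:00:... strings visible below, whose base64 payload decodes to the hex secret followed by its little-endian CRC-32:

# Sketch of gen_dhchap_key null 48 (digest index 0 -> "00" in the prefix).
# ASSUMPTION: DHHC-1 payload = base64(secret_ascii + crc32(secret) as a
# little-endian uint32), inferred from the secrets printed later in this log.
key=$(xxd -p -c0 -l 24 /dev/urandom)      # 24 random bytes -> 48 hex chars
file=$(mktemp -t spdk.key-null.XXX)
python3 - "$key" > "$file" <<'PY'
import base64, struct, sys, zlib
secret = sys.argv[1].encode()
crc = struct.pack('<I', zlib.crc32(secret) & 0xffffffff)
print('DHHC-1:00:%s:' % base64.b64encode(secret + crc).decode())
PY
chmod 0600 "$file"
echo "$file"                              # path then registered via keyring_file_add_key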
00:18:55.097 08:05:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:55.097 08:05:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.355 08:05:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:55.355 08:05:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:18:55.355 08:05:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 1964680 /var/tmp/host.sock 00:18:55.355 08:05:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1964680 ']' 00:18:55.355 08:05:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:18:55.355 08:05:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:55.355 08:05:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:18:55.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:18:55.355 08:05:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:55.355 08:05:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.613 08:05:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:55.613 08:05:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:18:55.613 08:05:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:18:55.613 08:05:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.613 08:05:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.613 08:05:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.613 08:05:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:18:55.613 08:05:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.1V9 00:18:55.613 08:05:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.613 08:05:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.613 08:05:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.613 08:05:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.1V9 00:18:55.613 08:05:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.1V9 00:18:55.871 08:05:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.gn3 ]] 00:18:55.871 08:05:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.gn3 00:18:55.871 08:05:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.871 08:05:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.871 08:05:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.871 08:05:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.gn3 00:18:55.871 08:05:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.gn3 00:18:56.129 08:05:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:18:56.129 08:05:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.0Cx 00:18:56.129 08:05:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.129 08:05:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.129 08:05:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.129 08:05:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.0Cx 00:18:56.129 08:05:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.0Cx 00:18:56.387 08:05:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.GXs ]] 00:18:56.387 08:05:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.GXs 00:18:56.387 08:05:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.387 08:05:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.387 08:05:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.387 08:05:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.GXs 00:18:56.387 08:05:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.GXs 00:18:56.645 08:05:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:18:56.645 08:05:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.h3D 00:18:56.645 08:05:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.645 08:05:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.645 08:05:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.645 08:05:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.h3D 00:18:56.645 08:05:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.h3D 00:18:56.903 08:05:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.8GK ]] 00:18:56.903 08:05:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.8GK 00:18:56.903 08:05:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.903 08:05:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.903 08:05:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.903 08:05:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.8GK 00:18:56.904 08:05:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 
/tmp/spdk.key-sha256.8GK 00:18:57.161 08:05:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:18:57.161 08:05:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.aBO 00:18:57.161 08:05:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:57.161 08:05:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.161 08:05:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:57.161 08:05:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.aBO 00:18:57.161 08:05:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.aBO 00:18:57.419 08:05:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:18:57.419 08:05:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:18:57.419 08:05:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:57.419 08:05:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:57.419 08:05:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:57.419 08:05:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:57.677 08:05:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:18:57.677 08:05:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:57.677 08:05:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:57.677 08:05:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:57.677 08:05:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:57.677 08:05:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:57.677 08:05:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:57.677 08:05:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:57.677 08:05:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.677 08:05:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:57.677 08:05:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:57.677 08:05:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:57.935 00:18:57.935 08:05:49 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:57.935 08:05:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:57.935 08:05:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:58.193 08:05:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:58.193 08:05:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:58.193 08:05:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.193 08:05:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.193 08:05:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.193 08:05:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:58.193 { 00:18:58.193 "cntlid": 1, 00:18:58.193 "qid": 0, 00:18:58.193 "state": "enabled", 00:18:58.193 "thread": "nvmf_tgt_poll_group_000", 00:18:58.193 "listen_address": { 00:18:58.193 "trtype": "TCP", 00:18:58.193 "adrfam": "IPv4", 00:18:58.193 "traddr": "10.0.0.2", 00:18:58.193 "trsvcid": "4420" 00:18:58.193 }, 00:18:58.193 "peer_address": { 00:18:58.193 "trtype": "TCP", 00:18:58.193 "adrfam": "IPv4", 00:18:58.193 "traddr": "10.0.0.1", 00:18:58.193 "trsvcid": "47418" 00:18:58.193 }, 00:18:58.193 "auth": { 00:18:58.193 "state": "completed", 00:18:58.193 "digest": "sha256", 00:18:58.193 "dhgroup": "null" 00:18:58.193 } 00:18:58.193 } 00:18:58.193 ]' 00:18:58.193 08:05:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:58.193 08:05:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:58.193 08:05:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:58.194 08:05:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:58.194 08:05:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:58.451 08:05:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:58.452 08:05:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:58.452 08:05:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:58.709 08:05:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:Y2UxMWYyYzEwNjJhZTg5NDQxNWFiZDU2OWU2MTNjMzU5ZjlhMjA2ZjU1ZDU2Yjc3INMyEQ==: --dhchap-ctrl-secret DHHC-1:03:MGFiMjAwNDdkNTdjOWQzMjliMmM3NmQwMGNhM2EwYWExYTk4OTZjMDViODJiYWViYTk4M2U3ZmRlMmZlMzM1YVwGEV4=: 00:18:59.641 08:05:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:59.641 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:59.641 08:05:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:59.641 08:05:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.641 08:05:51 
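The trace above has just completed one full authentication round for key0: pin the host initiator to a single digest/dhgroup pair, authorize the host NQN on the subsystem with that key, attach a controller (which runs the DH-HMAC-CHAP handshake), verify, and tear down. A minimal sketch of that round, condensed from the trace and using only the RPCs and paths that actually appear in it (it omits the kernel-initiator connect, which is exercised separately):

#!/usr/bin/env bash
# One DH-HMAC-CHAP round, condensed from the trace above (paths as used by this job).
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
HOST_SOCK=/var/tmp/host.sock        # second SPDK application acting as the host
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55

# Restrict the host initiator to one digest/dhgroup combination for this round.
"$RPC" -s "$HOST_SOCK" bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null

# Authorize the host on the target subsystem, bound to key0 (bidirectional via ckey0).
"$RPC" nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Attaching the controller performs the AUTH negotiation on the new admin queue.
"$RPC" -s "$HOST_SOCK" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Confirm the controller exists, then tear down before the next key/dhgroup round.
"$RPC" -s "$HOST_SOCK" bdev_nvme_get_controllers
"$RPC" -s "$HOST_SOCK" bdev_nvme_detach_controller nvme0
"$RPC" nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"

Note that key0/ckey0 must first be registered on both applications with keyring_file_add_key, as the start of this section shows.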
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.641 08:05:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.642 08:05:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:59.642 08:05:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:59.642 08:05:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:59.899 08:05:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:18:59.899 08:05:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:59.899 08:05:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:59.899 08:05:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:59.899 08:05:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:59.899 08:05:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:59.900 08:05:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:59.900 08:05:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.900 08:05:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.900 08:05:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.900 08:05:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:59.900 08:05:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:00.161 00:19:00.161 08:05:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:00.161 08:05:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:00.161 08:05:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:00.418 08:05:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:00.418 08:05:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:00.418 08:05:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:00.418 08:05:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.418 08:05:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:00.418 08:05:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:00.418 { 00:19:00.418 "cntlid": 3, 00:19:00.418 "qid": 0, 00:19:00.418 
"state": "enabled", 00:19:00.418 "thread": "nvmf_tgt_poll_group_000", 00:19:00.418 "listen_address": { 00:19:00.418 "trtype": "TCP", 00:19:00.418 "adrfam": "IPv4", 00:19:00.418 "traddr": "10.0.0.2", 00:19:00.418 "trsvcid": "4420" 00:19:00.418 }, 00:19:00.418 "peer_address": { 00:19:00.418 "trtype": "TCP", 00:19:00.418 "adrfam": "IPv4", 00:19:00.418 "traddr": "10.0.0.1", 00:19:00.418 "trsvcid": "47452" 00:19:00.418 }, 00:19:00.418 "auth": { 00:19:00.418 "state": "completed", 00:19:00.418 "digest": "sha256", 00:19:00.418 "dhgroup": "null" 00:19:00.418 } 00:19:00.418 } 00:19:00.418 ]' 00:19:00.418 08:05:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:00.418 08:05:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:00.418 08:05:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:00.418 08:05:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:00.418 08:05:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:00.418 08:05:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:00.418 08:05:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:00.418 08:05:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:00.674 08:05:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YTQ5NTBhZWUyZWU0Y2I1NGY4YTllZDU3MjkxODJiZjRSO1eA: --dhchap-ctrl-secret DHHC-1:02:OTc1ZjllNWMwZjY1Y2M2ZmEwMGY4NThjNzUyZmFiNDE0YWNmOWFlNWQ0MWZmYThinZpsJw==: 00:19:01.607 08:05:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:01.607 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:01.607 08:05:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:01.607 08:05:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.607 08:05:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.607 08:05:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.607 08:05:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:01.607 08:05:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:01.607 08:05:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:01.865 08:05:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:19:01.865 08:05:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:01.865 08:05:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:01.865 08:05:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:01.865 08:05:53 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:01.865 08:05:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:01.865 08:05:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:01.865 08:05:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.865 08:05:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.865 08:05:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.865 08:05:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:01.865 08:05:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:02.431 00:19:02.431 08:05:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:02.431 08:05:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:02.431 08:05:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:02.431 08:05:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:02.431 08:05:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:02.431 08:05:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:02.431 08:05:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.431 08:05:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:02.431 08:05:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:02.431 { 00:19:02.431 "cntlid": 5, 00:19:02.431 "qid": 0, 00:19:02.431 "state": "enabled", 00:19:02.431 "thread": "nvmf_tgt_poll_group_000", 00:19:02.431 "listen_address": { 00:19:02.431 "trtype": "TCP", 00:19:02.431 "adrfam": "IPv4", 00:19:02.431 "traddr": "10.0.0.2", 00:19:02.431 "trsvcid": "4420" 00:19:02.431 }, 00:19:02.431 "peer_address": { 00:19:02.431 "trtype": "TCP", 00:19:02.431 "adrfam": "IPv4", 00:19:02.431 "traddr": "10.0.0.1", 00:19:02.431 "trsvcid": "47482" 00:19:02.431 }, 00:19:02.431 "auth": { 00:19:02.431 "state": "completed", 00:19:02.431 "digest": "sha256", 00:19:02.431 "dhgroup": "null" 00:19:02.431 } 00:19:02.431 } 00:19:02.431 ]' 00:19:02.431 08:05:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:02.431 08:05:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:02.689 08:05:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:02.689 08:05:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:02.689 08:05:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r 
'.[0].auth.state' 00:19:02.689 08:05:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:02.689 08:05:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:02.689 08:05:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:02.946 08:05:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MzljOGM2YTIwOGQxOWU0N2U1ZTdhMmNmZWU4NzhmNzE3Yjc2ZjAxZDFkMWFjOGUynRyBfA==: --dhchap-ctrl-secret DHHC-1:01:YTQyNDllZGYwOTA2OTgyMmVkMzYwMjQ3MTIzYjNhNmPyVN1I: 00:19:03.877 08:05:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:03.877 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:03.877 08:05:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:03.877 08:05:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:03.877 08:05:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.877 08:05:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.877 08:05:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:03.877 08:05:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:03.877 08:05:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:04.133 08:05:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:19:04.133 08:05:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:04.133 08:05:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:04.133 08:05:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:04.133 08:05:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:04.133 08:05:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:04.133 08:05:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:04.133 08:05:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.133 08:05:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.133 08:05:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.133 08:05:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:04.133 08:05:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:04.389 00:19:04.389 08:05:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:04.389 08:05:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:04.389 08:05:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:04.646 08:05:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:04.646 08:05:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:04.646 08:05:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.646 08:05:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.646 08:05:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.646 08:05:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:04.646 { 00:19:04.646 "cntlid": 7, 00:19:04.646 "qid": 0, 00:19:04.646 "state": "enabled", 00:19:04.646 "thread": "nvmf_tgt_poll_group_000", 00:19:04.646 "listen_address": { 00:19:04.646 "trtype": "TCP", 00:19:04.646 "adrfam": "IPv4", 00:19:04.646 "traddr": "10.0.0.2", 00:19:04.646 "trsvcid": "4420" 00:19:04.646 }, 00:19:04.646 "peer_address": { 00:19:04.646 "trtype": "TCP", 00:19:04.646 "adrfam": "IPv4", 00:19:04.646 "traddr": "10.0.0.1", 00:19:04.646 "trsvcid": "47494" 00:19:04.646 }, 00:19:04.646 "auth": { 00:19:04.646 "state": "completed", 00:19:04.646 "digest": "sha256", 00:19:04.646 "dhgroup": "null" 00:19:04.646 } 00:19:04.646 } 00:19:04.646 ]' 00:19:04.646 08:05:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:04.646 08:05:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:04.646 08:05:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:04.646 08:05:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:04.646 08:05:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:04.646 08:05:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:04.646 08:05:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:04.646 08:05:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:04.902 08:05:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YThkMmM0Njg5YTNlMWNkZjFmY2E5OTA4ZDNlMTkwYzA0MmVhZmUwYTI0MjA3NTBhNjMyNmMyNzQ0MDYxMzk3OGm+Ivs=: 00:19:05.862 08:05:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:05.862 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:05.862 08:05:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:05.862 08:05:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.862 08:05:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.862 08:05:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.862 08:05:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:05.862 08:05:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:05.862 08:05:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:05.862 08:05:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:06.119 08:05:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:19:06.119 08:05:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:06.119 08:05:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:06.119 08:05:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:06.119 08:05:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:06.119 08:05:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:06.119 08:05:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:06.119 08:05:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.119 08:05:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.119 08:05:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.119 08:05:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:06.119 08:05:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:06.682 00:19:06.682 08:05:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:06.682 08:05:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:06.682 08:05:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:06.682 08:05:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:06.682 08:05:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:06.682 08:05:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 
-- # xtrace_disable 00:19:06.682 08:05:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.682 08:05:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.682 08:05:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:06.682 { 00:19:06.682 "cntlid": 9, 00:19:06.682 "qid": 0, 00:19:06.683 "state": "enabled", 00:19:06.683 "thread": "nvmf_tgt_poll_group_000", 00:19:06.683 "listen_address": { 00:19:06.683 "trtype": "TCP", 00:19:06.683 "adrfam": "IPv4", 00:19:06.683 "traddr": "10.0.0.2", 00:19:06.683 "trsvcid": "4420" 00:19:06.683 }, 00:19:06.683 "peer_address": { 00:19:06.683 "trtype": "TCP", 00:19:06.683 "adrfam": "IPv4", 00:19:06.683 "traddr": "10.0.0.1", 00:19:06.683 "trsvcid": "36266" 00:19:06.683 }, 00:19:06.683 "auth": { 00:19:06.683 "state": "completed", 00:19:06.683 "digest": "sha256", 00:19:06.683 "dhgroup": "ffdhe2048" 00:19:06.683 } 00:19:06.683 } 00:19:06.683 ]' 00:19:06.683 08:05:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:06.939 08:05:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:06.939 08:05:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:06.939 08:05:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:06.939 08:05:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:06.939 08:05:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:06.939 08:05:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:06.939 08:05:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:07.197 08:05:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:Y2UxMWYyYzEwNjJhZTg5NDQxNWFiZDU2OWU2MTNjMzU5ZjlhMjA2ZjU1ZDU2Yjc3INMyEQ==: --dhchap-ctrl-secret DHHC-1:03:MGFiMjAwNDdkNTdjOWQzMjliMmM3NmQwMGNhM2EwYWExYTk4OTZjMDViODJiYWViYTk4M2U3ZmRlMmZlMzM1YVwGEV4=: 00:19:08.132 08:05:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:08.132 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:08.132 08:05:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:08.132 08:05:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.132 08:05:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.132 08:05:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.132 08:05:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:08.132 08:05:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:08.132 08:05:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:19:08.389 08:05:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:19:08.390 08:05:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:08.390 08:05:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:08.390 08:05:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:08.390 08:05:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:08.390 08:05:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:08.390 08:05:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:08.390 08:05:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.390 08:05:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.390 08:05:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.390 08:05:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:08.390 08:05:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:08.647 00:19:08.647 08:06:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:08.647 08:06:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:08.647 08:06:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:08.904 08:06:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:08.904 08:06:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:08.904 08:06:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.904 08:06:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.904 08:06:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.904 08:06:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:08.904 { 00:19:08.904 "cntlid": 11, 00:19:08.904 "qid": 0, 00:19:08.904 "state": "enabled", 00:19:08.904 "thread": "nvmf_tgt_poll_group_000", 00:19:08.904 "listen_address": { 00:19:08.904 "trtype": "TCP", 00:19:08.904 "adrfam": "IPv4", 00:19:08.904 "traddr": "10.0.0.2", 00:19:08.904 "trsvcid": "4420" 00:19:08.904 }, 00:19:08.904 "peer_address": { 00:19:08.904 "trtype": "TCP", 00:19:08.904 "adrfam": "IPv4", 00:19:08.904 "traddr": "10.0.0.1", 00:19:08.904 "trsvcid": "36282" 00:19:08.904 }, 00:19:08.904 "auth": { 00:19:08.904 "state": "completed", 00:19:08.904 "digest": "sha256", 00:19:08.904 "dhgroup": "ffdhe2048" 00:19:08.904 } 00:19:08.904 } 00:19:08.904 ]' 00:19:08.904 
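The nvme connect invocations in this section pass the same key material to the kernel initiator in the NVMe "DHHC-1" secret representation. As I read the trace against the NVMe DH-HMAC-CHAP secret format, the two-digit field after the prefix encodes the hash used to transform the secret (00 = unhashed, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512), which lines up with the spdk.key-sha* file names registered earlier. A sketch of one kernel round trip, with the base64 payloads deliberately elided (the full values appear in the trace):

# Kernel-initiator round trip. DHHC-1:00: = unhashed host secret,
# DHHC-1:03: = SHA-512-transformed controller secret; payloads elided.
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
    --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 \
    --dhchap-secret 'DHHC-1:00:...' --dhchap-ctrl-secret 'DHHC-1:03:...'
nvme disconnect -n nqn.2024-03.io.spdk:cnode0   # expect "disconnected 1 controller(s)"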
08:06:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:08.904 08:06:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:08.904 08:06:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:09.162 08:06:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:09.162 08:06:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:09.162 08:06:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:09.162 08:06:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:09.162 08:06:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:09.419 08:06:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YTQ5NTBhZWUyZWU0Y2I1NGY4YTllZDU3MjkxODJiZjRSO1eA: --dhchap-ctrl-secret DHHC-1:02:OTc1ZjllNWMwZjY1Y2M2ZmEwMGY4NThjNzUyZmFiNDE0YWNmOWFlNWQ0MWZmYThinZpsJw==: 00:19:10.352 08:06:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:10.352 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:10.352 08:06:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:10.352 08:06:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.352 08:06:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.352 08:06:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.352 08:06:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:10.352 08:06:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:10.352 08:06:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:10.610 08:06:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:19:10.610 08:06:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:10.610 08:06:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:10.610 08:06:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:10.610 08:06:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:10.610 08:06:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:10.610 08:06:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:10.610 08:06:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.610 08:06:02 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:10.610 08:06:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.610 08:06:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:10.610 08:06:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:10.868 00:19:10.868 08:06:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:10.868 08:06:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:10.868 08:06:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:11.125 08:06:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:11.125 08:06:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:11.125 08:06:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:11.125 08:06:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.126 08:06:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:11.126 08:06:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:11.126 { 00:19:11.126 "cntlid": 13, 00:19:11.126 "qid": 0, 00:19:11.126 "state": "enabled", 00:19:11.126 "thread": "nvmf_tgt_poll_group_000", 00:19:11.126 "listen_address": { 00:19:11.126 "trtype": "TCP", 00:19:11.126 "adrfam": "IPv4", 00:19:11.126 "traddr": "10.0.0.2", 00:19:11.126 "trsvcid": "4420" 00:19:11.126 }, 00:19:11.126 "peer_address": { 00:19:11.126 "trtype": "TCP", 00:19:11.126 "adrfam": "IPv4", 00:19:11.126 "traddr": "10.0.0.1", 00:19:11.126 "trsvcid": "36318" 00:19:11.126 }, 00:19:11.126 "auth": { 00:19:11.126 "state": "completed", 00:19:11.126 "digest": "sha256", 00:19:11.126 "dhgroup": "ffdhe2048" 00:19:11.126 } 00:19:11.126 } 00:19:11.126 ]' 00:19:11.126 08:06:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:11.126 08:06:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:11.126 08:06:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:11.126 08:06:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:11.126 08:06:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:11.126 08:06:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:11.126 08:06:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:11.126 08:06:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:11.382 08:06:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MzljOGM2YTIwOGQxOWU0N2U1ZTdhMmNmZWU4NzhmNzE3Yjc2ZjAxZDFkMWFjOGUynRyBfA==: --dhchap-ctrl-secret DHHC-1:01:YTQyNDllZGYwOTA2OTgyMmVkMzYwMjQ3MTIzYjNhNmPyVN1I: 00:19:12.315 08:06:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:12.315 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:12.315 08:06:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:12.315 08:06:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.315 08:06:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.573 08:06:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.573 08:06:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:12.573 08:06:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:12.573 08:06:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:12.573 08:06:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:19:12.573 08:06:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:12.573 08:06:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:12.573 08:06:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:12.573 08:06:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:12.573 08:06:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:12.573 08:06:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:12.573 08:06:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.573 08:06:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.573 08:06:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.573 08:06:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:12.574 08:06:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:13.138 00:19:13.138 08:06:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:13.138 08:06:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:13.138 08:06:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:13.395 08:06:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:13.395 08:06:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:13.395 08:06:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.395 08:06:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.395 08:06:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.395 08:06:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:13.395 { 00:19:13.395 "cntlid": 15, 00:19:13.395 "qid": 0, 00:19:13.395 "state": "enabled", 00:19:13.395 "thread": "nvmf_tgt_poll_group_000", 00:19:13.395 "listen_address": { 00:19:13.395 "trtype": "TCP", 00:19:13.395 "adrfam": "IPv4", 00:19:13.395 "traddr": "10.0.0.2", 00:19:13.395 "trsvcid": "4420" 00:19:13.395 }, 00:19:13.395 "peer_address": { 00:19:13.395 "trtype": "TCP", 00:19:13.395 "adrfam": "IPv4", 00:19:13.395 "traddr": "10.0.0.1", 00:19:13.395 "trsvcid": "36360" 00:19:13.395 }, 00:19:13.396 "auth": { 00:19:13.396 "state": "completed", 00:19:13.396 "digest": "sha256", 00:19:13.396 "dhgroup": "ffdhe2048" 00:19:13.396 } 00:19:13.396 } 00:19:13.396 ]' 00:19:13.396 08:06:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:13.396 08:06:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:13.396 08:06:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:13.396 08:06:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:13.396 08:06:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:13.396 08:06:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:13.396 08:06:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:13.396 08:06:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:13.652 08:06:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YThkMmM0Njg5YTNlMWNkZjFmY2E5OTA4ZDNlMTkwYzA0MmVhZmUwYTI0MjA3NTBhNjMyNmMyNzQ0MDYxMzk3OGm+Ivs=: 00:19:14.585 08:06:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:14.585 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:14.585 08:06:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:14.585 08:06:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.585 08:06:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.585 08:06:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.585 08:06:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:14.585 08:06:06 nvmf_tcp.nvmf_auth_target 
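The @91/@92/@93 loop markers in the trace give away the overall structure: every digest is crossed with every DH group and every key index, and the host initiator is reconfigured via bdev_nvme_set_options before each round. At this point the section has finished the sha256/null and sha256/ffdhe2048 passes and is entering sha256/ffdhe3072. The loop shape, reconstructed from those markers (connect_authenticate is the helper whose expansion is traced above):

# Loop structure as traced at target/auth.sh@91-@96.
for digest in "${digests[@]}"; do
  for dhgroup in "${dhgroups[@]}"; do
    for keyid in "${!keys[@]}"; do
      hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
      connect_authenticate "$digest" "$dhgroup" "$keyid"
    done
  done
done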
-- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:14.585 08:06:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:14.585 08:06:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:14.843 08:06:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:19:14.843 08:06:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:14.843 08:06:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:14.843 08:06:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:14.843 08:06:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:14.843 08:06:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:14.843 08:06:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:14.843 08:06:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.843 08:06:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.843 08:06:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.843 08:06:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:14.843 08:06:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:15.409 00:19:15.409 08:06:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:15.409 08:06:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:15.409 08:06:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:15.668 08:06:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:15.668 08:06:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:15.668 08:06:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.668 08:06:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.668 08:06:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.668 08:06:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:15.668 { 00:19:15.668 "cntlid": 17, 00:19:15.668 "qid": 0, 00:19:15.668 "state": "enabled", 00:19:15.668 "thread": "nvmf_tgt_poll_group_000", 00:19:15.668 "listen_address": { 00:19:15.668 "trtype": "TCP", 00:19:15.668 "adrfam": "IPv4", 00:19:15.668 "traddr": 
"10.0.0.2", 00:19:15.668 "trsvcid": "4420" 00:19:15.668 }, 00:19:15.668 "peer_address": { 00:19:15.668 "trtype": "TCP", 00:19:15.668 "adrfam": "IPv4", 00:19:15.668 "traddr": "10.0.0.1", 00:19:15.668 "trsvcid": "36386" 00:19:15.668 }, 00:19:15.668 "auth": { 00:19:15.668 "state": "completed", 00:19:15.668 "digest": "sha256", 00:19:15.668 "dhgroup": "ffdhe3072" 00:19:15.668 } 00:19:15.668 } 00:19:15.668 ]' 00:19:15.668 08:06:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:15.668 08:06:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:15.668 08:06:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:15.668 08:06:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:15.668 08:06:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:15.668 08:06:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:15.668 08:06:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:15.668 08:06:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:15.926 08:06:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:Y2UxMWYyYzEwNjJhZTg5NDQxNWFiZDU2OWU2MTNjMzU5ZjlhMjA2ZjU1ZDU2Yjc3INMyEQ==: --dhchap-ctrl-secret DHHC-1:03:MGFiMjAwNDdkNTdjOWQzMjliMmM3NmQwMGNhM2EwYWExYTk4OTZjMDViODJiYWViYTk4M2U3ZmRlMmZlMzM1YVwGEV4=: 00:19:16.861 08:06:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:16.861 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:16.861 08:06:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:16.861 08:06:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.861 08:06:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.861 08:06:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.861 08:06:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:16.861 08:06:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:16.861 08:06:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:17.119 08:06:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:19:17.119 08:06:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:17.119 08:06:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:17.119 08:06:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:17.119 08:06:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:17.119 08:06:08 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:17.119 08:06:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:17.119 08:06:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.119 08:06:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.119 08:06:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.119 08:06:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:17.119 08:06:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:17.685 00:19:17.685 08:06:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:17.685 08:06:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:17.685 08:06:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:17.685 08:06:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:17.685 08:06:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:17.685 08:06:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.685 08:06:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.685 08:06:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.685 08:06:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:17.685 { 00:19:17.685 "cntlid": 19, 00:19:17.685 "qid": 0, 00:19:17.685 "state": "enabled", 00:19:17.685 "thread": "nvmf_tgt_poll_group_000", 00:19:17.685 "listen_address": { 00:19:17.685 "trtype": "TCP", 00:19:17.685 "adrfam": "IPv4", 00:19:17.685 "traddr": "10.0.0.2", 00:19:17.685 "trsvcid": "4420" 00:19:17.685 }, 00:19:17.685 "peer_address": { 00:19:17.685 "trtype": "TCP", 00:19:17.685 "adrfam": "IPv4", 00:19:17.685 "traddr": "10.0.0.1", 00:19:17.685 "trsvcid": "35552" 00:19:17.685 }, 00:19:17.685 "auth": { 00:19:17.685 "state": "completed", 00:19:17.685 "digest": "sha256", 00:19:17.685 "dhgroup": "ffdhe3072" 00:19:17.685 } 00:19:17.685 } 00:19:17.685 ]' 00:19:17.685 08:06:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:17.943 08:06:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:17.943 08:06:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:17.943 08:06:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:17.943 08:06:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:17.943 08:06:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # 
[[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:17.943 08:06:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:17.943 08:06:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:18.201 08:06:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YTQ5NTBhZWUyZWU0Y2I1NGY4YTllZDU3MjkxODJiZjRSO1eA: --dhchap-ctrl-secret DHHC-1:02:OTc1ZjllNWMwZjY1Y2M2ZmEwMGY4NThjNzUyZmFiNDE0YWNmOWFlNWQ0MWZmYThinZpsJw==: 00:19:19.133 08:06:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:19.133 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:19.133 08:06:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:19.133 08:06:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.133 08:06:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.133 08:06:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.133 08:06:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:19.133 08:06:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:19.133 08:06:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:19.390 08:06:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:19:19.390 08:06:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:19.390 08:06:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:19.390 08:06:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:19.390 08:06:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:19.390 08:06:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:19.390 08:06:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:19.390 08:06:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.390 08:06:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.390 08:06:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.390 08:06:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:19.390 08:06:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:19.647 00:19:19.647 08:06:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:19.647 08:06:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:19.647 08:06:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:19.910 08:06:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:19.910 08:06:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:19.910 08:06:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.910 08:06:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.910 08:06:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.910 08:06:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:19.910 { 00:19:19.910 "cntlid": 21, 00:19:19.910 "qid": 0, 00:19:19.910 "state": "enabled", 00:19:19.910 "thread": "nvmf_tgt_poll_group_000", 00:19:19.910 "listen_address": { 00:19:19.910 "trtype": "TCP", 00:19:19.910 "adrfam": "IPv4", 00:19:19.910 "traddr": "10.0.0.2", 00:19:19.910 "trsvcid": "4420" 00:19:19.910 }, 00:19:19.910 "peer_address": { 00:19:19.910 "trtype": "TCP", 00:19:19.910 "adrfam": "IPv4", 00:19:19.910 "traddr": "10.0.0.1", 00:19:19.910 "trsvcid": "35574" 00:19:19.910 }, 00:19:19.910 "auth": { 00:19:19.910 "state": "completed", 00:19:19.910 "digest": "sha256", 00:19:19.910 "dhgroup": "ffdhe3072" 00:19:19.910 } 00:19:19.910 } 00:19:19.910 ]' 00:19:19.910 08:06:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:20.185 08:06:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:20.185 08:06:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:20.185 08:06:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:20.185 08:06:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:20.185 08:06:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:20.185 08:06:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:20.185 08:06:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:20.441 08:06:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MzljOGM2YTIwOGQxOWU0N2U1ZTdhMmNmZWU4NzhmNzE3Yjc2ZjAxZDFkMWFjOGUynRyBfA==: --dhchap-ctrl-secret DHHC-1:01:YTQyNDllZGYwOTA2OTgyMmVkMzYwMjQ3MTIzYjNhNmPyVN1I: 00:19:21.373 08:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:21.373 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
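Each pass above finishes with a second authentication round from the Linux kernel initiator: once the SPDK-side controller is detached, nvme connect repeats the DH-HMAC-CHAP handshake using literal secret strings instead of named keys. In the DHHC-1:<t>:<base64>: representation, the <t> field records the transform applied to the raw secret (00 = none, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512). A minimal sketch of that leg, with the NQN and host ID copied from the trace and placeholder secrets standing in for the keys actually used in this run:

    # Kernel-initiator leg repeated after every SPDK-side pass (sketch only;
    # the DHHC-1 secrets below are placeholders, not the real test keys).
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55

    nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
        --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 \
        --dhchap-secret 'DHHC-1:01:<base64 host secret>:' \
        --dhchap-ctrl-secret 'DHHC-1:02:<base64 controller secret>:'

    # Tear down so the next (dhgroup, keyid) combination starts clean.
    nvme disconnect -n "$subnqn"

Supplying --dhchap-ctrl-secret alongside --dhchap-secret requests bidirectional authentication, mirroring the --dhchap-ctrlr-key ckeyN arguments passed to the target-side RPCs.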
00:19:21.373 08:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:21.373 08:06:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.373 08:06:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.373 08:06:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.373 08:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:21.373 08:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:21.373 08:06:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:21.631 08:06:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:19:21.631 08:06:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:21.631 08:06:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:21.631 08:06:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:21.631 08:06:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:21.631 08:06:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:21.631 08:06:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:21.631 08:06:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.631 08:06:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.631 08:06:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.631 08:06:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:21.631 08:06:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:21.888 00:19:21.888 08:06:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:21.888 08:06:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:21.888 08:06:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:22.145 08:06:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:22.145 08:06:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:22.145 08:06:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:22.145 08:06:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:19:22.145 08:06:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:22.145 08:06:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:22.145 { 00:19:22.145 "cntlid": 23, 00:19:22.145 "qid": 0, 00:19:22.145 "state": "enabled", 00:19:22.145 "thread": "nvmf_tgt_poll_group_000", 00:19:22.145 "listen_address": { 00:19:22.145 "trtype": "TCP", 00:19:22.145 "adrfam": "IPv4", 00:19:22.145 "traddr": "10.0.0.2", 00:19:22.145 "trsvcid": "4420" 00:19:22.145 }, 00:19:22.145 "peer_address": { 00:19:22.145 "trtype": "TCP", 00:19:22.145 "adrfam": "IPv4", 00:19:22.145 "traddr": "10.0.0.1", 00:19:22.145 "trsvcid": "35608" 00:19:22.145 }, 00:19:22.145 "auth": { 00:19:22.145 "state": "completed", 00:19:22.145 "digest": "sha256", 00:19:22.145 "dhgroup": "ffdhe3072" 00:19:22.145 } 00:19:22.145 } 00:19:22.145 ]' 00:19:22.145 08:06:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:22.145 08:06:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:22.145 08:06:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:22.145 08:06:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:22.145 08:06:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:22.145 08:06:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:22.145 08:06:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:22.145 08:06:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:22.402 08:06:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YThkMmM0Njg5YTNlMWNkZjFmY2E5OTA4ZDNlMTkwYzA0MmVhZmUwYTI0MjA3NTBhNjMyNmMyNzQ0MDYxMzk3OGm+Ivs=: 00:19:23.334 08:06:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:23.334 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:23.334 08:06:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:23.334 08:06:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.334 08:06:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.334 08:06:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.334 08:06:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:23.334 08:06:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:23.334 08:06:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:23.334 08:06:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:23.897 08:06:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha256 ffdhe4096 0 00:19:23.897 08:06:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:23.897 08:06:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:23.897 08:06:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:23.897 08:06:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:23.897 08:06:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:23.897 08:06:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:23.897 08:06:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.897 08:06:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.897 08:06:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.897 08:06:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:23.897 08:06:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:24.153 00:19:24.153 08:06:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:24.153 08:06:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:24.153 08:06:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:24.410 08:06:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:24.410 08:06:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:24.410 08:06:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.410 08:06:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.410 08:06:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.410 08:06:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:24.410 { 00:19:24.410 "cntlid": 25, 00:19:24.410 "qid": 0, 00:19:24.410 "state": "enabled", 00:19:24.410 "thread": "nvmf_tgt_poll_group_000", 00:19:24.410 "listen_address": { 00:19:24.410 "trtype": "TCP", 00:19:24.410 "adrfam": "IPv4", 00:19:24.410 "traddr": "10.0.0.2", 00:19:24.410 "trsvcid": "4420" 00:19:24.410 }, 00:19:24.410 "peer_address": { 00:19:24.410 "trtype": "TCP", 00:19:24.410 "adrfam": "IPv4", 00:19:24.410 "traddr": "10.0.0.1", 00:19:24.410 "trsvcid": "35646" 00:19:24.410 }, 00:19:24.410 "auth": { 00:19:24.410 "state": "completed", 00:19:24.410 "digest": "sha256", 00:19:24.410 "dhgroup": "ffdhe4096" 00:19:24.410 } 00:19:24.410 } 00:19:24.410 ]' 00:19:24.410 08:06:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:24.410 08:06:16 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:24.410 08:06:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:24.410 08:06:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:24.410 08:06:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:24.411 08:06:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:24.411 08:06:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:24.411 08:06:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:24.667 08:06:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:Y2UxMWYyYzEwNjJhZTg5NDQxNWFiZDU2OWU2MTNjMzU5ZjlhMjA2ZjU1ZDU2Yjc3INMyEQ==: --dhchap-ctrl-secret DHHC-1:03:MGFiMjAwNDdkNTdjOWQzMjliMmM3NmQwMGNhM2EwYWExYTk4OTZjMDViODJiYWViYTk4M2U3ZmRlMmZlMzM1YVwGEV4=: 00:19:25.599 08:06:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:25.599 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:25.599 08:06:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:25.599 08:06:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.599 08:06:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.599 08:06:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.599 08:06:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:25.599 08:06:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:25.599 08:06:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:26.163 08:06:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:19:26.163 08:06:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:26.163 08:06:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:26.163 08:06:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:26.163 08:06:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:26.163 08:06:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:26.163 08:06:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:26.163 08:06:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.163 08:06:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.163 08:06:17 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.163 08:06:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:26.163 08:06:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:26.421 00:19:26.421 08:06:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:26.421 08:06:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:26.421 08:06:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:26.679 08:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:26.679 08:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:26.679 08:06:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.679 08:06:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.679 08:06:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.679 08:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:26.679 { 00:19:26.679 "cntlid": 27, 00:19:26.679 "qid": 0, 00:19:26.679 "state": "enabled", 00:19:26.679 "thread": "nvmf_tgt_poll_group_000", 00:19:26.679 "listen_address": { 00:19:26.679 "trtype": "TCP", 00:19:26.679 "adrfam": "IPv4", 00:19:26.679 "traddr": "10.0.0.2", 00:19:26.679 "trsvcid": "4420" 00:19:26.679 }, 00:19:26.679 "peer_address": { 00:19:26.679 "trtype": "TCP", 00:19:26.679 "adrfam": "IPv4", 00:19:26.679 "traddr": "10.0.0.1", 00:19:26.679 "trsvcid": "34298" 00:19:26.679 }, 00:19:26.679 "auth": { 00:19:26.679 "state": "completed", 00:19:26.679 "digest": "sha256", 00:19:26.679 "dhgroup": "ffdhe4096" 00:19:26.679 } 00:19:26.679 } 00:19:26.679 ]' 00:19:26.679 08:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:26.679 08:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:26.679 08:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:26.679 08:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:26.679 08:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:26.679 08:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:26.679 08:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:26.679 08:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:26.937 08:06:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YTQ5NTBhZWUyZWU0Y2I1NGY4YTllZDU3MjkxODJiZjRSO1eA: --dhchap-ctrl-secret DHHC-1:02:OTc1ZjllNWMwZjY1Y2M2ZmEwMGY4NThjNzUyZmFiNDE0YWNmOWFlNWQ0MWZmYThinZpsJw==: 00:19:27.868 08:06:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:27.868 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:27.868 08:06:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:27.868 08:06:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.868 08:06:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.868 08:06:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.868 08:06:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:27.868 08:06:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:27.868 08:06:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:28.126 08:06:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:19:28.126 08:06:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:28.126 08:06:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:28.126 08:06:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:28.126 08:06:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:28.126 08:06:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:28.126 08:06:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:28.126 08:06:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.126 08:06:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.126 08:06:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.126 08:06:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:28.126 08:06:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:28.691 00:19:28.691 08:06:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:28.691 08:06:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:28.691 08:06:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:28.948 08:06:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:28.948 08:06:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:28.949 08:06:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.949 08:06:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.949 08:06:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.949 08:06:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:28.949 { 00:19:28.949 "cntlid": 29, 00:19:28.949 "qid": 0, 00:19:28.949 "state": "enabled", 00:19:28.949 "thread": "nvmf_tgt_poll_group_000", 00:19:28.949 "listen_address": { 00:19:28.949 "trtype": "TCP", 00:19:28.949 "adrfam": "IPv4", 00:19:28.949 "traddr": "10.0.0.2", 00:19:28.949 "trsvcid": "4420" 00:19:28.949 }, 00:19:28.949 "peer_address": { 00:19:28.949 "trtype": "TCP", 00:19:28.949 "adrfam": "IPv4", 00:19:28.949 "traddr": "10.0.0.1", 00:19:28.949 "trsvcid": "34334" 00:19:28.949 }, 00:19:28.949 "auth": { 00:19:28.949 "state": "completed", 00:19:28.949 "digest": "sha256", 00:19:28.949 "dhgroup": "ffdhe4096" 00:19:28.949 } 00:19:28.949 } 00:19:28.949 ]' 00:19:28.949 08:06:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:28.949 08:06:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:28.949 08:06:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:28.949 08:06:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:28.949 08:06:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:28.949 08:06:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:28.949 08:06:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:28.949 08:06:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:29.206 08:06:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MzljOGM2YTIwOGQxOWU0N2U1ZTdhMmNmZWU4NzhmNzE3Yjc2ZjAxZDFkMWFjOGUynRyBfA==: --dhchap-ctrl-secret DHHC-1:01:YTQyNDllZGYwOTA2OTgyMmVkMzYwMjQ3MTIzYjNhNmPyVN1I: 00:19:30.138 08:06:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:30.396 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:30.396 08:06:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:30.396 08:06:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.396 08:06:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.396 08:06:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
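The target/auth.sh@92-@96 markers in the trace expose the loop structure driving these repetitions: an outer loop over DH groups and an inner loop over key indices, with bdev_nvme_set_options pinning the SPDK host to exactly one digest/dhgroup pair before each connect_authenticate call. A reconstruction from those markers, assuming the keys array was populated earlier in auth.sh (its contents do not appear in this part of the log):

    # Reconstructed from the @92/@93/@94/@96 trace markers; the keys array
    # (key0..key3) is assumed to be filled earlier in auth.sh. This slice of
    # the log covers ffdhe3072 through ffdhe8192 with the sha256 digest.
    dhgroups=(ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)

    for dhgroup in "${dhgroups[@]}"; do      # target/auth.sh@92
        for keyid in "${!keys[@]}"; do       # target/auth.sh@93
            # Restrict the host to one digest/dhgroup combination so the
            # negotiated values checked later are fully determined.
            hostrpc bdev_nvme_set_options \
                --dhchap-digests sha256 --dhchap-dhgroups "$dhgroup"
            # Full add_host / attach / verify / detach / nvme-connect cycle.
            connect_authenticate sha256 "$dhgroup" "$keyid"
        done
    done

hostrpc is the auth.sh helper whose expansion the @31 lines show: rpc.py -s /var/tmp/host.sock, i.e. the same RPC client pointed at the host-side SPDK application rather than the target.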
00:19:30.396 08:06:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:30.396 08:06:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:30.396 08:06:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:30.653 08:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:19:30.653 08:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:30.653 08:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:30.653 08:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:30.653 08:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:30.653 08:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:30.653 08:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:30.653 08:06:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.653 08:06:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.653 08:06:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.653 08:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:30.653 08:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:30.910 00:19:30.910 08:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:30.910 08:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:30.910 08:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:31.167 08:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:31.167 08:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:31.168 08:06:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.168 08:06:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.168 08:06:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.168 08:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:31.168 { 00:19:31.168 "cntlid": 31, 00:19:31.168 "qid": 0, 00:19:31.168 "state": "enabled", 00:19:31.168 "thread": "nvmf_tgt_poll_group_000", 00:19:31.168 "listen_address": { 00:19:31.168 "trtype": "TCP", 00:19:31.168 "adrfam": "IPv4", 00:19:31.168 "traddr": "10.0.0.2", 00:19:31.168 "trsvcid": 
"4420" 00:19:31.168 }, 00:19:31.168 "peer_address": { 00:19:31.168 "trtype": "TCP", 00:19:31.168 "adrfam": "IPv4", 00:19:31.168 "traddr": "10.0.0.1", 00:19:31.168 "trsvcid": "34370" 00:19:31.168 }, 00:19:31.168 "auth": { 00:19:31.168 "state": "completed", 00:19:31.168 "digest": "sha256", 00:19:31.168 "dhgroup": "ffdhe4096" 00:19:31.168 } 00:19:31.168 } 00:19:31.168 ]' 00:19:31.168 08:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:31.168 08:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:31.168 08:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:31.168 08:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:31.168 08:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:31.424 08:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:31.424 08:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:31.424 08:06:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:31.680 08:06:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YThkMmM0Njg5YTNlMWNkZjFmY2E5OTA4ZDNlMTkwYzA0MmVhZmUwYTI0MjA3NTBhNjMyNmMyNzQ0MDYxMzk3OGm+Ivs=: 00:19:32.615 08:06:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:32.615 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:32.615 08:06:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:32.615 08:06:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.615 08:06:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.615 08:06:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.615 08:06:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:32.615 08:06:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:32.615 08:06:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:32.615 08:06:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:32.873 08:06:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:19:32.873 08:06:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:32.873 08:06:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:32.873 08:06:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:32.873 08:06:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:32.873 08:06:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:32.873 08:06:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:32.873 08:06:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.873 08:06:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.873 08:06:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.873 08:06:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:32.873 08:06:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:33.439 00:19:33.439 08:06:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:33.439 08:06:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:33.439 08:06:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:33.697 08:06:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:33.697 08:06:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:33.697 08:06:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.697 08:06:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.697 08:06:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.697 08:06:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:33.697 { 00:19:33.697 "cntlid": 33, 00:19:33.697 "qid": 0, 00:19:33.697 "state": "enabled", 00:19:33.697 "thread": "nvmf_tgt_poll_group_000", 00:19:33.697 "listen_address": { 00:19:33.697 "trtype": "TCP", 00:19:33.697 "adrfam": "IPv4", 00:19:33.697 "traddr": "10.0.0.2", 00:19:33.697 "trsvcid": "4420" 00:19:33.697 }, 00:19:33.697 "peer_address": { 00:19:33.697 "trtype": "TCP", 00:19:33.697 "adrfam": "IPv4", 00:19:33.697 "traddr": "10.0.0.1", 00:19:33.697 "trsvcid": "34386" 00:19:33.697 }, 00:19:33.697 "auth": { 00:19:33.697 "state": "completed", 00:19:33.697 "digest": "sha256", 00:19:33.697 "dhgroup": "ffdhe6144" 00:19:33.697 } 00:19:33.697 } 00:19:33.697 ]' 00:19:33.697 08:06:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:33.697 08:06:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:33.697 08:06:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:33.697 08:06:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:33.697 08:06:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:33.697 08:06:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:19:33.697 08:06:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:33.697 08:06:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:33.955 08:06:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:Y2UxMWYyYzEwNjJhZTg5NDQxNWFiZDU2OWU2MTNjMzU5ZjlhMjA2ZjU1ZDU2Yjc3INMyEQ==: --dhchap-ctrl-secret DHHC-1:03:MGFiMjAwNDdkNTdjOWQzMjliMmM3NmQwMGNhM2EwYWExYTk4OTZjMDViODJiYWViYTk4M2U3ZmRlMmZlMzM1YVwGEV4=: 00:19:34.890 08:06:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:34.890 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:34.890 08:06:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:34.890 08:06:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.890 08:06:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.890 08:06:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.890 08:06:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:34.890 08:06:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:34.890 08:06:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:35.149 08:06:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:19:35.149 08:06:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:35.149 08:06:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:35.149 08:06:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:35.149 08:06:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:35.149 08:06:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:35.149 08:06:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:35.149 08:06:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.149 08:06:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.149 08:06:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.149 08:06:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:35.149 08:06:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:35.714 00:19:35.714 08:06:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:35.714 08:06:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:35.714 08:06:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:35.972 08:06:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:35.972 08:06:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:35.972 08:06:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.972 08:06:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.972 08:06:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.972 08:06:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:35.972 { 00:19:35.972 "cntlid": 35, 00:19:35.972 "qid": 0, 00:19:35.972 "state": "enabled", 00:19:35.972 "thread": "nvmf_tgt_poll_group_000", 00:19:35.972 "listen_address": { 00:19:35.972 "trtype": "TCP", 00:19:35.972 "adrfam": "IPv4", 00:19:35.972 "traddr": "10.0.0.2", 00:19:35.972 "trsvcid": "4420" 00:19:35.972 }, 00:19:35.972 "peer_address": { 00:19:35.972 "trtype": "TCP", 00:19:35.972 "adrfam": "IPv4", 00:19:35.972 "traddr": "10.0.0.1", 00:19:35.972 "trsvcid": "34418" 00:19:35.972 }, 00:19:35.972 "auth": { 00:19:35.972 "state": "completed", 00:19:35.972 "digest": "sha256", 00:19:35.972 "dhgroup": "ffdhe6144" 00:19:35.972 } 00:19:35.972 } 00:19:35.972 ]' 00:19:35.972 08:06:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:35.972 08:06:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:35.972 08:06:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:35.972 08:06:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:35.973 08:06:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:36.230 08:06:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:36.230 08:06:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:36.230 08:06:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:36.488 08:06:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YTQ5NTBhZWUyZWU0Y2I1NGY4YTllZDU3MjkxODJiZjRSO1eA: --dhchap-ctrl-secret DHHC-1:02:OTc1ZjllNWMwZjY1Y2M2ZmEwMGY4NThjNzUyZmFiNDE0YWNmOWFlNWQ0MWZmYThinZpsJw==: 00:19:37.421 08:06:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:37.421 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
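Success for each combination is judged from the nvmf_subsystem_get_qpairs dump: the qpair must report the digest and DH group that were just configured, and an auth.state of "completed". The three jq probes in the trace reduce to assertions like the following sketch, where $qpairs is assumed to hold the JSON shown above:

    # The three per-qpair assertions the trace performs (sketch; $qpairs is
    # assumed to hold the nvmf_subsystem_get_qpairs output for this pass).
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256    ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe6144 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

Any other auth.state value fails the comparison and with it the test, which makes these qpair dumps the effective pass/fail record for each digest/dhgroup/key combination.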
00:19:37.421 08:06:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:37.421 08:06:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.421 08:06:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.421 08:06:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.421 08:06:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:37.421 08:06:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:37.421 08:06:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:37.679 08:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:19:37.679 08:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:37.679 08:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:37.679 08:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:37.679 08:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:37.679 08:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:37.679 08:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:37.679 08:06:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.679 08:06:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.679 08:06:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.680 08:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:37.680 08:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:38.246 00:19:38.246 08:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:38.246 08:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:38.246 08:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:38.246 08:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:38.246 08:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:38.246 08:06:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 
00:19:38.246 08:06:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.504 08:06:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.504 08:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:38.504 { 00:19:38.504 "cntlid": 37, 00:19:38.504 "qid": 0, 00:19:38.504 "state": "enabled", 00:19:38.504 "thread": "nvmf_tgt_poll_group_000", 00:19:38.504 "listen_address": { 00:19:38.504 "trtype": "TCP", 00:19:38.504 "adrfam": "IPv4", 00:19:38.504 "traddr": "10.0.0.2", 00:19:38.504 "trsvcid": "4420" 00:19:38.504 }, 00:19:38.504 "peer_address": { 00:19:38.504 "trtype": "TCP", 00:19:38.504 "adrfam": "IPv4", 00:19:38.504 "traddr": "10.0.0.1", 00:19:38.504 "trsvcid": "45358" 00:19:38.504 }, 00:19:38.504 "auth": { 00:19:38.504 "state": "completed", 00:19:38.504 "digest": "sha256", 00:19:38.504 "dhgroup": "ffdhe6144" 00:19:38.504 } 00:19:38.504 } 00:19:38.504 ]' 00:19:38.504 08:06:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:38.504 08:06:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:38.504 08:06:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:38.504 08:06:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:38.504 08:06:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:38.504 08:06:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:38.504 08:06:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:38.504 08:06:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:38.762 08:06:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MzljOGM2YTIwOGQxOWU0N2U1ZTdhMmNmZWU4NzhmNzE3Yjc2ZjAxZDFkMWFjOGUynRyBfA==: --dhchap-ctrl-secret DHHC-1:01:YTQyNDllZGYwOTA2OTgyMmVkMzYwMjQ3MTIzYjNhNmPyVN1I: 00:19:39.697 08:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:39.697 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:39.697 08:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:39.697 08:06:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.697 08:06:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.697 08:06:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.697 08:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:39.697 08:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:39.697 08:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:39.955 08:06:31 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:19:39.955 08:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:39.955 08:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:39.956 08:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:39.956 08:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:39.956 08:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:39.956 08:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:39.956 08:06:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.956 08:06:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.956 08:06:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.956 08:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:39.956 08:06:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:40.521 00:19:40.521 08:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:40.521 08:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:40.521 08:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:40.779 08:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:40.779 08:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:40.779 08:06:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.779 08:06:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.779 08:06:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.779 08:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:40.779 { 00:19:40.779 "cntlid": 39, 00:19:40.779 "qid": 0, 00:19:40.779 "state": "enabled", 00:19:40.779 "thread": "nvmf_tgt_poll_group_000", 00:19:40.779 "listen_address": { 00:19:40.779 "trtype": "TCP", 00:19:40.779 "adrfam": "IPv4", 00:19:40.779 "traddr": "10.0.0.2", 00:19:40.779 "trsvcid": "4420" 00:19:40.779 }, 00:19:40.779 "peer_address": { 00:19:40.779 "trtype": "TCP", 00:19:40.779 "adrfam": "IPv4", 00:19:40.779 "traddr": "10.0.0.1", 00:19:40.779 "trsvcid": "45386" 00:19:40.779 }, 00:19:40.779 "auth": { 00:19:40.779 "state": "completed", 00:19:40.779 "digest": "sha256", 00:19:40.779 "dhgroup": "ffdhe6144" 00:19:40.779 } 00:19:40.779 } 00:19:40.779 ]' 00:19:40.779 08:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:40.779 08:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # 
[[ sha256 == \s\h\a\2\5\6 ]] 00:19:40.779 08:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:40.779 08:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:40.779 08:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:41.037 08:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:41.037 08:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:41.037 08:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:41.295 08:06:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YThkMmM0Njg5YTNlMWNkZjFmY2E5OTA4ZDNlMTkwYzA0MmVhZmUwYTI0MjA3NTBhNjMyNmMyNzQ0MDYxMzk3OGm+Ivs=: 00:19:42.226 08:06:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:42.226 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:42.226 08:06:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:42.226 08:06:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.226 08:06:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.226 08:06:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.226 08:06:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:42.226 08:06:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:42.226 08:06:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:42.226 08:06:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:42.482 08:06:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:19:42.482 08:06:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:42.482 08:06:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:42.483 08:06:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:42.483 08:06:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:42.483 08:06:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:42.483 08:06:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:42.483 08:06:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.483 08:06:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.483 08:06:33 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.483 08:06:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:42.483 08:06:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:43.414 00:19:43.414 08:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:43.414 08:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:43.414 08:06:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:43.672 08:06:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:43.672 08:06:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:43.672 08:06:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.672 08:06:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.672 08:06:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.672 08:06:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:43.672 { 00:19:43.672 "cntlid": 41, 00:19:43.672 "qid": 0, 00:19:43.672 "state": "enabled", 00:19:43.672 "thread": "nvmf_tgt_poll_group_000", 00:19:43.672 "listen_address": { 00:19:43.672 "trtype": "TCP", 00:19:43.672 "adrfam": "IPv4", 00:19:43.672 "traddr": "10.0.0.2", 00:19:43.672 "trsvcid": "4420" 00:19:43.672 }, 00:19:43.672 "peer_address": { 00:19:43.672 "trtype": "TCP", 00:19:43.672 "adrfam": "IPv4", 00:19:43.672 "traddr": "10.0.0.1", 00:19:43.672 "trsvcid": "45406" 00:19:43.672 }, 00:19:43.672 "auth": { 00:19:43.672 "state": "completed", 00:19:43.672 "digest": "sha256", 00:19:43.672 "dhgroup": "ffdhe8192" 00:19:43.672 } 00:19:43.672 } 00:19:43.672 ]' 00:19:43.672 08:06:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:43.672 08:06:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:43.672 08:06:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:43.672 08:06:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:43.672 08:06:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:43.672 08:06:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:43.672 08:06:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:43.672 08:06:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:43.929 08:06:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:Y2UxMWYyYzEwNjJhZTg5NDQxNWFiZDU2OWU2MTNjMzU5ZjlhMjA2ZjU1ZDU2Yjc3INMyEQ==: --dhchap-ctrl-secret DHHC-1:03:MGFiMjAwNDdkNTdjOWQzMjliMmM3NmQwMGNhM2EwYWExYTk4OTZjMDViODJiYWViYTk4M2U3ZmRlMmZlMzM1YVwGEV4=: 00:19:44.863 08:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:44.863 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:44.863 08:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:44.863 08:06:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.863 08:06:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.863 08:06:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.863 08:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:44.863 08:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:44.863 08:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:45.154 08:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:19:45.154 08:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:45.154 08:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:45.154 08:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:45.154 08:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:45.154 08:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:45.154 08:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:45.154 08:06:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:45.154 08:06:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.154 08:06:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:45.154 08:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:45.154 08:06:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:46.089 00:19:46.089 08:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:46.089 08:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:46.089 08:06:37 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:46.347 08:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:46.347 08:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:46.347 08:06:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.347 08:06:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.347 08:06:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.347 08:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:46.347 { 00:19:46.347 "cntlid": 43, 00:19:46.347 "qid": 0, 00:19:46.347 "state": "enabled", 00:19:46.347 "thread": "nvmf_tgt_poll_group_000", 00:19:46.347 "listen_address": { 00:19:46.347 "trtype": "TCP", 00:19:46.347 "adrfam": "IPv4", 00:19:46.347 "traddr": "10.0.0.2", 00:19:46.347 "trsvcid": "4420" 00:19:46.347 }, 00:19:46.347 "peer_address": { 00:19:46.347 "trtype": "TCP", 00:19:46.347 "adrfam": "IPv4", 00:19:46.347 "traddr": "10.0.0.1", 00:19:46.347 "trsvcid": "45430" 00:19:46.347 }, 00:19:46.347 "auth": { 00:19:46.347 "state": "completed", 00:19:46.347 "digest": "sha256", 00:19:46.347 "dhgroup": "ffdhe8192" 00:19:46.347 } 00:19:46.347 } 00:19:46.347 ]' 00:19:46.347 08:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:46.347 08:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:46.347 08:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:46.347 08:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:46.347 08:06:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:46.347 08:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:46.347 08:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:46.347 08:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:46.605 08:06:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YTQ5NTBhZWUyZWU0Y2I1NGY4YTllZDU3MjkxODJiZjRSO1eA: --dhchap-ctrl-secret DHHC-1:02:OTc1ZjllNWMwZjY1Y2M2ZmEwMGY4NThjNzUyZmFiNDE0YWNmOWFlNWQ0MWZmYThinZpsJw==: 00:19:47.538 08:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:47.538 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:47.538 08:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:47.538 08:06:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.538 08:06:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.538 08:06:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.538 08:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 
-- # for keyid in "${!keys[@]}" 00:19:47.538 08:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:47.538 08:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:47.796 08:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:19:47.796 08:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:47.796 08:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:47.796 08:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:47.796 08:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:47.796 08:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:47.796 08:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:47.796 08:06:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.796 08:06:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.796 08:06:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.796 08:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:47.796 08:06:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:48.729 00:19:48.729 08:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:48.729 08:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:48.729 08:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:48.986 08:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:48.986 08:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:48.986 08:06:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.986 08:06:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.986 08:06:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.986 08:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:48.986 { 00:19:48.986 "cntlid": 45, 00:19:48.986 "qid": 0, 00:19:48.986 "state": "enabled", 00:19:48.986 "thread": "nvmf_tgt_poll_group_000", 00:19:48.986 "listen_address": { 00:19:48.986 "trtype": "TCP", 00:19:48.986 "adrfam": "IPv4", 00:19:48.986 "traddr": "10.0.0.2", 00:19:48.986 
"trsvcid": "4420" 00:19:48.986 }, 00:19:48.986 "peer_address": { 00:19:48.986 "trtype": "TCP", 00:19:48.986 "adrfam": "IPv4", 00:19:48.986 "traddr": "10.0.0.1", 00:19:48.986 "trsvcid": "42630" 00:19:48.986 }, 00:19:48.986 "auth": { 00:19:48.986 "state": "completed", 00:19:48.986 "digest": "sha256", 00:19:48.986 "dhgroup": "ffdhe8192" 00:19:48.986 } 00:19:48.986 } 00:19:48.986 ]' 00:19:48.986 08:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:48.986 08:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:48.986 08:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:49.243 08:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:49.243 08:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:49.243 08:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:49.243 08:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:49.243 08:06:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:49.501 08:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MzljOGM2YTIwOGQxOWU0N2U1ZTdhMmNmZWU4NzhmNzE3Yjc2ZjAxZDFkMWFjOGUynRyBfA==: --dhchap-ctrl-secret DHHC-1:01:YTQyNDllZGYwOTA2OTgyMmVkMzYwMjQ3MTIzYjNhNmPyVN1I: 00:19:50.433 08:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:50.433 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:50.433 08:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:50.433 08:06:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.433 08:06:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.433 08:06:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.433 08:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:50.433 08:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:50.433 08:06:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:19:50.691 08:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:19:50.691 08:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:50.691 08:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:50.691 08:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:50.691 08:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:50.691 08:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 
00:19:50.691 08:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:50.691 08:06:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.691 08:06:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.691 08:06:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.691 08:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:50.691 08:06:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:51.623 00:19:51.623 08:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:51.623 08:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:51.623 08:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:51.880 08:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:51.880 08:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:51.880 08:06:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.880 08:06:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.880 08:06:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.880 08:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:51.880 { 00:19:51.880 "cntlid": 47, 00:19:51.880 "qid": 0, 00:19:51.880 "state": "enabled", 00:19:51.880 "thread": "nvmf_tgt_poll_group_000", 00:19:51.880 "listen_address": { 00:19:51.880 "trtype": "TCP", 00:19:51.880 "adrfam": "IPv4", 00:19:51.880 "traddr": "10.0.0.2", 00:19:51.880 "trsvcid": "4420" 00:19:51.880 }, 00:19:51.880 "peer_address": { 00:19:51.880 "trtype": "TCP", 00:19:51.880 "adrfam": "IPv4", 00:19:51.880 "traddr": "10.0.0.1", 00:19:51.880 "trsvcid": "42664" 00:19:51.880 }, 00:19:51.880 "auth": { 00:19:51.880 "state": "completed", 00:19:51.880 "digest": "sha256", 00:19:51.880 "dhgroup": "ffdhe8192" 00:19:51.880 } 00:19:51.880 } 00:19:51.880 ]' 00:19:51.880 08:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:51.880 08:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:51.880 08:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:51.880 08:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:51.880 08:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:51.880 08:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:51.880 08:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller 
nvme0 00:19:51.880 08:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:52.138 08:06:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YThkMmM0Njg5YTNlMWNkZjFmY2E5OTA4ZDNlMTkwYzA0MmVhZmUwYTI0MjA3NTBhNjMyNmMyNzQ0MDYxMzk3OGm+Ivs=: 00:19:53.069 08:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:53.069 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:53.069 08:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:53.069 08:06:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.069 08:06:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.069 08:06:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.069 08:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:19:53.069 08:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:53.069 08:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:53.069 08:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:53.069 08:06:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:53.326 08:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:19:53.326 08:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:53.326 08:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:53.326 08:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:53.326 08:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:53.326 08:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:53.326 08:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:53.326 08:06:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.326 08:06:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.584 08:06:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.584 08:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:53.584 08:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:53.841 00:19:53.841 08:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:53.841 08:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:53.841 08:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:54.099 08:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:54.099 08:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:54.099 08:06:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.099 08:06:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.099 08:06:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.099 08:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:54.099 { 00:19:54.099 "cntlid": 49, 00:19:54.099 "qid": 0, 00:19:54.099 "state": "enabled", 00:19:54.099 "thread": "nvmf_tgt_poll_group_000", 00:19:54.099 "listen_address": { 00:19:54.099 "trtype": "TCP", 00:19:54.099 "adrfam": "IPv4", 00:19:54.099 "traddr": "10.0.0.2", 00:19:54.099 "trsvcid": "4420" 00:19:54.099 }, 00:19:54.099 "peer_address": { 00:19:54.099 "trtype": "TCP", 00:19:54.099 "adrfam": "IPv4", 00:19:54.099 "traddr": "10.0.0.1", 00:19:54.099 "trsvcid": "42682" 00:19:54.099 }, 00:19:54.099 "auth": { 00:19:54.099 "state": "completed", 00:19:54.099 "digest": "sha384", 00:19:54.099 "dhgroup": "null" 00:19:54.099 } 00:19:54.099 } 00:19:54.099 ]' 00:19:54.099 08:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:54.099 08:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:54.099 08:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:54.099 08:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:54.099 08:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:54.099 08:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:54.099 08:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:54.099 08:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:54.357 08:06:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:Y2UxMWYyYzEwNjJhZTg5NDQxNWFiZDU2OWU2MTNjMzU5ZjlhMjA2ZjU1ZDU2Yjc3INMyEQ==: --dhchap-ctrl-secret DHHC-1:03:MGFiMjAwNDdkNTdjOWQzMjliMmM3NmQwMGNhM2EwYWExYTk4OTZjMDViODJiYWViYTk4M2U3ZmRlMmZlMzM1YVwGEV4=: 00:19:55.289 08:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:55.289 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:55.289 08:06:46 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:55.289 08:06:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.289 08:06:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.289 08:06:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.289 08:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:55.289 08:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:55.289 08:06:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:55.547 08:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:19:55.547 08:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:55.547 08:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:55.547 08:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:55.547 08:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:55.547 08:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:55.547 08:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:55.547 08:06:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.547 08:06:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.547 08:06:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.547 08:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:55.547 08:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:55.805 00:19:55.805 08:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:55.805 08:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:55.805 08:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:56.062 08:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:56.062 08:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:56.062 08:06:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.062 08:06:47 nvmf_tcp.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:19:56.062 08:06:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.062 08:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:56.062 { 00:19:56.062 "cntlid": 51, 00:19:56.062 "qid": 0, 00:19:56.062 "state": "enabled", 00:19:56.062 "thread": "nvmf_tgt_poll_group_000", 00:19:56.062 "listen_address": { 00:19:56.062 "trtype": "TCP", 00:19:56.062 "adrfam": "IPv4", 00:19:56.062 "traddr": "10.0.0.2", 00:19:56.062 "trsvcid": "4420" 00:19:56.062 }, 00:19:56.062 "peer_address": { 00:19:56.062 "trtype": "TCP", 00:19:56.062 "adrfam": "IPv4", 00:19:56.062 "traddr": "10.0.0.1", 00:19:56.062 "trsvcid": "42718" 00:19:56.062 }, 00:19:56.062 "auth": { 00:19:56.062 "state": "completed", 00:19:56.062 "digest": "sha384", 00:19:56.062 "dhgroup": "null" 00:19:56.062 } 00:19:56.062 } 00:19:56.062 ]' 00:19:56.062 08:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:56.320 08:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:56.320 08:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:56.320 08:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:56.320 08:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:56.320 08:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:56.320 08:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:56.320 08:06:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:56.578 08:06:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YTQ5NTBhZWUyZWU0Y2I1NGY4YTllZDU3MjkxODJiZjRSO1eA: --dhchap-ctrl-secret DHHC-1:02:OTc1ZjllNWMwZjY1Y2M2ZmEwMGY4NThjNzUyZmFiNDE0YWNmOWFlNWQ0MWZmYThinZpsJw==: 00:19:57.510 08:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:57.510 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:57.510 08:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:57.510 08:06:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.510 08:06:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.510 08:06:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.510 08:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:57.510 08:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:57.510 08:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:57.769 08:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:19:57.769 
08:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:57.769 08:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:57.769 08:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:57.769 08:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:57.769 08:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:57.769 08:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:57.769 08:06:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.769 08:06:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.769 08:06:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.769 08:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:57.769 08:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:58.032 00:19:58.032 08:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:58.032 08:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:58.032 08:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:58.290 08:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:58.290 08:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:58.290 08:06:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.290 08:06:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.290 08:06:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.290 08:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:58.290 { 00:19:58.290 "cntlid": 53, 00:19:58.290 "qid": 0, 00:19:58.290 "state": "enabled", 00:19:58.290 "thread": "nvmf_tgt_poll_group_000", 00:19:58.290 "listen_address": { 00:19:58.290 "trtype": "TCP", 00:19:58.290 "adrfam": "IPv4", 00:19:58.290 "traddr": "10.0.0.2", 00:19:58.290 "trsvcid": "4420" 00:19:58.290 }, 00:19:58.290 "peer_address": { 00:19:58.290 "trtype": "TCP", 00:19:58.290 "adrfam": "IPv4", 00:19:58.290 "traddr": "10.0.0.1", 00:19:58.290 "trsvcid": "37838" 00:19:58.290 }, 00:19:58.290 "auth": { 00:19:58.290 "state": "completed", 00:19:58.290 "digest": "sha384", 00:19:58.290 "dhgroup": "null" 00:19:58.290 } 00:19:58.290 } 00:19:58.290 ]' 00:19:58.290 08:06:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:58.290 08:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 
== \s\h\a\3\8\4 ]] 00:19:58.290 08:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:58.546 08:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:58.546 08:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:58.546 08:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:58.546 08:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:58.546 08:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:58.803 08:06:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MzljOGM2YTIwOGQxOWU0N2U1ZTdhMmNmZWU4NzhmNzE3Yjc2ZjAxZDFkMWFjOGUynRyBfA==: --dhchap-ctrl-secret DHHC-1:01:YTQyNDllZGYwOTA2OTgyMmVkMzYwMjQ3MTIzYjNhNmPyVN1I: 00:19:59.784 08:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:59.784 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:59.784 08:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:59.784 08:06:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.784 08:06:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.784 08:06:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.784 08:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:59.784 08:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:59.784 08:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:00.041 08:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:20:00.042 08:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:00.042 08:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:00.042 08:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:00.042 08:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:00.042 08:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:00.042 08:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:00.042 08:06:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.042 08:06:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.042 08:06:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.042 08:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:00.042 08:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:00.299 00:20:00.299 08:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:00.299 08:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:00.299 08:06:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:00.557 08:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:00.557 08:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:00.557 08:06:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.557 08:06:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.557 08:06:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.557 08:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:00.557 { 00:20:00.557 "cntlid": 55, 00:20:00.557 "qid": 0, 00:20:00.557 "state": "enabled", 00:20:00.557 "thread": "nvmf_tgt_poll_group_000", 00:20:00.557 "listen_address": { 00:20:00.557 "trtype": "TCP", 00:20:00.557 "adrfam": "IPv4", 00:20:00.557 "traddr": "10.0.0.2", 00:20:00.557 "trsvcid": "4420" 00:20:00.557 }, 00:20:00.557 "peer_address": { 00:20:00.557 "trtype": "TCP", 00:20:00.557 "adrfam": "IPv4", 00:20:00.557 "traddr": "10.0.0.1", 00:20:00.557 "trsvcid": "37858" 00:20:00.557 }, 00:20:00.557 "auth": { 00:20:00.557 "state": "completed", 00:20:00.557 "digest": "sha384", 00:20:00.557 "dhgroup": "null" 00:20:00.557 } 00:20:00.557 } 00:20:00.557 ]' 00:20:00.557 08:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:00.557 08:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:00.557 08:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:00.557 08:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:00.557 08:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:00.557 08:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:00.814 08:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:00.814 08:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:00.814 08:06:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YThkMmM0Njg5YTNlMWNkZjFmY2E5OTA4ZDNlMTkwYzA0MmVhZmUwYTI0MjA3NTBhNjMyNmMyNzQ0MDYxMzk3OGm+Ivs=: 00:20:02.184 08:06:53 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:02.184 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:02.184 08:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:02.184 08:06:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.184 08:06:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.184 08:06:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.184 08:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:02.184 08:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:02.184 08:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:02.184 08:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:02.184 08:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:20:02.184 08:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:02.184 08:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:02.184 08:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:02.184 08:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:02.184 08:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:02.184 08:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:02.184 08:06:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.184 08:06:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.184 08:06:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.184 08:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:02.184 08:06:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:02.442 00:20:02.442 08:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:02.442 08:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:02.442 08:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:02.701 08:06:54 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:02.701 08:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:02.701 08:06:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.701 08:06:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.701 08:06:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.701 08:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:02.701 { 00:20:02.701 "cntlid": 57, 00:20:02.701 "qid": 0, 00:20:02.701 "state": "enabled", 00:20:02.701 "thread": "nvmf_tgt_poll_group_000", 00:20:02.701 "listen_address": { 00:20:02.701 "trtype": "TCP", 00:20:02.701 "adrfam": "IPv4", 00:20:02.701 "traddr": "10.0.0.2", 00:20:02.701 "trsvcid": "4420" 00:20:02.701 }, 00:20:02.701 "peer_address": { 00:20:02.701 "trtype": "TCP", 00:20:02.701 "adrfam": "IPv4", 00:20:02.701 "traddr": "10.0.0.1", 00:20:02.701 "trsvcid": "37888" 00:20:02.701 }, 00:20:02.701 "auth": { 00:20:02.701 "state": "completed", 00:20:02.701 "digest": "sha384", 00:20:02.701 "dhgroup": "ffdhe2048" 00:20:02.701 } 00:20:02.701 } 00:20:02.701 ]' 00:20:02.701 08:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:02.959 08:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:02.959 08:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:02.959 08:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:02.959 08:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:02.959 08:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:02.959 08:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:02.959 08:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:03.217 08:06:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:Y2UxMWYyYzEwNjJhZTg5NDQxNWFiZDU2OWU2MTNjMzU5ZjlhMjA2ZjU1ZDU2Yjc3INMyEQ==: --dhchap-ctrl-secret DHHC-1:03:MGFiMjAwNDdkNTdjOWQzMjliMmM3NmQwMGNhM2EwYWExYTk4OTZjMDViODJiYWViYTk4M2U3ZmRlMmZlMzM1YVwGEV4=: 00:20:04.150 08:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:04.150 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:04.150 08:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:04.150 08:06:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.150 08:06:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.150 08:06:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.150 08:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:04.150 08:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:04.150 08:06:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:04.408 08:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:20:04.408 08:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:04.408 08:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:04.408 08:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:04.408 08:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:04.408 08:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:04.408 08:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:04.408 08:06:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.408 08:06:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.408 08:06:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.408 08:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:04.408 08:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:04.666 00:20:04.666 08:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:04.666 08:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:04.666 08:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:04.924 08:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:04.924 08:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:04.924 08:06:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.924 08:06:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.924 08:06:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.924 08:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:04.924 { 00:20:04.924 "cntlid": 59, 00:20:04.924 "qid": 0, 00:20:04.924 "state": "enabled", 00:20:04.924 "thread": "nvmf_tgt_poll_group_000", 00:20:04.924 "listen_address": { 00:20:04.924 "trtype": "TCP", 00:20:04.924 "adrfam": "IPv4", 00:20:04.924 "traddr": "10.0.0.2", 00:20:04.924 "trsvcid": "4420" 00:20:04.924 }, 00:20:04.924 "peer_address": { 00:20:04.924 "trtype": "TCP", 00:20:04.924 "adrfam": "IPv4", 00:20:04.924 
"traddr": "10.0.0.1", 00:20:04.924 "trsvcid": "37906" 00:20:04.924 }, 00:20:04.924 "auth": { 00:20:04.924 "state": "completed", 00:20:04.924 "digest": "sha384", 00:20:04.924 "dhgroup": "ffdhe2048" 00:20:04.924 } 00:20:04.924 } 00:20:04.924 ]' 00:20:04.924 08:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:04.924 08:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:04.924 08:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:05.182 08:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:05.182 08:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:05.182 08:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:05.182 08:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:05.182 08:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:05.440 08:06:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YTQ5NTBhZWUyZWU0Y2I1NGY4YTllZDU3MjkxODJiZjRSO1eA: --dhchap-ctrl-secret DHHC-1:02:OTc1ZjllNWMwZjY1Y2M2ZmEwMGY4NThjNzUyZmFiNDE0YWNmOWFlNWQ0MWZmYThinZpsJw==: 00:20:06.374 08:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:06.374 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:06.374 08:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:06.374 08:06:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.374 08:06:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.374 08:06:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.374 08:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:06.374 08:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:06.374 08:06:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:06.632 08:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:20:06.632 08:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:06.632 08:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:06.632 08:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:06.632 08:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:06.632 08:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:06.632 08:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:06.632 08:06:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.632 08:06:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.632 08:06:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.632 08:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:06.632 08:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:06.890 00:20:06.890 08:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:06.890 08:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:06.890 08:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:07.148 08:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:07.148 08:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:07.148 08:06:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.148 08:06:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.148 08:06:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.148 08:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:07.148 { 00:20:07.148 "cntlid": 61, 00:20:07.148 "qid": 0, 00:20:07.148 "state": "enabled", 00:20:07.148 "thread": "nvmf_tgt_poll_group_000", 00:20:07.148 "listen_address": { 00:20:07.148 "trtype": "TCP", 00:20:07.148 "adrfam": "IPv4", 00:20:07.148 "traddr": "10.0.0.2", 00:20:07.148 "trsvcid": "4420" 00:20:07.148 }, 00:20:07.148 "peer_address": { 00:20:07.148 "trtype": "TCP", 00:20:07.148 "adrfam": "IPv4", 00:20:07.148 "traddr": "10.0.0.1", 00:20:07.148 "trsvcid": "35260" 00:20:07.148 }, 00:20:07.148 "auth": { 00:20:07.148 "state": "completed", 00:20:07.148 "digest": "sha384", 00:20:07.148 "dhgroup": "ffdhe2048" 00:20:07.148 } 00:20:07.148 } 00:20:07.148 ]' 00:20:07.148 08:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:07.148 08:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:07.148 08:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:07.149 08:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:07.149 08:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:07.149 08:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:07.149 08:06:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:07.149 08:06:58 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:07.407 08:06:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MzljOGM2YTIwOGQxOWU0N2U1ZTdhMmNmZWU4NzhmNzE3Yjc2ZjAxZDFkMWFjOGUynRyBfA==: --dhchap-ctrl-secret DHHC-1:01:YTQyNDllZGYwOTA2OTgyMmVkMzYwMjQ3MTIzYjNhNmPyVN1I: 00:20:08.341 08:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:08.341 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:08.341 08:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:08.341 08:07:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.341 08:07:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.599 08:07:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.599 08:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:08.599 08:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:08.599 08:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:08.599 08:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:20:08.599 08:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:08.599 08:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:08.599 08:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:08.599 08:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:08.599 08:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:08.599 08:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:08.599 08:07:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.599 08:07:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.857 08:07:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.857 08:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:08.857 08:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:09.115 00:20:09.115 08:07:00 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:09.115 08:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:09.115 08:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:09.373 08:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:09.373 08:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:09.373 08:07:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.373 08:07:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.373 08:07:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.373 08:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:09.373 { 00:20:09.373 "cntlid": 63, 00:20:09.373 "qid": 0, 00:20:09.373 "state": "enabled", 00:20:09.373 "thread": "nvmf_tgt_poll_group_000", 00:20:09.373 "listen_address": { 00:20:09.373 "trtype": "TCP", 00:20:09.373 "adrfam": "IPv4", 00:20:09.373 "traddr": "10.0.0.2", 00:20:09.373 "trsvcid": "4420" 00:20:09.373 }, 00:20:09.373 "peer_address": { 00:20:09.373 "trtype": "TCP", 00:20:09.373 "adrfam": "IPv4", 00:20:09.373 "traddr": "10.0.0.1", 00:20:09.373 "trsvcid": "35290" 00:20:09.373 }, 00:20:09.373 "auth": { 00:20:09.373 "state": "completed", 00:20:09.373 "digest": "sha384", 00:20:09.373 "dhgroup": "ffdhe2048" 00:20:09.373 } 00:20:09.373 } 00:20:09.373 ]' 00:20:09.373 08:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:09.373 08:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:09.373 08:07:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:09.373 08:07:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:09.373 08:07:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:09.373 08:07:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:09.373 08:07:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:09.373 08:07:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:09.631 08:07:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YThkMmM0Njg5YTNlMWNkZjFmY2E5OTA4ZDNlMTkwYzA0MmVhZmUwYTI0MjA3NTBhNjMyNmMyNzQ0MDYxMzk3OGm+Ivs=: 00:20:10.602 08:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:10.602 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:10.602 08:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:10.602 08:07:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.602 08:07:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
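[Annotation] The ffdhe2048 passes above each drive one complete DH-HMAC-CHAP cycle per key. Stripped of xtrace noise, the sequence is sketched below as a minimal bash fragment; the rpc.py path, NQNs, host socket, and key names (key1/ckey1 and so on) are taken from this run, while the target app's default RPC socket is an assumption, since rpc_cmd never prints it.

    # Minimal sketch of one connect_authenticate cycle (assumptions noted above).
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    tgt_rpc()  { "$rpc" "$@"; }                        # target app (rpc_cmd; default socket assumed)
    host_rpc() { "$rpc" -s /var/tmp/host.sock "$@"; }  # host-side SPDK app (hostrpc)
    subnqn=nqn.2024-03.io.spdk:cnode0
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55

    # Pin the initiator to one digest/dhgroup pair for this pass.
    host_rpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048

    # Authorize the host with a key; the controller key is optional (key3 above is
    # added without one, so that pass exercises unidirectional authentication only).
    tgt_rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # Attach from the host side; this only succeeds if the handshake completes.
    host_rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n "$subnqn" --dhchap-key key1 --dhchap-ctrlr-key ckey1
    host_rpc bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0

    # Confirm the qpair really negotiated sha384/ffdhe2048 and reached "completed".
    tgt_rpc nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth'

    # Tear down before the next key.
    host_rpc bdev_nvme_detach_controller nvme0

The same pass is then repeated from the kernel initiator (nvme connect with --dhchap-secret/--dhchap-ctrl-secret, then nvme disconnect) before nvmf_subsystem_remove_host clears the host entry for the next keyid, which is exactly the pattern the trace continues with below.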
00:20:10.602 08:07:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.602 08:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:10.602 08:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:10.602 08:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:10.602 08:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:10.870 08:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:20:10.870 08:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:10.870 08:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:10.870 08:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:10.870 08:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:10.870 08:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:10.870 08:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:10.870 08:07:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.870 08:07:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.870 08:07:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.870 08:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:10.870 08:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:11.433 00:20:11.433 08:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:11.433 08:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:11.433 08:07:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:11.433 08:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:11.433 08:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:11.433 08:07:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.433 08:07:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.690 08:07:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.690 08:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:11.690 { 
00:20:11.690 "cntlid": 65, 00:20:11.690 "qid": 0, 00:20:11.690 "state": "enabled", 00:20:11.691 "thread": "nvmf_tgt_poll_group_000", 00:20:11.691 "listen_address": { 00:20:11.691 "trtype": "TCP", 00:20:11.691 "adrfam": "IPv4", 00:20:11.691 "traddr": "10.0.0.2", 00:20:11.691 "trsvcid": "4420" 00:20:11.691 }, 00:20:11.691 "peer_address": { 00:20:11.691 "trtype": "TCP", 00:20:11.691 "adrfam": "IPv4", 00:20:11.691 "traddr": "10.0.0.1", 00:20:11.691 "trsvcid": "35302" 00:20:11.691 }, 00:20:11.691 "auth": { 00:20:11.691 "state": "completed", 00:20:11.691 "digest": "sha384", 00:20:11.691 "dhgroup": "ffdhe3072" 00:20:11.691 } 00:20:11.691 } 00:20:11.691 ]' 00:20:11.691 08:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:11.691 08:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:11.691 08:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:11.691 08:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:11.691 08:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:11.691 08:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:11.691 08:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:11.691 08:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:11.948 08:07:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:Y2UxMWYyYzEwNjJhZTg5NDQxNWFiZDU2OWU2MTNjMzU5ZjlhMjA2ZjU1ZDU2Yjc3INMyEQ==: --dhchap-ctrl-secret DHHC-1:03:MGFiMjAwNDdkNTdjOWQzMjliMmM3NmQwMGNhM2EwYWExYTk4OTZjMDViODJiYWViYTk4M2U3ZmRlMmZlMzM1YVwGEV4=: 00:20:12.881 08:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:12.881 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:12.881 08:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:12.881 08:07:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.881 08:07:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.881 08:07:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.881 08:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:12.881 08:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:12.881 08:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:13.138 08:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:20:13.138 08:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:13.138 08:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- 
# digest=sha384 00:20:13.138 08:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:13.138 08:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:13.138 08:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:13.138 08:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:13.138 08:07:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.138 08:07:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.138 08:07:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.138 08:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:13.138 08:07:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:13.395 00:20:13.395 08:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:13.395 08:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:13.395 08:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:13.653 08:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:13.653 08:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:13.653 08:07:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.653 08:07:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.653 08:07:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.653 08:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:13.653 { 00:20:13.653 "cntlid": 67, 00:20:13.653 "qid": 0, 00:20:13.653 "state": "enabled", 00:20:13.653 "thread": "nvmf_tgt_poll_group_000", 00:20:13.653 "listen_address": { 00:20:13.653 "trtype": "TCP", 00:20:13.653 "adrfam": "IPv4", 00:20:13.653 "traddr": "10.0.0.2", 00:20:13.653 "trsvcid": "4420" 00:20:13.653 }, 00:20:13.653 "peer_address": { 00:20:13.653 "trtype": "TCP", 00:20:13.653 "adrfam": "IPv4", 00:20:13.653 "traddr": "10.0.0.1", 00:20:13.653 "trsvcid": "35322" 00:20:13.653 }, 00:20:13.653 "auth": { 00:20:13.653 "state": "completed", 00:20:13.653 "digest": "sha384", 00:20:13.653 "dhgroup": "ffdhe3072" 00:20:13.653 } 00:20:13.653 } 00:20:13.653 ]' 00:20:13.653 08:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:13.910 08:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:13.910 08:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:13.910 08:07:05 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:13.910 08:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:13.910 08:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:13.910 08:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:13.910 08:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:14.167 08:07:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YTQ5NTBhZWUyZWU0Y2I1NGY4YTllZDU3MjkxODJiZjRSO1eA: --dhchap-ctrl-secret DHHC-1:02:OTc1ZjllNWMwZjY1Y2M2ZmEwMGY4NThjNzUyZmFiNDE0YWNmOWFlNWQ0MWZmYThinZpsJw==: 00:20:15.099 08:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:15.099 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:15.099 08:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:15.099 08:07:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.099 08:07:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.099 08:07:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.099 08:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:15.099 08:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:15.099 08:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:15.357 08:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:20:15.357 08:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:15.357 08:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:15.357 08:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:15.357 08:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:15.357 08:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:15.357 08:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:15.357 08:07:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.357 08:07:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.357 08:07:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.357 08:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:15.357 08:07:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:15.614 00:20:15.614 08:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:15.614 08:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:15.614 08:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:15.871 08:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:15.871 08:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:15.871 08:07:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.871 08:07:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.871 08:07:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.871 08:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:15.871 { 00:20:15.871 "cntlid": 69, 00:20:15.871 "qid": 0, 00:20:15.871 "state": "enabled", 00:20:15.871 "thread": "nvmf_tgt_poll_group_000", 00:20:15.871 "listen_address": { 00:20:15.871 "trtype": "TCP", 00:20:15.871 "adrfam": "IPv4", 00:20:15.871 "traddr": "10.0.0.2", 00:20:15.871 "trsvcid": "4420" 00:20:15.871 }, 00:20:15.871 "peer_address": { 00:20:15.871 "trtype": "TCP", 00:20:15.871 "adrfam": "IPv4", 00:20:15.871 "traddr": "10.0.0.1", 00:20:15.871 "trsvcid": "35350" 00:20:15.871 }, 00:20:15.871 "auth": { 00:20:15.871 "state": "completed", 00:20:15.871 "digest": "sha384", 00:20:15.871 "dhgroup": "ffdhe3072" 00:20:15.871 } 00:20:15.871 } 00:20:15.871 ]' 00:20:15.871 08:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:15.871 08:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:15.871 08:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:16.128 08:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:16.128 08:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:16.128 08:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:16.128 08:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:16.128 08:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:16.385 08:07:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MzljOGM2YTIwOGQxOWU0N2U1ZTdhMmNmZWU4NzhmNzE3Yjc2ZjAxZDFkMWFjOGUynRyBfA==: --dhchap-ctrl-secret 
DHHC-1:01:YTQyNDllZGYwOTA2OTgyMmVkMzYwMjQ3MTIzYjNhNmPyVN1I: 00:20:17.314 08:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:17.314 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:17.314 08:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:17.314 08:07:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.314 08:07:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.314 08:07:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.314 08:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:17.314 08:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:17.314 08:07:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:17.571 08:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:20:17.571 08:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:17.571 08:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:17.571 08:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:17.571 08:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:17.571 08:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:17.571 08:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:17.571 08:07:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.571 08:07:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.571 08:07:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.571 08:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:17.571 08:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:17.828 00:20:17.828 08:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:17.828 08:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:17.828 08:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:18.085 08:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:18.085 08:07:09 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:18.085 08:07:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.085 08:07:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.085 08:07:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.085 08:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:18.085 { 00:20:18.085 "cntlid": 71, 00:20:18.085 "qid": 0, 00:20:18.085 "state": "enabled", 00:20:18.085 "thread": "nvmf_tgt_poll_group_000", 00:20:18.085 "listen_address": { 00:20:18.085 "trtype": "TCP", 00:20:18.085 "adrfam": "IPv4", 00:20:18.085 "traddr": "10.0.0.2", 00:20:18.085 "trsvcid": "4420" 00:20:18.085 }, 00:20:18.085 "peer_address": { 00:20:18.085 "trtype": "TCP", 00:20:18.085 "adrfam": "IPv4", 00:20:18.085 "traddr": "10.0.0.1", 00:20:18.085 "trsvcid": "54510" 00:20:18.085 }, 00:20:18.085 "auth": { 00:20:18.085 "state": "completed", 00:20:18.085 "digest": "sha384", 00:20:18.085 "dhgroup": "ffdhe3072" 00:20:18.085 } 00:20:18.085 } 00:20:18.085 ]' 00:20:18.085 08:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:18.085 08:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:18.085 08:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:18.342 08:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:18.342 08:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:18.342 08:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:18.342 08:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:18.342 08:07:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:18.599 08:07:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YThkMmM0Njg5YTNlMWNkZjFmY2E5OTA4ZDNlMTkwYzA0MmVhZmUwYTI0MjA3NTBhNjMyNmMyNzQ0MDYxMzk3OGm+Ivs=: 00:20:19.530 08:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:19.530 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:19.530 08:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:19.530 08:07:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.530 08:07:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.530 08:07:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.530 08:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:19.530 08:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:19.530 08:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:19.530 08:07:11 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:19.787 08:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:20:19.787 08:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:19.787 08:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:19.787 08:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:19.787 08:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:19.787 08:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:19.787 08:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:19.787 08:07:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.787 08:07:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.787 08:07:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.787 08:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:19.787 08:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:20.352 00:20:20.352 08:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:20.352 08:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:20.352 08:07:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:20.609 08:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:20.609 08:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:20.609 08:07:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.609 08:07:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.609 08:07:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.609 08:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:20.609 { 00:20:20.609 "cntlid": 73, 00:20:20.609 "qid": 0, 00:20:20.609 "state": "enabled", 00:20:20.609 "thread": "nvmf_tgt_poll_group_000", 00:20:20.609 "listen_address": { 00:20:20.609 "trtype": "TCP", 00:20:20.609 "adrfam": "IPv4", 00:20:20.609 "traddr": "10.0.0.2", 00:20:20.609 "trsvcid": "4420" 00:20:20.609 }, 00:20:20.609 "peer_address": { 00:20:20.609 "trtype": "TCP", 00:20:20.610 "adrfam": "IPv4", 00:20:20.610 "traddr": "10.0.0.1", 00:20:20.610 "trsvcid": "54530" 00:20:20.610 }, 00:20:20.610 "auth": { 00:20:20.610 
"state": "completed", 00:20:20.610 "digest": "sha384", 00:20:20.610 "dhgroup": "ffdhe4096" 00:20:20.610 } 00:20:20.610 } 00:20:20.610 ]' 00:20:20.610 08:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:20.610 08:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:20.610 08:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:20.610 08:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:20.610 08:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:20.610 08:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:20.610 08:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:20.610 08:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:20.867 08:07:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:Y2UxMWYyYzEwNjJhZTg5NDQxNWFiZDU2OWU2MTNjMzU5ZjlhMjA2ZjU1ZDU2Yjc3INMyEQ==: --dhchap-ctrl-secret DHHC-1:03:MGFiMjAwNDdkNTdjOWQzMjliMmM3NmQwMGNhM2EwYWExYTk4OTZjMDViODJiYWViYTk4M2U3ZmRlMmZlMzM1YVwGEV4=: 00:20:22.238 08:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:22.238 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:22.238 08:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:22.238 08:07:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.238 08:07:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.238 08:07:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.238 08:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:22.238 08:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:22.238 08:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:22.238 08:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:20:22.238 08:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:22.238 08:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:22.238 08:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:22.238 08:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:22.238 08:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:22.238 08:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:22.238 08:07:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.238 08:07:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.238 08:07:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.238 08:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:22.238 08:07:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:22.496 00:20:22.496 08:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:22.496 08:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:22.496 08:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:22.762 08:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:22.762 08:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:22.762 08:07:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.762 08:07:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.762 08:07:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.762 08:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:22.762 { 00:20:22.762 "cntlid": 75, 00:20:22.762 "qid": 0, 00:20:22.762 "state": "enabled", 00:20:22.762 "thread": "nvmf_tgt_poll_group_000", 00:20:22.762 "listen_address": { 00:20:22.762 "trtype": "TCP", 00:20:22.762 "adrfam": "IPv4", 00:20:22.762 "traddr": "10.0.0.2", 00:20:22.762 "trsvcid": "4420" 00:20:22.762 }, 00:20:22.762 "peer_address": { 00:20:22.762 "trtype": "TCP", 00:20:22.762 "adrfam": "IPv4", 00:20:22.762 "traddr": "10.0.0.1", 00:20:22.762 "trsvcid": "54546" 00:20:22.762 }, 00:20:22.762 "auth": { 00:20:22.762 "state": "completed", 00:20:22.762 "digest": "sha384", 00:20:22.762 "dhgroup": "ffdhe4096" 00:20:22.762 } 00:20:22.762 } 00:20:22.762 ]' 00:20:22.762 08:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:23.065 08:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:23.065 08:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:23.065 08:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:23.065 08:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:23.065 08:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:23.065 08:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:23.065 08:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:23.322 08:07:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YTQ5NTBhZWUyZWU0Y2I1NGY4YTllZDU3MjkxODJiZjRSO1eA: --dhchap-ctrl-secret DHHC-1:02:OTc1ZjllNWMwZjY1Y2M2ZmEwMGY4NThjNzUyZmFiNDE0YWNmOWFlNWQ0MWZmYThinZpsJw==: 00:20:24.254 08:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:24.254 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:24.254 08:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:24.254 08:07:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.254 08:07:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.254 08:07:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.254 08:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:24.254 08:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:24.254 08:07:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:24.511 08:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:20:24.511 08:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:24.511 08:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:24.511 08:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:24.511 08:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:24.511 08:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:24.511 08:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:24.511 08:07:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.511 08:07:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.511 08:07:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.511 08:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:24.511 08:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:20:24.769 00:20:24.769 08:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:24.769 08:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:24.769 08:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:25.026 08:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:25.026 08:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:25.026 08:07:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.026 08:07:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.026 08:07:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.026 08:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:25.026 { 00:20:25.026 "cntlid": 77, 00:20:25.026 "qid": 0, 00:20:25.026 "state": "enabled", 00:20:25.026 "thread": "nvmf_tgt_poll_group_000", 00:20:25.026 "listen_address": { 00:20:25.026 "trtype": "TCP", 00:20:25.026 "adrfam": "IPv4", 00:20:25.026 "traddr": "10.0.0.2", 00:20:25.026 "trsvcid": "4420" 00:20:25.026 }, 00:20:25.026 "peer_address": { 00:20:25.026 "trtype": "TCP", 00:20:25.026 "adrfam": "IPv4", 00:20:25.026 "traddr": "10.0.0.1", 00:20:25.026 "trsvcid": "54576" 00:20:25.026 }, 00:20:25.026 "auth": { 00:20:25.026 "state": "completed", 00:20:25.026 "digest": "sha384", 00:20:25.026 "dhgroup": "ffdhe4096" 00:20:25.026 } 00:20:25.026 } 00:20:25.026 ]' 00:20:25.026 08:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:25.283 08:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:25.283 08:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:25.283 08:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:25.283 08:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:25.283 08:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:25.284 08:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:25.284 08:07:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:25.541 08:07:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MzljOGM2YTIwOGQxOWU0N2U1ZTdhMmNmZWU4NzhmNzE3Yjc2ZjAxZDFkMWFjOGUynRyBfA==: --dhchap-ctrl-secret DHHC-1:01:YTQyNDllZGYwOTA2OTgyMmVkMzYwMjQ3MTIzYjNhNmPyVN1I: 00:20:26.473 08:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:26.473 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:26.473 08:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:26.473 08:07:18 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.473 08:07:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.473 08:07:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.473 08:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:26.473 08:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:26.473 08:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:26.730 08:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:20:26.730 08:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:26.730 08:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:26.730 08:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:26.730 08:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:26.730 08:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:26.730 08:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:26.730 08:07:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.730 08:07:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.730 08:07:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.730 08:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:26.730 08:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:27.295 00:20:27.295 08:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:27.295 08:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:27.295 08:07:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:27.552 08:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:27.552 08:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:27.552 08:07:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.552 08:07:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.552 08:07:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.552 08:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:27.552 { 00:20:27.552 "cntlid": 79, 00:20:27.552 "qid": 
0, 00:20:27.552 "state": "enabled", 00:20:27.552 "thread": "nvmf_tgt_poll_group_000", 00:20:27.552 "listen_address": { 00:20:27.552 "trtype": "TCP", 00:20:27.552 "adrfam": "IPv4", 00:20:27.552 "traddr": "10.0.0.2", 00:20:27.552 "trsvcid": "4420" 00:20:27.552 }, 00:20:27.552 "peer_address": { 00:20:27.552 "trtype": "TCP", 00:20:27.552 "adrfam": "IPv4", 00:20:27.552 "traddr": "10.0.0.1", 00:20:27.552 "trsvcid": "45312" 00:20:27.552 }, 00:20:27.552 "auth": { 00:20:27.552 "state": "completed", 00:20:27.552 "digest": "sha384", 00:20:27.552 "dhgroup": "ffdhe4096" 00:20:27.552 } 00:20:27.552 } 00:20:27.552 ]' 00:20:27.552 08:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:27.552 08:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:27.552 08:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:27.552 08:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:27.552 08:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:27.552 08:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:27.552 08:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:27.552 08:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:27.810 08:07:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YThkMmM0Njg5YTNlMWNkZjFmY2E5OTA4ZDNlMTkwYzA0MmVhZmUwYTI0MjA3NTBhNjMyNmMyNzQ0MDYxMzk3OGm+Ivs=: 00:20:28.742 08:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:28.742 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:28.742 08:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:28.742 08:07:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.742 08:07:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.742 08:07:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.742 08:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:28.742 08:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:28.742 08:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:28.742 08:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:29.000 08:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:20:29.000 08:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:29.000 08:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:29.000 08:07:20 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:29.000 08:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:29.000 08:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:29.000 08:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:29.000 08:07:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.000 08:07:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.000 08:07:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.000 08:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:29.000 08:07:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:29.564 00:20:29.564 08:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:29.564 08:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:29.564 08:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:29.821 08:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:29.821 08:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:29.821 08:07:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.821 08:07:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.821 08:07:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.821 08:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:29.821 { 00:20:29.821 "cntlid": 81, 00:20:29.821 "qid": 0, 00:20:29.821 "state": "enabled", 00:20:29.821 "thread": "nvmf_tgt_poll_group_000", 00:20:29.821 "listen_address": { 00:20:29.821 "trtype": "TCP", 00:20:29.821 "adrfam": "IPv4", 00:20:29.821 "traddr": "10.0.0.2", 00:20:29.821 "trsvcid": "4420" 00:20:29.821 }, 00:20:29.821 "peer_address": { 00:20:29.821 "trtype": "TCP", 00:20:29.821 "adrfam": "IPv4", 00:20:29.821 "traddr": "10.0.0.1", 00:20:29.821 "trsvcid": "45332" 00:20:29.821 }, 00:20:29.821 "auth": { 00:20:29.821 "state": "completed", 00:20:29.821 "digest": "sha384", 00:20:29.821 "dhgroup": "ffdhe6144" 00:20:29.821 } 00:20:29.821 } 00:20:29.821 ]' 00:20:29.821 08:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:30.079 08:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:30.079 08:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:30.079 08:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ 
ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:30.079 08:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:30.079 08:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:30.079 08:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:30.079 08:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:30.335 08:07:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:Y2UxMWYyYzEwNjJhZTg5NDQxNWFiZDU2OWU2MTNjMzU5ZjlhMjA2ZjU1ZDU2Yjc3INMyEQ==: --dhchap-ctrl-secret DHHC-1:03:MGFiMjAwNDdkNTdjOWQzMjliMmM3NmQwMGNhM2EwYWExYTk4OTZjMDViODJiYWViYTk4M2U3ZmRlMmZlMzM1YVwGEV4=: 00:20:31.266 08:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:31.266 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:31.266 08:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:31.266 08:07:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.266 08:07:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.266 08:07:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.266 08:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:31.266 08:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:31.266 08:07:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:31.524 08:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:20:31.524 08:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:31.524 08:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:31.524 08:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:31.524 08:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:31.524 08:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:31.524 08:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:31.524 08:07:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:31.524 08:07:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.524 08:07:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:31.524 08:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:31.524 08:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:32.088 00:20:32.088 08:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:32.088 08:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:32.088 08:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:32.345 08:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:32.345 08:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:32.345 08:07:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.345 08:07:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.345 08:07:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.345 08:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:32.345 { 00:20:32.345 "cntlid": 83, 00:20:32.345 "qid": 0, 00:20:32.345 "state": "enabled", 00:20:32.345 "thread": "nvmf_tgt_poll_group_000", 00:20:32.345 "listen_address": { 00:20:32.345 "trtype": "TCP", 00:20:32.345 "adrfam": "IPv4", 00:20:32.345 "traddr": "10.0.0.2", 00:20:32.345 "trsvcid": "4420" 00:20:32.345 }, 00:20:32.345 "peer_address": { 00:20:32.345 "trtype": "TCP", 00:20:32.345 "adrfam": "IPv4", 00:20:32.345 "traddr": "10.0.0.1", 00:20:32.345 "trsvcid": "45366" 00:20:32.345 }, 00:20:32.345 "auth": { 00:20:32.345 "state": "completed", 00:20:32.345 "digest": "sha384", 00:20:32.345 "dhgroup": "ffdhe6144" 00:20:32.345 } 00:20:32.345 } 00:20:32.345 ]' 00:20:32.345 08:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:32.345 08:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:32.345 08:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:32.345 08:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:32.345 08:07:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:32.345 08:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:32.345 08:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:32.345 08:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:32.601 08:07:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YTQ5NTBhZWUyZWU0Y2I1NGY4YTllZDU3MjkxODJiZjRSO1eA: --dhchap-ctrl-secret 
DHHC-1:02:OTc1ZjllNWMwZjY1Y2M2ZmEwMGY4NThjNzUyZmFiNDE0YWNmOWFlNWQ0MWZmYThinZpsJw==: 00:20:33.532 08:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:33.532 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:33.532 08:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:33.532 08:07:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.532 08:07:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.532 08:07:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:33.532 08:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:33.532 08:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:33.532 08:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:33.789 08:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:20:33.789 08:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:33.789 08:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:33.789 08:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:33.789 08:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:33.789 08:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:33.789 08:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:33.789 08:07:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:33.789 08:07:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.047 08:07:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.047 08:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:34.047 08:07:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:34.304 00:20:34.562 08:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:34.562 08:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:34.562 08:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:34.562 08:07:26 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:34.562 08:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:34.562 08:07:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.562 08:07:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.819 08:07:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.819 08:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:34.819 { 00:20:34.819 "cntlid": 85, 00:20:34.819 "qid": 0, 00:20:34.819 "state": "enabled", 00:20:34.819 "thread": "nvmf_tgt_poll_group_000", 00:20:34.819 "listen_address": { 00:20:34.819 "trtype": "TCP", 00:20:34.819 "adrfam": "IPv4", 00:20:34.819 "traddr": "10.0.0.2", 00:20:34.819 "trsvcid": "4420" 00:20:34.819 }, 00:20:34.819 "peer_address": { 00:20:34.819 "trtype": "TCP", 00:20:34.819 "adrfam": "IPv4", 00:20:34.819 "traddr": "10.0.0.1", 00:20:34.819 "trsvcid": "45392" 00:20:34.819 }, 00:20:34.819 "auth": { 00:20:34.819 "state": "completed", 00:20:34.819 "digest": "sha384", 00:20:34.819 "dhgroup": "ffdhe6144" 00:20:34.819 } 00:20:34.819 } 00:20:34.819 ]' 00:20:34.819 08:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:34.819 08:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:34.819 08:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:34.819 08:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:34.819 08:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:34.819 08:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:34.819 08:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:34.819 08:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:35.076 08:07:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MzljOGM2YTIwOGQxOWU0N2U1ZTdhMmNmZWU4NzhmNzE3Yjc2ZjAxZDFkMWFjOGUynRyBfA==: --dhchap-ctrl-secret DHHC-1:01:YTQyNDllZGYwOTA2OTgyMmVkMzYwMjQ3MTIzYjNhNmPyVN1I: 00:20:36.048 08:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:36.048 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:36.048 08:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:36.048 08:07:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.048 08:07:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.048 08:07:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.048 08:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:36.048 08:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 
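For reference, every pass traced in this section is the same host-side sequence from target/auth.sh, parameterized by digest, dhgroup, and key index. A condensed sketch of one iteration follows; rpc() is a shorthand alias introduced here for the hostrpc wrapper seen in the trace, and the socket paths, NQNs, and key names are the test fixtures visible in this run:

# One connect_authenticate() iteration, host side (sketch).
rpc() { /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"; }
digest=sha384 dhgroup=ffdhe6144 key=key2

# Restrict the host to the digest/dhgroup combination under test.
rpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# Register the host NQN on the subsystem with its DH-HMAC-CHAP key; this
# goes to the target application (rpc_cmd in the trace), not host.sock.
# (key3 has no controller key in this run, so the trace omits
# --dhchap-ctrlr-key on that iteration.)
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
    --dhchap-key "$key" --dhchap-ctrlr-key "c$key"

# Attach a controller from the host; the DH-HMAC-CHAP handshake runs here.
rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
    -n nqn.2024-03.io.spdk:cnode0 --dhchap-key "$key" --dhchap-ctrlr-key "c$key"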
00:20:36.048 08:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:36.305 08:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:20:36.305 08:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:36.305 08:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:36.305 08:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:36.305 08:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:36.305 08:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:36.305 08:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:36.305 08:07:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.305 08:07:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.305 08:07:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.305 08:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:36.305 08:07:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:36.869 00:20:36.869 08:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:36.869 08:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:36.869 08:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:37.126 08:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:37.126 08:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:37.126 08:07:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.126 08:07:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.126 08:07:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.126 08:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:37.126 { 00:20:37.126 "cntlid": 87, 00:20:37.126 "qid": 0, 00:20:37.126 "state": "enabled", 00:20:37.126 "thread": "nvmf_tgt_poll_group_000", 00:20:37.126 "listen_address": { 00:20:37.126 "trtype": "TCP", 00:20:37.126 "adrfam": "IPv4", 00:20:37.126 "traddr": "10.0.0.2", 00:20:37.126 "trsvcid": "4420" 00:20:37.126 }, 00:20:37.126 "peer_address": { 00:20:37.126 "trtype": "TCP", 00:20:37.126 "adrfam": "IPv4", 00:20:37.126 "traddr": "10.0.0.1", 00:20:37.126 "trsvcid": "38666" 00:20:37.126 }, 00:20:37.126 "auth": { 00:20:37.126 "state": "completed", 
00:20:37.126 "digest": "sha384", 00:20:37.126 "dhgroup": "ffdhe6144" 00:20:37.126 } 00:20:37.126 } 00:20:37.126 ]' 00:20:37.126 08:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:37.126 08:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:37.126 08:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:37.126 08:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:37.126 08:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:37.126 08:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:37.126 08:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:37.126 08:07:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:37.384 08:07:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YThkMmM0Njg5YTNlMWNkZjFmY2E5OTA4ZDNlMTkwYzA0MmVhZmUwYTI0MjA3NTBhNjMyNmMyNzQ0MDYxMzk3OGm+Ivs=: 00:20:38.317 08:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:38.317 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:38.317 08:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:38.317 08:07:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.317 08:07:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.317 08:07:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.317 08:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:38.317 08:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:38.317 08:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:38.317 08:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:38.575 08:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:20:38.575 08:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:38.575 08:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:38.575 08:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:38.575 08:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:38.575 08:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:38.575 08:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:20:38.575 08:07:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.575 08:07:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.575 08:07:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.575 08:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:38.575 08:07:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:39.506 00:20:39.506 08:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:39.506 08:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:39.506 08:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:39.763 08:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:39.763 08:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:39.763 08:07:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.763 08:07:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.763 08:07:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.763 08:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:39.763 { 00:20:39.763 "cntlid": 89, 00:20:39.763 "qid": 0, 00:20:39.763 "state": "enabled", 00:20:39.763 "thread": "nvmf_tgt_poll_group_000", 00:20:39.763 "listen_address": { 00:20:39.763 "trtype": "TCP", 00:20:39.763 "adrfam": "IPv4", 00:20:39.763 "traddr": "10.0.0.2", 00:20:39.763 "trsvcid": "4420" 00:20:39.763 }, 00:20:39.763 "peer_address": { 00:20:39.763 "trtype": "TCP", 00:20:39.763 "adrfam": "IPv4", 00:20:39.763 "traddr": "10.0.0.1", 00:20:39.763 "trsvcid": "38680" 00:20:39.763 }, 00:20:39.763 "auth": { 00:20:39.763 "state": "completed", 00:20:39.763 "digest": "sha384", 00:20:39.763 "dhgroup": "ffdhe8192" 00:20:39.763 } 00:20:39.763 } 00:20:39.763 ]' 00:20:39.763 08:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:39.763 08:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:39.763 08:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:39.763 08:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:39.763 08:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:40.021 08:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:40.021 08:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:40.021 08:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:40.279 08:07:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:Y2UxMWYyYzEwNjJhZTg5NDQxNWFiZDU2OWU2MTNjMzU5ZjlhMjA2ZjU1ZDU2Yjc3INMyEQ==: --dhchap-ctrl-secret DHHC-1:03:MGFiMjAwNDdkNTdjOWQzMjliMmM3NmQwMGNhM2EwYWExYTk4OTZjMDViODJiYWViYTk4M2U3ZmRlMmZlMzM1YVwGEV4=: 00:20:41.211 08:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:41.211 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:41.211 08:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:41.211 08:07:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.211 08:07:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.211 08:07:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.211 08:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:41.211 08:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:41.211 08:07:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:41.467 08:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:20:41.467 08:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:41.467 08:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:41.467 08:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:41.467 08:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:41.467 08:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:41.467 08:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:41.467 08:07:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.467 08:07:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.467 08:07:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.467 08:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:41.467 08:07:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 
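The checks that follow each attach in this trace are equally mechanical: confirm the controller name, assert the negotiated auth parameters on the target's qpair listing, detach, then repeat the handshake once from the kernel initiator. A sketch under the same assumptions as the one above, shown here for the sha384/ffdhe8192/key1 pass (the DHHC-1 strings are the throwaway test secrets from this log, not real keys):

# Verify the negotiated session (target/auth.sh lines 44-49 in the trace).
[[ $(rpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
rpc bdev_nvme_detach_controller nvme0

# Same handshake from the kernel initiator via nvme-cli, then clean up.
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
    --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 \
    --dhchap-secret DHHC-1:01:YTQ5NTBhZWUyZWU0Y2I1NGY4YTllZDU3MjkxODJiZjRSO1eA: \
    --dhchap-ctrl-secret DHHC-1:02:OTc1ZjllNWMwZjY1Y2M2ZmEwMGY4NThjNzUyZmFiNDE0YWNmOWFlNWQ0MWZmYThinZpsJw==:
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
    nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55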
00:20:42.399 00:20:42.399 08:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:42.399 08:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:42.399 08:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:42.657 08:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:42.657 08:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:42.657 08:07:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:42.657 08:07:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.657 08:07:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:42.657 08:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:42.657 { 00:20:42.657 "cntlid": 91, 00:20:42.657 "qid": 0, 00:20:42.657 "state": "enabled", 00:20:42.657 "thread": "nvmf_tgt_poll_group_000", 00:20:42.657 "listen_address": { 00:20:42.657 "trtype": "TCP", 00:20:42.657 "adrfam": "IPv4", 00:20:42.657 "traddr": "10.0.0.2", 00:20:42.657 "trsvcid": "4420" 00:20:42.657 }, 00:20:42.657 "peer_address": { 00:20:42.657 "trtype": "TCP", 00:20:42.657 "adrfam": "IPv4", 00:20:42.657 "traddr": "10.0.0.1", 00:20:42.657 "trsvcid": "38706" 00:20:42.657 }, 00:20:42.657 "auth": { 00:20:42.657 "state": "completed", 00:20:42.657 "digest": "sha384", 00:20:42.657 "dhgroup": "ffdhe8192" 00:20:42.657 } 00:20:42.657 } 00:20:42.657 ]' 00:20:42.657 08:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:42.657 08:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:42.657 08:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:42.657 08:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:42.657 08:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:42.657 08:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:42.657 08:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:42.657 08:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:43.222 08:07:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YTQ5NTBhZWUyZWU0Y2I1NGY4YTllZDU3MjkxODJiZjRSO1eA: --dhchap-ctrl-secret DHHC-1:02:OTc1ZjllNWMwZjY1Y2M2ZmEwMGY4NThjNzUyZmFiNDE0YWNmOWFlNWQ0MWZmYThinZpsJw==: 00:20:44.154 08:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:44.154 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:44.154 08:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:44.154 08:07:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:20:44.154 08:07:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.154 08:07:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.154 08:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:44.154 08:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:44.154 08:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:44.412 08:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:20:44.412 08:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:44.412 08:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:44.412 08:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:44.412 08:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:44.412 08:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:44.412 08:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:44.412 08:07:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.412 08:07:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.412 08:07:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.412 08:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:44.412 08:07:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:45.345 00:20:45.345 08:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:45.345 08:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:45.345 08:07:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:45.604 08:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:45.604 08:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:45.604 08:07:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.604 08:07:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.604 08:07:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.604 08:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:45.604 { 
00:20:45.604 "cntlid": 93, 00:20:45.604 "qid": 0, 00:20:45.604 "state": "enabled", 00:20:45.604 "thread": "nvmf_tgt_poll_group_000", 00:20:45.604 "listen_address": { 00:20:45.604 "trtype": "TCP", 00:20:45.604 "adrfam": "IPv4", 00:20:45.604 "traddr": "10.0.0.2", 00:20:45.604 "trsvcid": "4420" 00:20:45.604 }, 00:20:45.604 "peer_address": { 00:20:45.604 "trtype": "TCP", 00:20:45.604 "adrfam": "IPv4", 00:20:45.604 "traddr": "10.0.0.1", 00:20:45.604 "trsvcid": "38738" 00:20:45.604 }, 00:20:45.604 "auth": { 00:20:45.604 "state": "completed", 00:20:45.604 "digest": "sha384", 00:20:45.604 "dhgroup": "ffdhe8192" 00:20:45.604 } 00:20:45.604 } 00:20:45.604 ]' 00:20:45.604 08:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:45.604 08:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:45.604 08:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:45.604 08:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:45.604 08:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:45.604 08:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:45.604 08:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:45.604 08:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:45.861 08:07:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MzljOGM2YTIwOGQxOWU0N2U1ZTdhMmNmZWU4NzhmNzE3Yjc2ZjAxZDFkMWFjOGUynRyBfA==: --dhchap-ctrl-secret DHHC-1:01:YTQyNDllZGYwOTA2OTgyMmVkMzYwMjQ3MTIzYjNhNmPyVN1I: 00:20:46.793 08:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:46.793 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:46.793 08:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:46.793 08:07:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.793 08:07:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.793 08:07:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.793 08:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:46.793 08:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:46.793 08:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:20:47.051 08:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:20:47.051 08:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:47.051 08:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:47.051 08:07:38 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:47.051 08:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:47.051 08:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:47.051 08:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:47.051 08:07:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.051 08:07:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.051 08:07:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.051 08:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:47.051 08:07:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:47.982 00:20:47.982 08:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:47.982 08:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:47.982 08:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:48.240 08:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:48.240 08:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:48.240 08:07:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.240 08:07:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.240 08:07:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.240 08:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:48.240 { 00:20:48.240 "cntlid": 95, 00:20:48.240 "qid": 0, 00:20:48.240 "state": "enabled", 00:20:48.240 "thread": "nvmf_tgt_poll_group_000", 00:20:48.240 "listen_address": { 00:20:48.240 "trtype": "TCP", 00:20:48.240 "adrfam": "IPv4", 00:20:48.240 "traddr": "10.0.0.2", 00:20:48.240 "trsvcid": "4420" 00:20:48.240 }, 00:20:48.240 "peer_address": { 00:20:48.240 "trtype": "TCP", 00:20:48.240 "adrfam": "IPv4", 00:20:48.240 "traddr": "10.0.0.1", 00:20:48.240 "trsvcid": "33494" 00:20:48.240 }, 00:20:48.240 "auth": { 00:20:48.240 "state": "completed", 00:20:48.240 "digest": "sha384", 00:20:48.240 "dhgroup": "ffdhe8192" 00:20:48.240 } 00:20:48.240 } 00:20:48.240 ]' 00:20:48.240 08:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:48.240 08:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:48.240 08:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:48.240 08:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:48.240 08:07:39 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:48.240 08:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:48.240 08:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:48.240 08:07:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:48.499 08:07:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YThkMmM0Njg5YTNlMWNkZjFmY2E5OTA4ZDNlMTkwYzA0MmVhZmUwYTI0MjA3NTBhNjMyNmMyNzQ0MDYxMzk3OGm+Ivs=: 00:20:49.430 08:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:49.430 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:49.430 08:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:49.430 08:07:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.430 08:07:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.430 08:07:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.430 08:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:20:49.430 08:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:49.430 08:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:49.430 08:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:49.430 08:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:49.688 08:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:20:49.688 08:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:49.688 08:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:49.688 08:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:49.688 08:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:49.688 08:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:49.688 08:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:49.688 08:07:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.688 08:07:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.688 08:07:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:49.688 08:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:49.688 08:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:50.253 00:20:50.253 08:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:50.253 08:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:50.253 08:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:50.253 08:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:50.253 08:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:50.253 08:07:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.253 08:07:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.253 08:07:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.253 08:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:50.253 { 00:20:50.253 "cntlid": 97, 00:20:50.253 "qid": 0, 00:20:50.253 "state": "enabled", 00:20:50.253 "thread": "nvmf_tgt_poll_group_000", 00:20:50.253 "listen_address": { 00:20:50.253 "trtype": "TCP", 00:20:50.253 "adrfam": "IPv4", 00:20:50.253 "traddr": "10.0.0.2", 00:20:50.253 "trsvcid": "4420" 00:20:50.253 }, 00:20:50.253 "peer_address": { 00:20:50.253 "trtype": "TCP", 00:20:50.253 "adrfam": "IPv4", 00:20:50.253 "traddr": "10.0.0.1", 00:20:50.253 "trsvcid": "33514" 00:20:50.253 }, 00:20:50.253 "auth": { 00:20:50.253 "state": "completed", 00:20:50.253 "digest": "sha512", 00:20:50.253 "dhgroup": "null" 00:20:50.253 } 00:20:50.253 } 00:20:50.253 ]' 00:20:50.253 08:07:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:50.510 08:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:50.510 08:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:50.510 08:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:50.510 08:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:50.510 08:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:50.510 08:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:50.510 08:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:50.768 08:07:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:Y2UxMWYyYzEwNjJhZTg5NDQxNWFiZDU2OWU2MTNjMzU5ZjlhMjA2ZjU1ZDU2Yjc3INMyEQ==: --dhchap-ctrl-secret 
DHHC-1:03:MGFiMjAwNDdkNTdjOWQzMjliMmM3NmQwMGNhM2EwYWExYTk4OTZjMDViODJiYWViYTk4M2U3ZmRlMmZlMzM1YVwGEV4=: 00:20:51.721 08:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:51.721 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:51.721 08:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:51.721 08:07:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.721 08:07:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.721 08:07:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.721 08:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:51.721 08:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:51.721 08:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:51.979 08:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:20:51.979 08:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:51.979 08:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:51.979 08:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:51.979 08:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:51.979 08:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:51.979 08:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:51.979 08:07:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:51.979 08:07:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.979 08:07:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:51.979 08:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:51.979 08:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:52.236 00:20:52.236 08:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:52.236 08:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:52.236 08:07:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:52.494 08:07:44 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:52.494 08:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:52.494 08:07:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:52.494 08:07:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.494 08:07:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:52.494 08:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:52.494 { 00:20:52.494 "cntlid": 99, 00:20:52.494 "qid": 0, 00:20:52.494 "state": "enabled", 00:20:52.494 "thread": "nvmf_tgt_poll_group_000", 00:20:52.494 "listen_address": { 00:20:52.494 "trtype": "TCP", 00:20:52.494 "adrfam": "IPv4", 00:20:52.494 "traddr": "10.0.0.2", 00:20:52.494 "trsvcid": "4420" 00:20:52.494 }, 00:20:52.494 "peer_address": { 00:20:52.494 "trtype": "TCP", 00:20:52.494 "adrfam": "IPv4", 00:20:52.494 "traddr": "10.0.0.1", 00:20:52.494 "trsvcid": "33534" 00:20:52.494 }, 00:20:52.494 "auth": { 00:20:52.494 "state": "completed", 00:20:52.494 "digest": "sha512", 00:20:52.494 "dhgroup": "null" 00:20:52.494 } 00:20:52.494 } 00:20:52.494 ]' 00:20:52.494 08:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:52.752 08:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:52.752 08:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:52.752 08:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:52.752 08:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:52.752 08:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:52.752 08:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:52.752 08:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:53.010 08:07:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YTQ5NTBhZWUyZWU0Y2I1NGY4YTllZDU3MjkxODJiZjRSO1eA: --dhchap-ctrl-secret DHHC-1:02:OTc1ZjllNWMwZjY1Y2M2ZmEwMGY4NThjNzUyZmFiNDE0YWNmOWFlNWQ0MWZmYThinZpsJw==: 00:20:53.942 08:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:53.942 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:53.942 08:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:53.942 08:07:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.942 08:07:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.942 08:07:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.942 08:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:53.942 08:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:53.942 08:07:45 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:54.200 08:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:20:54.200 08:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:54.200 08:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:54.200 08:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:54.200 08:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:54.200 08:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:54.200 08:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:54.200 08:07:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.201 08:07:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.458 08:07:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.458 08:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:54.458 08:07:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:54.715 00:20:54.716 08:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:54.716 08:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:54.716 08:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:54.973 08:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:54.973 08:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:54.973 08:07:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.973 08:07:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.973 08:07:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.973 08:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:54.973 { 00:20:54.973 "cntlid": 101, 00:20:54.973 "qid": 0, 00:20:54.973 "state": "enabled", 00:20:54.973 "thread": "nvmf_tgt_poll_group_000", 00:20:54.973 "listen_address": { 00:20:54.973 "trtype": "TCP", 00:20:54.973 "adrfam": "IPv4", 00:20:54.973 "traddr": "10.0.0.2", 00:20:54.973 "trsvcid": "4420" 00:20:54.973 }, 00:20:54.973 "peer_address": { 00:20:54.973 "trtype": "TCP", 00:20:54.973 "adrfam": "IPv4", 00:20:54.973 "traddr": "10.0.0.1", 00:20:54.973 "trsvcid": "33562" 00:20:54.973 }, 00:20:54.973 "auth": 
{ 00:20:54.973 "state": "completed", 00:20:54.973 "digest": "sha512", 00:20:54.973 "dhgroup": "null" 00:20:54.973 } 00:20:54.973 } 00:20:54.973 ]' 00:20:54.973 08:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:54.973 08:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:54.973 08:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:54.973 08:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:54.973 08:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:54.973 08:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:54.973 08:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:54.973 08:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:55.230 08:07:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MzljOGM2YTIwOGQxOWU0N2U1ZTdhMmNmZWU4NzhmNzE3Yjc2ZjAxZDFkMWFjOGUynRyBfA==: --dhchap-ctrl-secret DHHC-1:01:YTQyNDllZGYwOTA2OTgyMmVkMzYwMjQ3MTIzYjNhNmPyVN1I: 00:20:56.162 08:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:56.162 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:56.162 08:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:56.162 08:07:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.162 08:07:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.162 08:07:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.162 08:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:56.162 08:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:56.162 08:07:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:20:56.419 08:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:20:56.419 08:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:56.419 08:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:56.419 08:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:56.419 08:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:56.419 08:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:56.419 08:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:56.419 08:07:48 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.419 08:07:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.419 08:07:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.419 08:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:56.419 08:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:56.984 00:20:56.984 08:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:56.984 08:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:56.984 08:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:56.984 08:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:56.984 08:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:56.984 08:07:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.984 08:07:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.984 08:07:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.984 08:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:56.984 { 00:20:56.984 "cntlid": 103, 00:20:56.984 "qid": 0, 00:20:56.984 "state": "enabled", 00:20:56.984 "thread": "nvmf_tgt_poll_group_000", 00:20:56.984 "listen_address": { 00:20:56.984 "trtype": "TCP", 00:20:56.984 "adrfam": "IPv4", 00:20:56.984 "traddr": "10.0.0.2", 00:20:56.984 "trsvcid": "4420" 00:20:56.984 }, 00:20:56.984 "peer_address": { 00:20:56.984 "trtype": "TCP", 00:20:56.984 "adrfam": "IPv4", 00:20:56.984 "traddr": "10.0.0.1", 00:20:56.984 "trsvcid": "47296" 00:20:56.984 }, 00:20:56.984 "auth": { 00:20:56.984 "state": "completed", 00:20:56.984 "digest": "sha512", 00:20:56.984 "dhgroup": "null" 00:20:56.984 } 00:20:56.984 } 00:20:56.984 ]' 00:20:56.984 08:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:57.241 08:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:57.241 08:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:57.241 08:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:57.241 08:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:57.241 08:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:57.241 08:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:57.241 08:07:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:57.497 08:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect 
-t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YThkMmM0Njg5YTNlMWNkZjFmY2E5OTA4ZDNlMTkwYzA0MmVhZmUwYTI0MjA3NTBhNjMyNmMyNzQ0MDYxMzk3OGm+Ivs=: 00:20:58.428 08:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:58.428 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:58.428 08:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:58.428 08:07:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.428 08:07:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.428 08:07:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.428 08:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:58.428 08:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:58.428 08:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:58.428 08:07:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:58.685 08:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:20:58.685 08:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:58.685 08:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:58.685 08:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:58.685 08:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:58.685 08:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:58.685 08:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:58.685 08:07:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.685 08:07:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.685 08:07:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.685 08:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:58.685 08:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:58.942 00:20:58.942 08:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:58.943 08:07:50 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:58.943 08:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:59.201 08:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:59.201 08:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:59.201 08:07:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:59.201 08:07:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.201 08:07:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:59.201 08:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:59.201 { 00:20:59.201 "cntlid": 105, 00:20:59.201 "qid": 0, 00:20:59.201 "state": "enabled", 00:20:59.201 "thread": "nvmf_tgt_poll_group_000", 00:20:59.201 "listen_address": { 00:20:59.201 "trtype": "TCP", 00:20:59.201 "adrfam": "IPv4", 00:20:59.201 "traddr": "10.0.0.2", 00:20:59.201 "trsvcid": "4420" 00:20:59.201 }, 00:20:59.201 "peer_address": { 00:20:59.201 "trtype": "TCP", 00:20:59.201 "adrfam": "IPv4", 00:20:59.201 "traddr": "10.0.0.1", 00:20:59.201 "trsvcid": "47334" 00:20:59.201 }, 00:20:59.201 "auth": { 00:20:59.201 "state": "completed", 00:20:59.201 "digest": "sha512", 00:20:59.201 "dhgroup": "ffdhe2048" 00:20:59.201 } 00:20:59.201 } 00:20:59.201 ]' 00:20:59.201 08:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:59.201 08:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:59.201 08:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:59.201 08:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:59.201 08:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:59.459 08:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:59.459 08:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:59.459 08:07:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:59.716 08:07:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:Y2UxMWYyYzEwNjJhZTg5NDQxNWFiZDU2OWU2MTNjMzU5ZjlhMjA2ZjU1ZDU2Yjc3INMyEQ==: --dhchap-ctrl-secret DHHC-1:03:MGFiMjAwNDdkNTdjOWQzMjliMmM3NmQwMGNhM2EwYWExYTk4OTZjMDViODJiYWViYTk4M2U3ZmRlMmZlMzM1YVwGEV4=: 00:21:00.647 08:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:00.647 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:00.647 08:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:00.647 08:07:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:00.647 08:07:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:21:00.647 08:07:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:00.647 08:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:00.647 08:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:00.647 08:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:00.924 08:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:21:00.924 08:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:00.924 08:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:00.924 08:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:00.924 08:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:00.924 08:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:00.924 08:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:00.924 08:07:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:00.924 08:07:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.924 08:07:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:00.924 08:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:00.924 08:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:01.194 00:21:01.194 08:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:01.194 08:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:01.194 08:07:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:01.452 08:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:01.452 08:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:01.452 08:07:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.452 08:07:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.452 08:07:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.452 08:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:01.452 { 00:21:01.452 "cntlid": 107, 00:21:01.452 "qid": 0, 00:21:01.452 "state": "enabled", 00:21:01.452 "thread": 
"nvmf_tgt_poll_group_000", 00:21:01.452 "listen_address": { 00:21:01.452 "trtype": "TCP", 00:21:01.452 "adrfam": "IPv4", 00:21:01.452 "traddr": "10.0.0.2", 00:21:01.452 "trsvcid": "4420" 00:21:01.452 }, 00:21:01.452 "peer_address": { 00:21:01.452 "trtype": "TCP", 00:21:01.452 "adrfam": "IPv4", 00:21:01.452 "traddr": "10.0.0.1", 00:21:01.452 "trsvcid": "47370" 00:21:01.452 }, 00:21:01.452 "auth": { 00:21:01.452 "state": "completed", 00:21:01.452 "digest": "sha512", 00:21:01.452 "dhgroup": "ffdhe2048" 00:21:01.452 } 00:21:01.452 } 00:21:01.452 ]' 00:21:01.452 08:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:01.452 08:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:01.452 08:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:01.452 08:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:01.452 08:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:01.710 08:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:01.710 08:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:01.710 08:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:01.968 08:07:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YTQ5NTBhZWUyZWU0Y2I1NGY4YTllZDU3MjkxODJiZjRSO1eA: --dhchap-ctrl-secret DHHC-1:02:OTc1ZjllNWMwZjY1Y2M2ZmEwMGY4NThjNzUyZmFiNDE0YWNmOWFlNWQ0MWZmYThinZpsJw==: 00:21:02.902 08:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:02.902 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:02.902 08:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:02.902 08:07:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.902 08:07:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.902 08:07:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.902 08:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:02.902 08:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:02.902 08:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:03.160 08:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:21:03.160 08:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:03.160 08:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:03.160 08:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:03.160 08:07:54 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:03.160 08:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:03.160 08:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:03.160 08:07:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:03.160 08:07:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.160 08:07:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:03.160 08:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:03.160 08:07:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:03.417 00:21:03.417 08:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:03.417 08:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:03.417 08:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:03.674 08:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:03.675 08:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:03.675 08:07:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:03.675 08:07:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.675 08:07:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:03.675 08:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:03.675 { 00:21:03.675 "cntlid": 109, 00:21:03.675 "qid": 0, 00:21:03.675 "state": "enabled", 00:21:03.675 "thread": "nvmf_tgt_poll_group_000", 00:21:03.675 "listen_address": { 00:21:03.675 "trtype": "TCP", 00:21:03.675 "adrfam": "IPv4", 00:21:03.675 "traddr": "10.0.0.2", 00:21:03.675 "trsvcid": "4420" 00:21:03.675 }, 00:21:03.675 "peer_address": { 00:21:03.675 "trtype": "TCP", 00:21:03.675 "adrfam": "IPv4", 00:21:03.675 "traddr": "10.0.0.1", 00:21:03.675 "trsvcid": "47406" 00:21:03.675 }, 00:21:03.675 "auth": { 00:21:03.675 "state": "completed", 00:21:03.675 "digest": "sha512", 00:21:03.675 "dhgroup": "ffdhe2048" 00:21:03.675 } 00:21:03.675 } 00:21:03.675 ]' 00:21:03.675 08:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:03.932 08:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:03.932 08:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:03.932 08:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:03.932 08:07:55 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:03.932 08:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:03.932 08:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:03.932 08:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:04.190 08:07:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MzljOGM2YTIwOGQxOWU0N2U1ZTdhMmNmZWU4NzhmNzE3Yjc2ZjAxZDFkMWFjOGUynRyBfA==: --dhchap-ctrl-secret DHHC-1:01:YTQyNDllZGYwOTA2OTgyMmVkMzYwMjQ3MTIzYjNhNmPyVN1I: 00:21:05.125 08:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:05.125 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:05.125 08:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:05.125 08:07:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:05.125 08:07:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.125 08:07:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:05.125 08:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:05.125 08:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:05.125 08:07:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:05.383 08:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:21:05.383 08:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:05.383 08:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:05.383 08:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:05.383 08:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:05.383 08:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:05.383 08:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:05.383 08:07:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:05.383 08:07:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.383 08:07:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:05.384 08:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:05.384 08:07:57 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:05.641 00:21:05.641 08:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:05.641 08:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:05.641 08:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:05.899 08:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:05.899 08:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:05.899 08:07:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:05.899 08:07:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.899 08:07:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:05.899 08:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:05.899 { 00:21:05.899 "cntlid": 111, 00:21:05.899 "qid": 0, 00:21:05.899 "state": "enabled", 00:21:05.899 "thread": "nvmf_tgt_poll_group_000", 00:21:05.899 "listen_address": { 00:21:05.899 "trtype": "TCP", 00:21:05.899 "adrfam": "IPv4", 00:21:05.899 "traddr": "10.0.0.2", 00:21:05.899 "trsvcid": "4420" 00:21:05.899 }, 00:21:05.899 "peer_address": { 00:21:05.899 "trtype": "TCP", 00:21:05.899 "adrfam": "IPv4", 00:21:05.899 "traddr": "10.0.0.1", 00:21:05.899 "trsvcid": "47440" 00:21:05.899 }, 00:21:05.899 "auth": { 00:21:05.899 "state": "completed", 00:21:05.899 "digest": "sha512", 00:21:05.899 "dhgroup": "ffdhe2048" 00:21:05.899 } 00:21:05.899 } 00:21:05.899 ]' 00:21:05.899 08:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:06.158 08:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:06.158 08:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:06.158 08:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:06.158 08:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:06.158 08:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:06.158 08:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:06.158 08:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:06.416 08:07:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YThkMmM0Njg5YTNlMWNkZjFmY2E5OTA4ZDNlMTkwYzA0MmVhZmUwYTI0MjA3NTBhNjMyNmMyNzQ0MDYxMzk3OGm+Ivs=: 00:21:07.349 08:07:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:07.349 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:07.349 08:07:58 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:07.349 08:07:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:07.349 08:07:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.349 08:07:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:07.349 08:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:07.349 08:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:07.349 08:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:07.349 08:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:07.607 08:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:21:07.607 08:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:07.607 08:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:07.607 08:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:07.607 08:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:07.607 08:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:07.607 08:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:07.607 08:07:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:07.607 08:07:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.607 08:07:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:07.607 08:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:07.607 08:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:08.173 00:21:08.173 08:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:08.173 08:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:08.173 08:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:08.429 08:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:08.429 08:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:08.429 08:07:59 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:21:08.430 08:07:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.430 08:07:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:08.430 08:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:08.430 { 00:21:08.430 "cntlid": 113, 00:21:08.430 "qid": 0, 00:21:08.430 "state": "enabled", 00:21:08.430 "thread": "nvmf_tgt_poll_group_000", 00:21:08.430 "listen_address": { 00:21:08.430 "trtype": "TCP", 00:21:08.430 "adrfam": "IPv4", 00:21:08.430 "traddr": "10.0.0.2", 00:21:08.430 "trsvcid": "4420" 00:21:08.430 }, 00:21:08.430 "peer_address": { 00:21:08.430 "trtype": "TCP", 00:21:08.430 "adrfam": "IPv4", 00:21:08.430 "traddr": "10.0.0.1", 00:21:08.430 "trsvcid": "57604" 00:21:08.430 }, 00:21:08.430 "auth": { 00:21:08.430 "state": "completed", 00:21:08.430 "digest": "sha512", 00:21:08.430 "dhgroup": "ffdhe3072" 00:21:08.430 } 00:21:08.430 } 00:21:08.430 ]' 00:21:08.430 08:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:08.430 08:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:08.430 08:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:08.430 08:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:08.430 08:07:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:08.430 08:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:08.430 08:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:08.430 08:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:08.686 08:08:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:Y2UxMWYyYzEwNjJhZTg5NDQxNWFiZDU2OWU2MTNjMzU5ZjlhMjA2ZjU1ZDU2Yjc3INMyEQ==: --dhchap-ctrl-secret DHHC-1:03:MGFiMjAwNDdkNTdjOWQzMjliMmM3NmQwMGNhM2EwYWExYTk4OTZjMDViODJiYWViYTk4M2U3ZmRlMmZlMzM1YVwGEV4=: 00:21:09.616 08:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:09.616 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:09.616 08:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:09.616 08:08:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:09.616 08:08:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.616 08:08:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:09.616 08:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:09.616 08:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:09.616 08:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:09.873 08:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:21:09.873 08:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:09.873 08:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:09.873 08:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:09.873 08:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:09.873 08:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:09.873 08:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:09.873 08:08:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:09.873 08:08:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.873 08:08:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:09.873 08:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:09.873 08:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:10.437 00:21:10.437 08:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:10.437 08:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:10.437 08:08:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:10.693 08:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:10.693 08:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:10.693 08:08:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.693 08:08:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.693 08:08:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.693 08:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:10.693 { 00:21:10.693 "cntlid": 115, 00:21:10.693 "qid": 0, 00:21:10.693 "state": "enabled", 00:21:10.693 "thread": "nvmf_tgt_poll_group_000", 00:21:10.693 "listen_address": { 00:21:10.693 "trtype": "TCP", 00:21:10.693 "adrfam": "IPv4", 00:21:10.693 "traddr": "10.0.0.2", 00:21:10.693 "trsvcid": "4420" 00:21:10.693 }, 00:21:10.693 "peer_address": { 00:21:10.693 "trtype": "TCP", 00:21:10.693 "adrfam": "IPv4", 00:21:10.693 "traddr": "10.0.0.1", 00:21:10.693 "trsvcid": "57634" 00:21:10.693 }, 00:21:10.693 "auth": { 00:21:10.693 "state": "completed", 00:21:10.693 "digest": "sha512", 00:21:10.693 "dhgroup": "ffdhe3072" 00:21:10.693 } 00:21:10.693 } 
00:21:10.693 ]' 00:21:10.693 08:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:10.693 08:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:10.693 08:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:10.693 08:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:10.693 08:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:10.693 08:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:10.693 08:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:10.693 08:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:10.950 08:08:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YTQ5NTBhZWUyZWU0Y2I1NGY4YTllZDU3MjkxODJiZjRSO1eA: --dhchap-ctrl-secret DHHC-1:02:OTc1ZjllNWMwZjY1Y2M2ZmEwMGY4NThjNzUyZmFiNDE0YWNmOWFlNWQ0MWZmYThinZpsJw==: 00:21:11.882 08:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:11.882 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:11.882 08:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:11.882 08:08:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:11.882 08:08:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.882 08:08:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:11.882 08:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:11.882 08:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:11.882 08:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:12.140 08:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:21:12.140 08:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:12.140 08:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:12.140 08:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:12.140 08:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:12.140 08:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:12.140 08:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:12.140 08:08:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:12.140 08:08:03 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.140 08:08:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:12.140 08:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:12.140 08:08:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:12.705 00:21:12.705 08:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:12.705 08:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:12.705 08:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:12.961 08:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:12.961 08:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:12.961 08:08:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:12.961 08:08:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.961 08:08:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:12.961 08:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:12.961 { 00:21:12.961 "cntlid": 117, 00:21:12.961 "qid": 0, 00:21:12.961 "state": "enabled", 00:21:12.961 "thread": "nvmf_tgt_poll_group_000", 00:21:12.961 "listen_address": { 00:21:12.961 "trtype": "TCP", 00:21:12.961 "adrfam": "IPv4", 00:21:12.961 "traddr": "10.0.0.2", 00:21:12.961 "trsvcid": "4420" 00:21:12.961 }, 00:21:12.961 "peer_address": { 00:21:12.961 "trtype": "TCP", 00:21:12.961 "adrfam": "IPv4", 00:21:12.961 "traddr": "10.0.0.1", 00:21:12.961 "trsvcid": "57670" 00:21:12.961 }, 00:21:12.961 "auth": { 00:21:12.961 "state": "completed", 00:21:12.961 "digest": "sha512", 00:21:12.961 "dhgroup": "ffdhe3072" 00:21:12.961 } 00:21:12.962 } 00:21:12.962 ]' 00:21:12.962 08:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:12.962 08:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:12.962 08:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:12.962 08:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:12.962 08:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:12.962 08:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:12.962 08:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:12.962 08:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:13.220 08:08:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t 
tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MzljOGM2YTIwOGQxOWU0N2U1ZTdhMmNmZWU4NzhmNzE3Yjc2ZjAxZDFkMWFjOGUynRyBfA==: --dhchap-ctrl-secret DHHC-1:01:YTQyNDllZGYwOTA2OTgyMmVkMzYwMjQ3MTIzYjNhNmPyVN1I: 00:21:14.160 08:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:14.160 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:14.160 08:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:14.160 08:08:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:14.160 08:08:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.160 08:08:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:14.160 08:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:14.160 08:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:14.160 08:08:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:14.725 08:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:21:14.725 08:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:14.725 08:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:14.725 08:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:14.725 08:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:14.725 08:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:14.725 08:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:14.725 08:08:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:14.725 08:08:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.725 08:08:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:14.725 08:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:14.725 08:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:14.983 00:21:14.983 08:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:14.983 08:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:21:14.983 08:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:15.240 08:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:15.240 08:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:15.240 08:08:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:15.240 08:08:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.240 08:08:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:15.240 08:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:15.240 { 00:21:15.240 "cntlid": 119, 00:21:15.240 "qid": 0, 00:21:15.240 "state": "enabled", 00:21:15.240 "thread": "nvmf_tgt_poll_group_000", 00:21:15.240 "listen_address": { 00:21:15.240 "trtype": "TCP", 00:21:15.240 "adrfam": "IPv4", 00:21:15.240 "traddr": "10.0.0.2", 00:21:15.240 "trsvcid": "4420" 00:21:15.240 }, 00:21:15.240 "peer_address": { 00:21:15.240 "trtype": "TCP", 00:21:15.240 "adrfam": "IPv4", 00:21:15.240 "traddr": "10.0.0.1", 00:21:15.240 "trsvcid": "57684" 00:21:15.240 }, 00:21:15.240 "auth": { 00:21:15.240 "state": "completed", 00:21:15.240 "digest": "sha512", 00:21:15.240 "dhgroup": "ffdhe3072" 00:21:15.240 } 00:21:15.240 } 00:21:15.240 ]' 00:21:15.240 08:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:15.240 08:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:15.240 08:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:15.240 08:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:15.240 08:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:15.240 08:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:15.240 08:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:15.240 08:08:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:15.498 08:08:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YThkMmM0Njg5YTNlMWNkZjFmY2E5OTA4ZDNlMTkwYzA0MmVhZmUwYTI0MjA3NTBhNjMyNmMyNzQ0MDYxMzk3OGm+Ivs=: 00:21:16.432 08:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:16.432 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:16.690 08:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:16.690 08:08:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.690 08:08:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.690 08:08:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.690 08:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:16.690 08:08:08 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:16.690 08:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:16.690 08:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:16.947 08:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:21:16.948 08:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:16.948 08:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:16.948 08:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:16.948 08:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:16.948 08:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:16.948 08:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:16.948 08:08:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.948 08:08:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.948 08:08:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.948 08:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:16.948 08:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:17.206 00:21:17.206 08:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:17.206 08:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:17.206 08:08:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:17.463 08:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:17.463 08:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:17.463 08:08:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:17.463 08:08:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.463 08:08:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:17.463 08:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:17.463 { 00:21:17.463 "cntlid": 121, 00:21:17.463 "qid": 0, 00:21:17.463 "state": "enabled", 00:21:17.463 "thread": "nvmf_tgt_poll_group_000", 00:21:17.463 "listen_address": { 00:21:17.463 "trtype": "TCP", 00:21:17.463 "adrfam": "IPv4", 
00:21:17.463 "traddr": "10.0.0.2", 00:21:17.463 "trsvcid": "4420" 00:21:17.463 }, 00:21:17.463 "peer_address": { 00:21:17.463 "trtype": "TCP", 00:21:17.463 "adrfam": "IPv4", 00:21:17.463 "traddr": "10.0.0.1", 00:21:17.463 "trsvcid": "35438" 00:21:17.463 }, 00:21:17.463 "auth": { 00:21:17.463 "state": "completed", 00:21:17.463 "digest": "sha512", 00:21:17.463 "dhgroup": "ffdhe4096" 00:21:17.463 } 00:21:17.463 } 00:21:17.463 ]' 00:21:17.463 08:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:17.463 08:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:17.463 08:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:17.463 08:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:17.463 08:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:17.721 08:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:17.721 08:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:17.721 08:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:17.978 08:08:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:Y2UxMWYyYzEwNjJhZTg5NDQxNWFiZDU2OWU2MTNjMzU5ZjlhMjA2ZjU1ZDU2Yjc3INMyEQ==: --dhchap-ctrl-secret DHHC-1:03:MGFiMjAwNDdkNTdjOWQzMjliMmM3NmQwMGNhM2EwYWExYTk4OTZjMDViODJiYWViYTk4M2U3ZmRlMmZlMzM1YVwGEV4=: 00:21:18.912 08:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:18.912 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:18.912 08:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:18.912 08:08:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:18.912 08:08:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.912 08:08:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:18.912 08:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:18.912 08:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:18.912 08:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:19.170 08:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:21:19.170 08:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:19.170 08:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:19.170 08:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:19.170 08:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:19.170 08:08:10 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:19.170 08:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:19.170 08:08:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:19.170 08:08:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.170 08:08:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:19.170 08:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:19.170 08:08:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:19.735 00:21:19.735 08:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:19.735 08:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:19.735 08:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:19.735 08:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:19.735 08:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:19.735 08:08:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:19.735 08:08:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.992 08:08:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:19.992 08:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:19.992 { 00:21:19.992 "cntlid": 123, 00:21:19.992 "qid": 0, 00:21:19.992 "state": "enabled", 00:21:19.992 "thread": "nvmf_tgt_poll_group_000", 00:21:19.992 "listen_address": { 00:21:19.992 "trtype": "TCP", 00:21:19.992 "adrfam": "IPv4", 00:21:19.992 "traddr": "10.0.0.2", 00:21:19.992 "trsvcid": "4420" 00:21:19.992 }, 00:21:19.992 "peer_address": { 00:21:19.992 "trtype": "TCP", 00:21:19.992 "adrfam": "IPv4", 00:21:19.992 "traddr": "10.0.0.1", 00:21:19.992 "trsvcid": "35470" 00:21:19.992 }, 00:21:19.992 "auth": { 00:21:19.992 "state": "completed", 00:21:19.992 "digest": "sha512", 00:21:19.992 "dhgroup": "ffdhe4096" 00:21:19.992 } 00:21:19.992 } 00:21:19.992 ]' 00:21:19.992 08:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:19.992 08:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:19.992 08:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:19.992 08:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:19.992 08:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:19.992 08:08:11 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:19.992 08:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:19.992 08:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:20.250 08:08:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YTQ5NTBhZWUyZWU0Y2I1NGY4YTllZDU3MjkxODJiZjRSO1eA: --dhchap-ctrl-secret DHHC-1:02:OTc1ZjllNWMwZjY1Y2M2ZmEwMGY4NThjNzUyZmFiNDE0YWNmOWFlNWQ0MWZmYThinZpsJw==: 00:21:21.182 08:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:21.182 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:21.182 08:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:21.182 08:08:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:21.182 08:08:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.182 08:08:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:21.182 08:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:21.182 08:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:21.182 08:08:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:21.440 08:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:21:21.440 08:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:21.440 08:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:21.440 08:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:21.440 08:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:21.440 08:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:21.440 08:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:21.440 08:08:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:21.440 08:08:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.440 08:08:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:21.440 08:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:21.440 08:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:22.006 00:21:22.006 08:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:22.006 08:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:22.006 08:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:22.264 08:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:22.264 08:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:22.264 08:08:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:22.264 08:08:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.264 08:08:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:22.264 08:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:22.264 { 00:21:22.264 "cntlid": 125, 00:21:22.264 "qid": 0, 00:21:22.264 "state": "enabled", 00:21:22.264 "thread": "nvmf_tgt_poll_group_000", 00:21:22.264 "listen_address": { 00:21:22.264 "trtype": "TCP", 00:21:22.264 "adrfam": "IPv4", 00:21:22.264 "traddr": "10.0.0.2", 00:21:22.264 "trsvcid": "4420" 00:21:22.264 }, 00:21:22.264 "peer_address": { 00:21:22.264 "trtype": "TCP", 00:21:22.264 "adrfam": "IPv4", 00:21:22.264 "traddr": "10.0.0.1", 00:21:22.264 "trsvcid": "35492" 00:21:22.264 }, 00:21:22.264 "auth": { 00:21:22.264 "state": "completed", 00:21:22.264 "digest": "sha512", 00:21:22.264 "dhgroup": "ffdhe4096" 00:21:22.264 } 00:21:22.264 } 00:21:22.264 ]' 00:21:22.264 08:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:22.264 08:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:22.264 08:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:22.264 08:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:22.264 08:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:22.264 08:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:22.264 08:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:22.264 08:08:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:22.522 08:08:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MzljOGM2YTIwOGQxOWU0N2U1ZTdhMmNmZWU4NzhmNzE3Yjc2ZjAxZDFkMWFjOGUynRyBfA==: --dhchap-ctrl-secret DHHC-1:01:YTQyNDllZGYwOTA2OTgyMmVkMzYwMjQ3MTIzYjNhNmPyVN1I: 00:21:23.455 08:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:23.714 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:21:23.714 08:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:23.714 08:08:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:23.714 08:08:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.714 08:08:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:23.714 08:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:23.714 08:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:23.714 08:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:23.972 08:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:21:23.972 08:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:23.972 08:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:23.972 08:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:23.972 08:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:23.972 08:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:23.972 08:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:23.972 08:08:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:23.972 08:08:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.972 08:08:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:23.972 08:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:23.972 08:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:24.230 00:21:24.230 08:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:24.230 08:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:24.230 08:08:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:24.487 08:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:24.487 08:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:24.487 08:08:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:24.487 08:08:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:21:24.487 08:08:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:24.487 08:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:24.487 { 00:21:24.487 "cntlid": 127, 00:21:24.487 "qid": 0, 00:21:24.487 "state": "enabled", 00:21:24.487 "thread": "nvmf_tgt_poll_group_000", 00:21:24.487 "listen_address": { 00:21:24.487 "trtype": "TCP", 00:21:24.487 "adrfam": "IPv4", 00:21:24.487 "traddr": "10.0.0.2", 00:21:24.487 "trsvcid": "4420" 00:21:24.487 }, 00:21:24.487 "peer_address": { 00:21:24.487 "trtype": "TCP", 00:21:24.487 "adrfam": "IPv4", 00:21:24.487 "traddr": "10.0.0.1", 00:21:24.487 "trsvcid": "35514" 00:21:24.487 }, 00:21:24.487 "auth": { 00:21:24.487 "state": "completed", 00:21:24.487 "digest": "sha512", 00:21:24.487 "dhgroup": "ffdhe4096" 00:21:24.487 } 00:21:24.487 } 00:21:24.487 ]' 00:21:24.487 08:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:24.487 08:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:24.487 08:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:24.487 08:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:24.487 08:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:24.487 08:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:24.487 08:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:24.487 08:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:25.052 08:08:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YThkMmM0Njg5YTNlMWNkZjFmY2E5OTA4ZDNlMTkwYzA0MmVhZmUwYTI0MjA3NTBhNjMyNmMyNzQ0MDYxMzk3OGm+Ivs=: 00:21:26.001 08:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:26.001 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:26.001 08:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:26.001 08:08:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:26.001 08:08:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.001 08:08:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:26.001 08:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:26.001 08:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:26.001 08:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:26.001 08:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:26.273 08:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe6144 0 00:21:26.273 08:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:26.273 08:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:26.273 08:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:26.273 08:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:26.273 08:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:26.273 08:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:26.273 08:08:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:26.273 08:08:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.273 08:08:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:26.273 08:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:26.273 08:08:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:26.838 00:21:26.838 08:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:26.838 08:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:26.838 08:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:27.094 08:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:27.094 08:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:27.094 08:08:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:27.094 08:08:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.094 08:08:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:27.094 08:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:27.094 { 00:21:27.094 "cntlid": 129, 00:21:27.094 "qid": 0, 00:21:27.094 "state": "enabled", 00:21:27.094 "thread": "nvmf_tgt_poll_group_000", 00:21:27.094 "listen_address": { 00:21:27.094 "trtype": "TCP", 00:21:27.094 "adrfam": "IPv4", 00:21:27.094 "traddr": "10.0.0.2", 00:21:27.094 "trsvcid": "4420" 00:21:27.094 }, 00:21:27.094 "peer_address": { 00:21:27.094 "trtype": "TCP", 00:21:27.094 "adrfam": "IPv4", 00:21:27.094 "traddr": "10.0.0.1", 00:21:27.094 "trsvcid": "34528" 00:21:27.094 }, 00:21:27.094 "auth": { 00:21:27.094 "state": "completed", 00:21:27.094 "digest": "sha512", 00:21:27.094 "dhgroup": "ffdhe6144" 00:21:27.094 } 00:21:27.094 } 00:21:27.094 ]' 00:21:27.094 08:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:27.094 08:08:18 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:27.094 08:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:27.094 08:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:27.094 08:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:27.094 08:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:27.094 08:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:27.094 08:08:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:27.351 08:08:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:Y2UxMWYyYzEwNjJhZTg5NDQxNWFiZDU2OWU2MTNjMzU5ZjlhMjA2ZjU1ZDU2Yjc3INMyEQ==: --dhchap-ctrl-secret DHHC-1:03:MGFiMjAwNDdkNTdjOWQzMjliMmM3NmQwMGNhM2EwYWExYTk4OTZjMDViODJiYWViYTk4M2U3ZmRlMmZlMzM1YVwGEV4=: 00:21:28.301 08:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:28.301 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:28.301 08:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:28.301 08:08:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:28.301 08:08:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.559 08:08:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:28.559 08:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:28.559 08:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:28.559 08:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:28.817 08:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:21:28.817 08:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:28.817 08:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:28.817 08:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:28.817 08:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:28.817 08:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:28.817 08:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:28.817 08:08:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:28.817 08:08:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.817 08:08:20 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:28.817 08:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:28.817 08:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:29.383 00:21:29.383 08:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:29.383 08:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:29.383 08:08:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:29.641 08:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:29.641 08:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:29.641 08:08:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:29.641 08:08:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.641 08:08:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:29.641 08:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:29.641 { 00:21:29.641 "cntlid": 131, 00:21:29.641 "qid": 0, 00:21:29.641 "state": "enabled", 00:21:29.641 "thread": "nvmf_tgt_poll_group_000", 00:21:29.641 "listen_address": { 00:21:29.641 "trtype": "TCP", 00:21:29.641 "adrfam": "IPv4", 00:21:29.641 "traddr": "10.0.0.2", 00:21:29.641 "trsvcid": "4420" 00:21:29.641 }, 00:21:29.641 "peer_address": { 00:21:29.641 "trtype": "TCP", 00:21:29.641 "adrfam": "IPv4", 00:21:29.641 "traddr": "10.0.0.1", 00:21:29.641 "trsvcid": "34566" 00:21:29.641 }, 00:21:29.641 "auth": { 00:21:29.641 "state": "completed", 00:21:29.641 "digest": "sha512", 00:21:29.641 "dhgroup": "ffdhe6144" 00:21:29.641 } 00:21:29.641 } 00:21:29.641 ]' 00:21:29.641 08:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:29.641 08:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:29.641 08:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:29.641 08:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:29.641 08:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:29.641 08:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:29.641 08:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:29.641 08:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:29.899 08:08:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YTQ5NTBhZWUyZWU0Y2I1NGY4YTllZDU3MjkxODJiZjRSO1eA: --dhchap-ctrl-secret DHHC-1:02:OTc1ZjllNWMwZjY1Y2M2ZmEwMGY4NThjNzUyZmFiNDE0YWNmOWFlNWQ0MWZmYThinZpsJw==: 00:21:30.832 08:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:30.832 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:30.832 08:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:30.832 08:08:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:30.832 08:08:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.832 08:08:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:30.832 08:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:30.832 08:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:30.832 08:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:31.090 08:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:21:31.090 08:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:31.090 08:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:31.090 08:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:31.090 08:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:31.090 08:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:31.090 08:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:31.090 08:08:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.090 08:08:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.090 08:08:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.090 08:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:31.090 08:08:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:31.668 00:21:31.668 08:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:31.668 08:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:31.668 08:08:23 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:31.924 08:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:31.925 08:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:31.925 08:08:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.925 08:08:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.925 08:08:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.925 08:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:31.925 { 00:21:31.925 "cntlid": 133, 00:21:31.925 "qid": 0, 00:21:31.925 "state": "enabled", 00:21:31.925 "thread": "nvmf_tgt_poll_group_000", 00:21:31.925 "listen_address": { 00:21:31.925 "trtype": "TCP", 00:21:31.925 "adrfam": "IPv4", 00:21:31.925 "traddr": "10.0.0.2", 00:21:31.925 "trsvcid": "4420" 00:21:31.925 }, 00:21:31.925 "peer_address": { 00:21:31.925 "trtype": "TCP", 00:21:31.925 "adrfam": "IPv4", 00:21:31.925 "traddr": "10.0.0.1", 00:21:31.925 "trsvcid": "34592" 00:21:31.925 }, 00:21:31.925 "auth": { 00:21:31.925 "state": "completed", 00:21:31.925 "digest": "sha512", 00:21:31.925 "dhgroup": "ffdhe6144" 00:21:31.925 } 00:21:31.925 } 00:21:31.925 ]' 00:21:31.925 08:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:32.183 08:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:32.183 08:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:32.183 08:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:32.183 08:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:32.183 08:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:32.183 08:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:32.183 08:08:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:32.441 08:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MzljOGM2YTIwOGQxOWU0N2U1ZTdhMmNmZWU4NzhmNzE3Yjc2ZjAxZDFkMWFjOGUynRyBfA==: --dhchap-ctrl-secret DHHC-1:01:YTQyNDllZGYwOTA2OTgyMmVkMzYwMjQ3MTIzYjNhNmPyVN1I: 00:21:33.374 08:08:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:33.374 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:33.374 08:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:33.374 08:08:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:33.374 08:08:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.374 08:08:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:33.374 08:08:25 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:33.374 08:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:33.374 08:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:33.632 08:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:21:33.632 08:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:33.632 08:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:33.632 08:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:33.632 08:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:33.632 08:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:33.632 08:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:33.632 08:08:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:33.632 08:08:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.632 08:08:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:33.632 08:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:33.632 08:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:34.197 00:21:34.197 08:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:34.197 08:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:34.197 08:08:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:34.455 08:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:34.455 08:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:34.455 08:08:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:34.455 08:08:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.455 08:08:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:34.455 08:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:34.455 { 00:21:34.455 "cntlid": 135, 00:21:34.455 "qid": 0, 00:21:34.455 "state": "enabled", 00:21:34.455 "thread": "nvmf_tgt_poll_group_000", 00:21:34.455 "listen_address": { 00:21:34.455 "trtype": "TCP", 00:21:34.455 "adrfam": "IPv4", 00:21:34.455 "traddr": "10.0.0.2", 00:21:34.455 "trsvcid": "4420" 00:21:34.455 }, 
00:21:34.455 "peer_address": { 00:21:34.455 "trtype": "TCP", 00:21:34.455 "adrfam": "IPv4", 00:21:34.455 "traddr": "10.0.0.1", 00:21:34.455 "trsvcid": "34604" 00:21:34.455 }, 00:21:34.455 "auth": { 00:21:34.455 "state": "completed", 00:21:34.455 "digest": "sha512", 00:21:34.455 "dhgroup": "ffdhe6144" 00:21:34.455 } 00:21:34.455 } 00:21:34.455 ]' 00:21:34.455 08:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:34.455 08:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:34.455 08:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:34.455 08:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:34.455 08:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:34.455 08:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:34.455 08:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:34.455 08:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:34.713 08:08:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YThkMmM0Njg5YTNlMWNkZjFmY2E5OTA4ZDNlMTkwYzA0MmVhZmUwYTI0MjA3NTBhNjMyNmMyNzQ0MDYxMzk3OGm+Ivs=: 00:21:36.088 08:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:36.088 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:36.088 08:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:36.088 08:08:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:36.088 08:08:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.088 08:08:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:36.088 08:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:36.088 08:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:36.088 08:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:36.088 08:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:36.088 08:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:21:36.088 08:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:36.088 08:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:36.088 08:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:36.088 08:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:36.088 08:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:21:36.088 08:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:36.088 08:08:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:36.088 08:08:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.088 08:08:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:36.088 08:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:36.088 08:08:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:37.021 00:21:37.021 08:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:37.021 08:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:37.021 08:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:37.280 08:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:37.280 08:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:37.280 08:08:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:37.280 08:08:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.280 08:08:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:37.280 08:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:37.280 { 00:21:37.280 "cntlid": 137, 00:21:37.280 "qid": 0, 00:21:37.280 "state": "enabled", 00:21:37.280 "thread": "nvmf_tgt_poll_group_000", 00:21:37.280 "listen_address": { 00:21:37.280 "trtype": "TCP", 00:21:37.280 "adrfam": "IPv4", 00:21:37.280 "traddr": "10.0.0.2", 00:21:37.280 "trsvcid": "4420" 00:21:37.280 }, 00:21:37.280 "peer_address": { 00:21:37.280 "trtype": "TCP", 00:21:37.280 "adrfam": "IPv4", 00:21:37.280 "traddr": "10.0.0.1", 00:21:37.280 "trsvcid": "38340" 00:21:37.280 }, 00:21:37.280 "auth": { 00:21:37.280 "state": "completed", 00:21:37.280 "digest": "sha512", 00:21:37.280 "dhgroup": "ffdhe8192" 00:21:37.280 } 00:21:37.280 } 00:21:37.280 ]' 00:21:37.280 08:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:37.280 08:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:37.280 08:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:37.280 08:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:37.280 08:08:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:37.538 08:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:37.538 08:08:29 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:37.538 08:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:37.538 08:08:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:Y2UxMWYyYzEwNjJhZTg5NDQxNWFiZDU2OWU2MTNjMzU5ZjlhMjA2ZjU1ZDU2Yjc3INMyEQ==: --dhchap-ctrl-secret DHHC-1:03:MGFiMjAwNDdkNTdjOWQzMjliMmM3NmQwMGNhM2EwYWExYTk4OTZjMDViODJiYWViYTk4M2U3ZmRlMmZlMzM1YVwGEV4=: 00:21:38.471 08:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:38.471 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:38.471 08:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:38.471 08:08:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:38.471 08:08:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.471 08:08:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:38.471 08:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:38.471 08:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:38.471 08:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:39.041 08:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:21:39.041 08:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:39.041 08:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:39.041 08:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:39.041 08:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:39.041 08:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:39.041 08:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:39.041 08:08:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.041 08:08:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.041 08:08:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.041 08:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:39.041 08:08:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:39.973 00:21:39.973 08:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:39.973 08:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:39.973 08:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:39.973 08:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:39.973 08:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:39.973 08:08:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.973 08:08:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.973 08:08:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.973 08:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:39.973 { 00:21:39.973 "cntlid": 139, 00:21:39.973 "qid": 0, 00:21:39.973 "state": "enabled", 00:21:39.973 "thread": "nvmf_tgt_poll_group_000", 00:21:39.973 "listen_address": { 00:21:39.973 "trtype": "TCP", 00:21:39.973 "adrfam": "IPv4", 00:21:39.973 "traddr": "10.0.0.2", 00:21:39.973 "trsvcid": "4420" 00:21:39.973 }, 00:21:39.973 "peer_address": { 00:21:39.973 "trtype": "TCP", 00:21:39.973 "adrfam": "IPv4", 00:21:39.973 "traddr": "10.0.0.1", 00:21:39.973 "trsvcid": "38364" 00:21:39.973 }, 00:21:39.973 "auth": { 00:21:39.973 "state": "completed", 00:21:39.973 "digest": "sha512", 00:21:39.973 "dhgroup": "ffdhe8192" 00:21:39.973 } 00:21:39.973 } 00:21:39.973 ]' 00:21:39.973 08:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:39.973 08:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:39.973 08:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:40.231 08:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:40.231 08:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:40.231 08:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:40.231 08:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:40.231 08:08:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:40.489 08:08:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YTQ5NTBhZWUyZWU0Y2I1NGY4YTllZDU3MjkxODJiZjRSO1eA: --dhchap-ctrl-secret DHHC-1:02:OTc1ZjllNWMwZjY1Y2M2ZmEwMGY4NThjNzUyZmFiNDE0YWNmOWFlNWQ0MWZmYThinZpsJw==: 00:21:41.459 08:08:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:41.459 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:41.459 08:08:33 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:41.459 08:08:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:41.459 08:08:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.459 08:08:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:41.459 08:08:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:41.459 08:08:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:41.459 08:08:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:41.716 08:08:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:21:41.716 08:08:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:41.716 08:08:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:41.716 08:08:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:41.716 08:08:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:41.716 08:08:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:41.716 08:08:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:41.716 08:08:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:41.716 08:08:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.716 08:08:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:41.716 08:08:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:41.716 08:08:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:42.649 00:21:42.649 08:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:42.649 08:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:42.649 08:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:42.907 08:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:42.907 08:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:42.907 08:08:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:42.907 08:08:34 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:21:42.907 08:08:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:42.907 08:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:42.907 { 00:21:42.907 "cntlid": 141, 00:21:42.907 "qid": 0, 00:21:42.907 "state": "enabled", 00:21:42.907 "thread": "nvmf_tgt_poll_group_000", 00:21:42.907 "listen_address": { 00:21:42.907 "trtype": "TCP", 00:21:42.907 "adrfam": "IPv4", 00:21:42.907 "traddr": "10.0.0.2", 00:21:42.907 "trsvcid": "4420" 00:21:42.907 }, 00:21:42.907 "peer_address": { 00:21:42.907 "trtype": "TCP", 00:21:42.907 "adrfam": "IPv4", 00:21:42.907 "traddr": "10.0.0.1", 00:21:42.907 "trsvcid": "38386" 00:21:42.907 }, 00:21:42.907 "auth": { 00:21:42.907 "state": "completed", 00:21:42.907 "digest": "sha512", 00:21:42.907 "dhgroup": "ffdhe8192" 00:21:42.907 } 00:21:42.907 } 00:21:42.907 ]' 00:21:42.907 08:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:42.907 08:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:42.907 08:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:42.907 08:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:42.907 08:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:43.164 08:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:43.164 08:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:43.164 08:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:43.422 08:08:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:MzljOGM2YTIwOGQxOWU0N2U1ZTdhMmNmZWU4NzhmNzE3Yjc2ZjAxZDFkMWFjOGUynRyBfA==: --dhchap-ctrl-secret DHHC-1:01:YTQyNDllZGYwOTA2OTgyMmVkMzYwMjQ3MTIzYjNhNmPyVN1I: 00:21:44.367 08:08:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:44.367 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:44.367 08:08:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:44.367 08:08:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.367 08:08:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.367 08:08:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.367 08:08:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:44.367 08:08:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:44.367 08:08:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:44.641 08:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate 
sha512 ffdhe8192 3 00:21:44.641 08:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:44.641 08:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:44.641 08:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:44.641 08:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:44.641 08:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:44.641 08:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:44.641 08:08:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.641 08:08:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.641 08:08:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.641 08:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:44.641 08:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:45.573 00:21:45.573 08:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:45.573 08:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:45.573 08:08:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:45.573 08:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:45.573 08:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:45.573 08:08:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.573 08:08:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.573 08:08:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.573 08:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:45.573 { 00:21:45.573 "cntlid": 143, 00:21:45.573 "qid": 0, 00:21:45.573 "state": "enabled", 00:21:45.573 "thread": "nvmf_tgt_poll_group_000", 00:21:45.573 "listen_address": { 00:21:45.573 "trtype": "TCP", 00:21:45.573 "adrfam": "IPv4", 00:21:45.573 "traddr": "10.0.0.2", 00:21:45.573 "trsvcid": "4420" 00:21:45.573 }, 00:21:45.573 "peer_address": { 00:21:45.573 "trtype": "TCP", 00:21:45.573 "adrfam": "IPv4", 00:21:45.573 "traddr": "10.0.0.1", 00:21:45.573 "trsvcid": "38428" 00:21:45.573 }, 00:21:45.573 "auth": { 00:21:45.573 "state": "completed", 00:21:45.573 "digest": "sha512", 00:21:45.573 "dhgroup": "ffdhe8192" 00:21:45.573 } 00:21:45.573 } 00:21:45.573 ]' 00:21:45.573 08:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:45.573 08:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:45.573 
08:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:45.831 08:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:45.831 08:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:45.831 08:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:45.831 08:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:45.831 08:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:46.089 08:08:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YThkMmM0Njg5YTNlMWNkZjFmY2E5OTA4ZDNlMTkwYzA0MmVhZmUwYTI0MjA3NTBhNjMyNmMyNzQ0MDYxMzk3OGm+Ivs=: 00:21:47.021 08:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:47.021 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:47.021 08:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:47.021 08:08:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.021 08:08:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.021 08:08:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.021 08:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:21:47.021 08:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:21:47.021 08:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:21:47.021 08:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:47.021 08:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:47.021 08:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:47.279 08:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:21:47.279 08:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:47.279 08:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:47.279 08:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:47.279 08:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:47.279 08:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:47.279 08:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:21:47.279 08:08:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.279 08:08:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.279 08:08:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.279 08:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:47.279 08:08:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:48.210 00:21:48.210 08:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:48.210 08:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:48.210 08:08:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:48.467 08:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:48.467 08:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:48.467 08:08:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.467 08:08:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.467 08:08:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.467 08:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:48.467 { 00:21:48.467 "cntlid": 145, 00:21:48.467 "qid": 0, 00:21:48.467 "state": "enabled", 00:21:48.467 "thread": "nvmf_tgt_poll_group_000", 00:21:48.467 "listen_address": { 00:21:48.467 "trtype": "TCP", 00:21:48.467 "adrfam": "IPv4", 00:21:48.467 "traddr": "10.0.0.2", 00:21:48.467 "trsvcid": "4420" 00:21:48.467 }, 00:21:48.467 "peer_address": { 00:21:48.467 "trtype": "TCP", 00:21:48.467 "adrfam": "IPv4", 00:21:48.467 "traddr": "10.0.0.1", 00:21:48.467 "trsvcid": "46136" 00:21:48.467 }, 00:21:48.467 "auth": { 00:21:48.467 "state": "completed", 00:21:48.467 "digest": "sha512", 00:21:48.467 "dhgroup": "ffdhe8192" 00:21:48.467 } 00:21:48.467 } 00:21:48.467 ]' 00:21:48.467 08:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:48.467 08:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:48.467 08:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:48.467 08:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:48.467 08:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:48.467 08:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:48.467 08:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:48.468 08:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:48.724 08:08:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:Y2UxMWYyYzEwNjJhZTg5NDQxNWFiZDU2OWU2MTNjMzU5ZjlhMjA2ZjU1ZDU2Yjc3INMyEQ==: --dhchap-ctrl-secret DHHC-1:03:MGFiMjAwNDdkNTdjOWQzMjliMmM3NmQwMGNhM2EwYWExYTk4OTZjMDViODJiYWViYTk4M2U3ZmRlMmZlMzM1YVwGEV4=: 00:21:49.655 08:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:49.655 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:49.655 08:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:49.655 08:08:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.655 08:08:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.655 08:08:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.655 08:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:21:49.655 08:08:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.655 08:08:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.961 08:08:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.961 08:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:49.961 08:08:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:21:49.961 08:08:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:49.961 08:08:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:21:49.961 08:08:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:49.961 08:08:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:21:49.961 08:08:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:49.961 08:08:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:21:49.961 08:08:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key 
key2 00:21:50.527 request: 00:21:50.527 { 00:21:50.527 "name": "nvme0", 00:21:50.527 "trtype": "tcp", 00:21:50.527 "traddr": "10.0.0.2", 00:21:50.527 "adrfam": "ipv4", 00:21:50.527 "trsvcid": "4420", 00:21:50.527 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:50.527 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:50.527 "prchk_reftag": false, 00:21:50.527 "prchk_guard": false, 00:21:50.527 "hdgst": false, 00:21:50.527 "ddgst": false, 00:21:50.527 "dhchap_key": "key2", 00:21:50.527 "method": "bdev_nvme_attach_controller", 00:21:50.527 "req_id": 1 00:21:50.527 } 00:21:50.527 Got JSON-RPC error response 00:21:50.527 response: 00:21:50.527 { 00:21:50.527 "code": -5, 00:21:50.527 "message": "Input/output error" 00:21:50.527 } 00:21:50.527 08:08:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:21:50.527 08:08:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:50.527 08:08:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:50.527 08:08:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:50.527 08:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:50.527 08:08:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.527 08:08:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.527 08:08:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.527 08:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:50.527 08:08:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:50.527 08:08:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.527 08:08:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:50.527 08:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:50.527 08:08:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:21:50.527 08:08:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:50.527 08:08:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:21:50.527 08:08:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:50.527 08:08:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:21:50.527 08:08:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:50.527 08:08:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:50.527 08:08:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:51.460 request: 00:21:51.460 { 00:21:51.460 "name": "nvme0", 00:21:51.460 "trtype": "tcp", 00:21:51.460 "traddr": "10.0.0.2", 00:21:51.460 "adrfam": "ipv4", 00:21:51.460 "trsvcid": "4420", 00:21:51.460 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:51.460 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:51.460 "prchk_reftag": false, 00:21:51.460 "prchk_guard": false, 00:21:51.460 "hdgst": false, 00:21:51.460 "ddgst": false, 00:21:51.460 "dhchap_key": "key1", 00:21:51.460 "dhchap_ctrlr_key": "ckey2", 00:21:51.460 "method": "bdev_nvme_attach_controller", 00:21:51.460 "req_id": 1 00:21:51.460 } 00:21:51.460 Got JSON-RPC error response 00:21:51.460 response: 00:21:51.460 { 00:21:51.460 "code": -5, 00:21:51.460 "message": "Input/output error" 00:21:51.460 } 00:21:51.460 08:08:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:21:51.460 08:08:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:51.460 08:08:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:51.460 08:08:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:51.460 08:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:51.460 08:08:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:51.460 08:08:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.460 08:08:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:51.460 08:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:21:51.460 08:08:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:51.460 08:08:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.460 08:08:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:51.460 08:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:51.460 08:08:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:21:51.460 08:08:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:51.460 08:08:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local 
arg=hostrpc 00:21:51.460 08:08:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:51.460 08:08:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:21:51.460 08:08:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:51.460 08:08:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:51.460 08:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:52.393 request: 00:21:52.393 { 00:21:52.393 "name": "nvme0", 00:21:52.393 "trtype": "tcp", 00:21:52.393 "traddr": "10.0.0.2", 00:21:52.393 "adrfam": "ipv4", 00:21:52.393 "trsvcid": "4420", 00:21:52.393 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:52.393 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:52.393 "prchk_reftag": false, 00:21:52.393 "prchk_guard": false, 00:21:52.393 "hdgst": false, 00:21:52.393 "ddgst": false, 00:21:52.393 "dhchap_key": "key1", 00:21:52.393 "dhchap_ctrlr_key": "ckey1", 00:21:52.393 "method": "bdev_nvme_attach_controller", 00:21:52.393 "req_id": 1 00:21:52.393 } 00:21:52.393 Got JSON-RPC error response 00:21:52.393 response: 00:21:52.393 { 00:21:52.393 "code": -5, 00:21:52.393 "message": "Input/output error" 00:21:52.393 } 00:21:52.393 08:08:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:21:52.393 08:08:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:52.393 08:08:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:52.393 08:08:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:52.393 08:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:52.393 08:08:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.393 08:08:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.393 08:08:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.393 08:08:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 1964658 00:21:52.393 08:08:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 1964658 ']' 00:21:52.393 08:08:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 1964658 00:21:52.393 08:08:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:21:52.393 08:08:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:52.393 08:08:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1964658 00:21:52.393 08:08:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:52.393 08:08:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' 
reactor_0 = sudo ']' 00:21:52.393 08:08:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1964658' 00:21:52.393 killing process with pid 1964658 00:21:52.393 08:08:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 1964658 00:21:52.393 08:08:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 1964658 00:21:52.651 08:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:21:52.651 08:08:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:52.651 08:08:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:52.651 08:08:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.651 08:08:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=1987158 00:21:52.651 08:08:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:21:52.651 08:08:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 1987158 00:21:52.651 08:08:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1987158 ']' 00:21:52.651 08:08:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:52.651 08:08:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:52.651 08:08:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:52.651 08:08:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:52.651 08:08:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.910 08:08:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:52.910 08:08:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:21:52.910 08:08:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:52.910 08:08:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:52.910 08:08:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.910 08:08:44 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:52.910 08:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:21:52.910 08:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 1987158 00:21:52.910 08:08:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1987158 ']' 00:21:52.910 08:08:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:52.910 08:08:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:52.910 08:08:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:52.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
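The trace above restarts nvmf_tgt with -L nvmf_auth debug logging and then polls until its RPC socket answers. A minimal sketch of that start-and-wait pattern, assuming a local SPDK checkout at SPDK_DIR and the default /var/tmp/spdk.sock socket (both placeholders, not the Jenkins paths used in this run):

    # Start the target in pre-init mode with DH-CHAP auth logging enabled.
    SPDK_DIR=/path/to/spdk    # assumption: adjust to your checkout
    "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
    nvmfpid=$!
    # Poll the RPC socket until the app answers, as waitforlisten does above.
    until "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done
    # With --wait-for-rpc the app stays in pre-init until told to finish startup.
    "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock framework_start_init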
00:21:52.910 08:08:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:52.910 08:08:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.168 08:08:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:53.168 08:08:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:21:53.168 08:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:21:53.168 08:08:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:53.168 08:08:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.168 08:08:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:53.168 08:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:21:53.168 08:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:53.168 08:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:53.168 08:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:53.168 08:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:53.168 08:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:53.168 08:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:53.168 08:08:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:53.168 08:08:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.168 08:08:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:53.168 08:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:53.168 08:08:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:54.101 00:21:54.101 08:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:54.101 08:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:54.101 08:08:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:54.359 08:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:54.359 08:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:54.359 08:08:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:54.359 08:08:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.359 08:08:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:54.359 08:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:54.359 { 00:21:54.359 
"cntlid": 1, 00:21:54.359 "qid": 0, 00:21:54.359 "state": "enabled", 00:21:54.359 "thread": "nvmf_tgt_poll_group_000", 00:21:54.359 "listen_address": { 00:21:54.359 "trtype": "TCP", 00:21:54.359 "adrfam": "IPv4", 00:21:54.359 "traddr": "10.0.0.2", 00:21:54.359 "trsvcid": "4420" 00:21:54.359 }, 00:21:54.359 "peer_address": { 00:21:54.359 "trtype": "TCP", 00:21:54.359 "adrfam": "IPv4", 00:21:54.359 "traddr": "10.0.0.1", 00:21:54.359 "trsvcid": "46180" 00:21:54.359 }, 00:21:54.359 "auth": { 00:21:54.359 "state": "completed", 00:21:54.359 "digest": "sha512", 00:21:54.359 "dhgroup": "ffdhe8192" 00:21:54.359 } 00:21:54.359 } 00:21:54.359 ]' 00:21:54.359 08:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:54.359 08:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:54.359 08:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:54.617 08:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:54.617 08:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:54.617 08:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:54.617 08:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:54.617 08:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:54.875 08:08:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:YThkMmM0Njg5YTNlMWNkZjFmY2E5OTA4ZDNlMTkwYzA0MmVhZmUwYTI0MjA3NTBhNjMyNmMyNzQ0MDYxMzk3OGm+Ivs=: 00:21:55.808 08:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:55.808 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:55.808 08:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:55.808 08:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:55.808 08:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.808 08:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:55.808 08:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:55.808 08:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:55.808 08:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.808 08:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:55.808 08:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:21:55.808 08:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:21:56.066 08:08:47 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:56.066 08:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:21:56.066 08:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:56.066 08:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:21:56.066 08:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:56.066 08:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:21:56.066 08:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:56.066 08:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:56.066 08:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:56.324 request: 00:21:56.324 { 00:21:56.324 "name": "nvme0", 00:21:56.324 "trtype": "tcp", 00:21:56.324 "traddr": "10.0.0.2", 00:21:56.324 "adrfam": "ipv4", 00:21:56.324 "trsvcid": "4420", 00:21:56.324 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:56.324 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:56.324 "prchk_reftag": false, 00:21:56.324 "prchk_guard": false, 00:21:56.324 "hdgst": false, 00:21:56.324 "ddgst": false, 00:21:56.324 "dhchap_key": "key3", 00:21:56.324 "method": "bdev_nvme_attach_controller", 00:21:56.324 "req_id": 1 00:21:56.324 } 00:21:56.324 Got JSON-RPC error response 00:21:56.324 response: 00:21:56.324 { 00:21:56.324 "code": -5, 00:21:56.324 "message": "Input/output error" 00:21:56.324 } 00:21:56.324 08:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:21:56.324 08:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:56.324 08:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:56.324 08:08:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:56.324 08:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:21:56.324 08:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:21:56.324 08:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:21:56.324 08:08:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:21:56.582 08:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:56.582 08:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:21:56.582 08:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:56.582 08:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:21:56.582 08:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:56.582 08:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:21:56.582 08:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:56.582 08:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:56.582 08:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:56.840 request: 00:21:56.840 { 00:21:56.840 "name": "nvme0", 00:21:56.840 "trtype": "tcp", 00:21:56.840 "traddr": "10.0.0.2", 00:21:56.840 "adrfam": "ipv4", 00:21:56.840 "trsvcid": "4420", 00:21:56.840 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:56.840 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:56.840 "prchk_reftag": false, 00:21:56.840 "prchk_guard": false, 00:21:56.840 "hdgst": false, 00:21:56.840 "ddgst": false, 00:21:56.840 "dhchap_key": "key3", 00:21:56.840 "method": "bdev_nvme_attach_controller", 00:21:56.840 "req_id": 1 00:21:56.840 } 00:21:56.840 Got JSON-RPC error response 00:21:56.840 response: 00:21:56.840 { 00:21:56.840 "code": -5, 00:21:56.840 "message": "Input/output error" 00:21:56.840 } 00:21:56.840 08:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:21:56.840 08:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:56.840 08:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:56.840 08:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:56.840 08:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:21:56.840 08:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:21:56.840 08:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:21:56.840 08:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:56.840 08:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:56.840 08:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:57.098 08:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:57.098 08:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.098 08:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.098 08:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.098 08:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:57.098 08:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.098 08:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.098 08:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.098 08:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:57.098 08:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:21:57.098 08:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:57.098 08:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:21:57.098 08:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:57.098 08:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:21:57.098 08:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:57.098 08:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:57.098 08:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:57.356 request: 00:21:57.356 { 00:21:57.356 "name": "nvme0", 00:21:57.356 "trtype": "tcp", 00:21:57.356 "traddr": "10.0.0.2", 00:21:57.356 "adrfam": "ipv4", 00:21:57.356 "trsvcid": "4420", 00:21:57.356 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:57.356 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:57.356 "prchk_reftag": false, 00:21:57.356 "prchk_guard": false, 00:21:57.356 "hdgst": false, 00:21:57.356 "ddgst": false, 00:21:57.356 
"dhchap_key": "key0", 00:21:57.356 "dhchap_ctrlr_key": "key1", 00:21:57.356 "method": "bdev_nvme_attach_controller", 00:21:57.356 "req_id": 1 00:21:57.356 } 00:21:57.356 Got JSON-RPC error response 00:21:57.356 response: 00:21:57.356 { 00:21:57.356 "code": -5, 00:21:57.356 "message": "Input/output error" 00:21:57.356 } 00:21:57.356 08:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:21:57.356 08:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:57.356 08:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:57.356 08:08:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:57.356 08:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:57.356 08:08:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:21:57.614 00:21:57.614 08:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:21:57.614 08:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:21:57.614 08:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:57.871 08:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:57.871 08:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:57.871 08:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:58.129 08:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:21:58.129 08:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:21:58.129 08:08:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 1964680 00:21:58.129 08:08:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 1964680 ']' 00:21:58.129 08:08:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 1964680 00:21:58.129 08:08:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:21:58.129 08:08:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:58.129 08:08:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1964680 00:21:58.129 08:08:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:58.129 08:08:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:58.129 08:08:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1964680' 00:21:58.129 killing process with pid 1964680 00:21:58.129 08:08:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 1964680 00:21:58.129 08:08:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 1964680 
00:21:58.695 08:08:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:21:58.695 08:08:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:58.695 08:08:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:21:58.695 08:08:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:58.695 08:08:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:21:58.695 08:08:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:58.695 08:08:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:58.695 rmmod nvme_tcp 00:21:58.695 rmmod nvme_fabrics 00:21:58.695 rmmod nvme_keyring 00:21:58.695 08:08:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:58.695 08:08:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:21:58.695 08:08:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:21:58.695 08:08:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 1987158 ']' 00:21:58.695 08:08:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 1987158 00:21:58.695 08:08:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 1987158 ']' 00:21:58.695 08:08:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 1987158 00:21:58.695 08:08:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:21:58.695 08:08:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:58.695 08:08:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1987158 00:21:58.695 08:08:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:58.695 08:08:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:58.695 08:08:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1987158' 00:21:58.695 killing process with pid 1987158 00:21:58.695 08:08:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 1987158 00:21:58.695 08:08:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 1987158 00:21:58.955 08:08:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:58.955 08:08:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:58.955 08:08:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:58.955 08:08:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:58.955 08:08:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:58.955 08:08:50 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:58.955 08:08:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:58.955 08:08:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:00.879 08:08:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:00.879 08:08:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.1V9 /tmp/spdk.key-sha256.0Cx /tmp/spdk.key-sha384.h3D /tmp/spdk.key-sha512.aBO /tmp/spdk.key-sha512.gn3 /tmp/spdk.key-sha384.GXs /tmp/spdk.key-sha256.8GK '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:22:00.879 00:22:00.879 real 3m8.656s 00:22:00.879 user 7m19.304s 00:22:00.879 sys 0m24.873s 00:22:00.879 08:08:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:00.879 08:08:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.879 ************************************ 00:22:00.879 END TEST nvmf_auth_target 00:22:00.879 ************************************ 00:22:00.879 08:08:52 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:00.879 08:08:52 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:22:00.879 08:08:52 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:00.879 08:08:52 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:22:00.879 08:08:52 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:00.879 08:08:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:00.879 ************************************ 00:22:00.879 START TEST nvmf_bdevio_no_huge 00:22:00.879 ************************************ 00:22:00.879 08:08:52 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:01.139 * Looking for test storage... 00:22:01.139 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:01.139 08:08:52 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:01.139 08:08:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:22:01.139 08:08:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:01.139 08:08:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:01.139 08:08:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:01.139 08:08:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:01.139 08:08:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:01.139 08:08:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:01.139 08:08:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:01.139 08:08:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:01.139 08:08:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:01.139 08:08:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:01.139 08:08:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:01.139 08:08:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:01.139 08:08:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:01.139 08:08:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:01.139 08:08:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:01.139 08:08:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
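Each test sources nvmf/common.sh, which derives the host identity from nvme-cli as traced above; a minimal sketch of that derivation (the uuid shown in the trace is the one this host generated, and the NVME_HOSTID extraction is an assumed equivalent of what common.sh assigns):

NVME_HOSTNQN=$(nvme gen-hostnqn)        # nqn.2014-08.org.nvmexpress:uuid:<host-uuid>
NVME_HOSTID=${NVME_HOSTNQN##*uuid:}     # trace shows the same uuid landing in NVME_HOSTID
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")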
00:22:01.139 08:08:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:01.139 08:08:52 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:01.139 08:08:52 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:01.139 08:08:52 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:01.139 08:08:52 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:01.139 08:08:52 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:01.139 08:08:52 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:01.139 08:08:52 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:22:01.139 08:08:52 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:01.139 08:08:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:22:01.139 08:08:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:01.139 08:08:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:01.139 08:08:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:01.139 08:08:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:01.139 08:08:52 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:01.139 08:08:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:01.139 08:08:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:01.139 08:08:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:01.139 08:08:52 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:01.139 08:08:52 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:01.139 08:08:52 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:22:01.139 08:08:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:01.139 08:08:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:01.139 08:08:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:01.139 08:08:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:01.139 08:08:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:01.139 08:08:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:01.139 08:08:52 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:01.139 08:08:52 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:01.139 08:08:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:01.139 08:08:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:01.139 08:08:52 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:22:01.139 08:08:52 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:03.095 08:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:03.095 08:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:22:03.095 08:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:03.095 08:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:03.095 08:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:03.095 08:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:03.095 08:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:03.095 08:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:22:03.095 08:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:03.095 08:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:22:03.095 08:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:22:03.095 08:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:22:03.095 08:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:22:03.095 08:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:22:03.095 08:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:22:03.095 08:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:03.095 08:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 
00:22:03.095 08:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:03.095 08:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:03.095 08:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:03.095 08:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:03.095 08:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:03.095 08:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:03.095 08:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:03.095 08:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:03.095 08:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:03.095 08:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:03.095 08:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:03.095 08:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:03.095 08:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:03.095 08:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:03.095 08:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:03.095 08:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:03.095 08:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:03.095 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:03.095 08:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:03.095 08:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:03.095 08:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:03.095 08:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:03.095 08:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:03.095 08:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:03.095 08:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:03.095 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:03.095 08:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:03.095 08:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:03.095 08:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:03.095 08:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:03.095 08:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:03.095 08:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:03.095 08:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:03.095 08:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:03.095 
08:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:03.095 08:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:03.095 08:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:03.095 08:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:03.095 08:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:03.095 08:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:03.095 08:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:03.095 08:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:03.095 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:03.095 08:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:03.095 08:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:03.095 08:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:03.095 08:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:03.095 08:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:03.095 08:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:03.095 08:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:03.095 08:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:03.095 08:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:03.095 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:03.095 08:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:03.095 08:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:03.095 08:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:22:03.095 08:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:03.095 08:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:03.095 08:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:03.095 08:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:03.095 08:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:03.095 08:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:03.095 08:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:03.095 08:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:03.095 08:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:03.095 08:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:03.095 08:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:03.095 08:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:03.095 08:08:54 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:03.095 08:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:03.095 08:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:03.095 08:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:03.096 08:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:03.096 08:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:03.096 08:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:03.096 08:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:03.096 08:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:03.096 08:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:03.096 08:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:03.096 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:03.096 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.163 ms 00:22:03.096 00:22:03.096 --- 10.0.0.2 ping statistics --- 00:22:03.096 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:03.096 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:22:03.096 08:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:03.096 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:03.096 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.109 ms 00:22:03.096 00:22:03.096 --- 10.0.0.1 ping statistics --- 00:22:03.096 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:03.096 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:22:03.096 08:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:03.096 08:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:22:03.096 08:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:03.096 08:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:03.096 08:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:03.096 08:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:03.096 08:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:03.096 08:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:03.096 08:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:03.096 08:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:22:03.096 08:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:03.096 08:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:03.096 08:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:03.096 08:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=1989804 00:22:03.096 08:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:22:03.096 08:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 1989804 00:22:03.096 08:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 1989804 ']' 00:22:03.096 08:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:03.096 08:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:03.096 08:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:03.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:03.096 08:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:03.096 08:08:54 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:03.096 [2024-07-13 08:08:54.797400] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:22:03.096 [2024-07-13 08:08:54.797488] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:22:03.354 [2024-07-13 08:08:54.876245] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:03.354 [2024-07-13 08:08:54.968228] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:03.354 [2024-07-13 08:08:54.968283] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:03.354 [2024-07-13 08:08:54.968300] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:03.354 [2024-07-13 08:08:54.968313] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:03.354 [2024-07-13 08:08:54.968325] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
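The --no-huge run starts the target inside the cvl_0_0_ns_spdk namespace with a 1024 MB non-hugepage memory cap (-s 1024), then blocks until the RPC socket answers. A condensed sketch of the launch traced above (waitforlisten's polling loop elided):

ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 &
nvmfpid=$!
waitforlisten $nvmfpid   # polls /var/tmp/spdk.sock until the app is up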
00:22:03.354 [2024-07-13 08:08:54.968415] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:22:03.354 [2024-07-13 08:08:54.968531] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:22:03.354 [2024-07-13 08:08:54.968905] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:22:03.354 [2024-07-13 08:08:54.968924] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:03.354 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:03.354 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:22:03.354 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:03.354 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:03.354 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:03.613 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:03.613 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:03.613 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:03.613 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:03.613 [2024-07-13 08:08:55.097210] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:03.613 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:03.613 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:03.613 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:03.613 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:03.613 Malloc0 00:22:03.613 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:03.613 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:03.613 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:03.613 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:03.613 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:03.613 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:03.613 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:03.613 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:03.613 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:03.613 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:03.613 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:03.613 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:03.613 [2024-07-13 08:08:55.135648] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:03.613 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:03.613 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:22:03.613 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:22:03.613 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:22:03.613 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:22:03.613 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:03.613 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:03.613 { 00:22:03.613 "params": { 00:22:03.613 "name": "Nvme$subsystem", 00:22:03.613 "trtype": "$TEST_TRANSPORT", 00:22:03.613 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:03.613 "adrfam": "ipv4", 00:22:03.613 "trsvcid": "$NVMF_PORT", 00:22:03.613 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:03.613 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:03.613 "hdgst": ${hdgst:-false}, 00:22:03.613 "ddgst": ${ddgst:-false} 00:22:03.613 }, 00:22:03.613 "method": "bdev_nvme_attach_controller" 00:22:03.613 } 00:22:03.613 EOF 00:22:03.613 )") 00:22:03.613 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:22:03.613 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:22:03.613 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:22:03.613 08:08:55 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:03.613 "params": { 00:22:03.613 "name": "Nvme1", 00:22:03.613 "trtype": "tcp", 00:22:03.613 "traddr": "10.0.0.2", 00:22:03.613 "adrfam": "ipv4", 00:22:03.613 "trsvcid": "4420", 00:22:03.613 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:03.613 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:03.613 "hdgst": false, 00:22:03.613 "ddgst": false 00:22:03.613 }, 00:22:03.613 "method": "bdev_nvme_attach_controller" 00:22:03.613 }' 00:22:03.613 [2024-07-13 08:08:55.182389] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
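Before bdevio runs, the target side is assembled with four RPCs (traced above), and gen_nvmf_target_json renders the matching attach parameters that bdevio reads from /dev/fd/62. The equivalent rpc_cmd sequence as run here:

rpc_cmd nvmf_create_transport -t tcp -o -u 8192
rpc_cmd bdev_malloc_create 64 512 -b Malloc0
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420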
00:22:03.613 [2024-07-13 08:08:55.182460] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid1989837 ] 00:22:03.613 [2024-07-13 08:08:55.241770] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:03.613 [2024-07-13 08:08:55.329385] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:03.613 [2024-07-13 08:08:55.329436] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:03.613 [2024-07-13 08:08:55.329439] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:04.179 I/O targets: 00:22:04.179 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:22:04.179 00:22:04.179 00:22:04.179 CUnit - A unit testing framework for C - Version 2.1-3 00:22:04.179 http://cunit.sourceforge.net/ 00:22:04.179 00:22:04.179 00:22:04.179 Suite: bdevio tests on: Nvme1n1 00:22:04.179 Test: blockdev write read block ...passed 00:22:04.179 Test: blockdev write zeroes read block ...passed 00:22:04.179 Test: blockdev write zeroes read no split ...passed 00:22:04.179 Test: blockdev write zeroes read split ...passed 00:22:04.179 Test: blockdev write zeroes read split partial ...passed 00:22:04.179 Test: blockdev reset ...[2024-07-13 08:08:55.849037] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:04.179 [2024-07-13 08:08:55.849158] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc994e0 (9): Bad file descriptor 00:22:04.437 [2024-07-13 08:08:55.990468] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:22:04.437 passed 00:22:04.437 Test: blockdev write read 8 blocks ...passed 00:22:04.437 Test: blockdev write read size > 128k ...passed 00:22:04.437 Test: blockdev write read invalid size ...passed 00:22:04.437 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:04.437 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:04.437 Test: blockdev write read max offset ...passed 00:22:04.437 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:04.437 Test: blockdev writev readv 8 blocks ...passed 00:22:04.696 Test: blockdev writev readv 30 x 1block ...passed 00:22:04.696 Test: blockdev writev readv block ...passed 00:22:04.696 Test: blockdev writev readv size > 128k ...passed 00:22:04.696 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:04.696 Test: blockdev comparev and writev ...[2024-07-13 08:08:56.248748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:04.696 [2024-07-13 08:08:56.248783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:04.696 [2024-07-13 08:08:56.248807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:04.696 [2024-07-13 08:08:56.248824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:04.696 [2024-07-13 08:08:56.249184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:04.696 [2024-07-13 08:08:56.249209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:04.696 [2024-07-13 08:08:56.249230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:04.696 [2024-07-13 08:08:56.249246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:04.696 [2024-07-13 08:08:56.249587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:04.696 [2024-07-13 08:08:56.249611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:04.696 [2024-07-13 08:08:56.249632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:04.696 [2024-07-13 08:08:56.249648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:04.696 [2024-07-13 08:08:56.249984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:04.696 [2024-07-13 08:08:56.250008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:04.696 [2024-07-13 08:08:56.250028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:04.696 [2024-07-13 08:08:56.250044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:04.696 passed 00:22:04.696 Test: blockdev nvme passthru rw ...passed 00:22:04.696 Test: blockdev nvme passthru vendor specific ...[2024-07-13 08:08:56.334156] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:04.696 [2024-07-13 08:08:56.334182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:04.696 [2024-07-13 08:08:56.334355] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:04.696 [2024-07-13 08:08:56.334384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:04.696 [2024-07-13 08:08:56.334558] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:04.696 [2024-07-13 08:08:56.334581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:04.696 [2024-07-13 08:08:56.334746] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:04.696 [2024-07-13 08:08:56.334768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:04.696 passed 00:22:04.696 Test: blockdev nvme admin passthru ...passed 00:22:04.696 Test: blockdev copy ...passed 00:22:04.696 00:22:04.696 Run Summary: Type Total Ran Passed Failed Inactive 00:22:04.696 suites 1 1 n/a 0 0 00:22:04.696 tests 23 23 23 0 0 00:22:04.696 asserts 152 152 152 0 n/a 00:22:04.696 00:22:04.696 Elapsed time = 1.420 seconds 00:22:05.269 08:08:56 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:05.269 08:08:56 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.269 08:08:56 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:05.269 08:08:56 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.269 08:08:56 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:22:05.269 08:08:56 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:22:05.269 08:08:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:05.269 08:08:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:22:05.269 08:08:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:05.269 08:08:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:22:05.269 08:08:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:05.269 08:08:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:05.269 rmmod nvme_tcp 00:22:05.269 rmmod nvme_fabrics 00:22:05.269 rmmod nvme_keyring 00:22:05.269 08:08:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:05.269 08:08:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:22:05.269 08:08:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:22:05.269 08:08:56 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 1989804 ']' 00:22:05.269 08:08:56 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 1989804 00:22:05.269 08:08:56 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 1989804 ']' 00:22:05.269 08:08:56 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 1989804 00:22:05.269 08:08:56 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:22:05.269 08:08:56 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:05.269 08:08:56 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1989804 00:22:05.269 08:08:56 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:22:05.269 08:08:56 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:22:05.269 08:08:56 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1989804' 00:22:05.269 killing process with pid 1989804 00:22:05.269 08:08:56 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 1989804 00:22:05.269 08:08:56 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 1989804 00:22:05.526 08:08:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:05.526 08:08:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:05.526 08:08:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:05.526 08:08:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:05.526 08:08:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:05.526 08:08:57 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:05.526 08:08:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:05.526 08:08:57 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:08.056 08:08:59 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:08.056 00:22:08.056 real 0m6.650s 00:22:08.056 user 0m11.998s 00:22:08.056 sys 0m2.545s 00:22:08.056 08:08:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:08.056 08:08:59 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:08.056 ************************************ 00:22:08.056 END TEST nvmf_bdevio_no_huge 00:22:08.056 ************************************ 00:22:08.056 08:08:59 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:08.057 08:08:59 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:08.057 08:08:59 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:08.057 08:08:59 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:08.057 08:08:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:08.057 ************************************ 00:22:08.057 START TEST nvmf_tls 00:22:08.057 ************************************ 00:22:08.057 08:08:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:08.057 * Looking for test storage... 
00:22:08.057 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:08.057 08:08:59 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:08.057 08:08:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:22:08.057 08:08:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:08.057 08:08:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:08.057 08:08:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:08.057 08:08:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:08.057 08:08:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:08.057 08:08:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:08.057 08:08:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:08.057 08:08:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:08.057 08:08:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:08.057 08:08:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:08.057 08:08:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:08.057 08:08:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:08.057 08:08:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:08.057 08:08:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:08.057 08:08:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:08.057 08:08:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:08.057 08:08:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:08.057 08:08:59 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:08.057 08:08:59 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:08.057 08:08:59 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:08.057 08:08:59 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:08.057 08:08:59 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:08.057 08:08:59 nvmf_tcp.nvmf_tls -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:08.057 08:08:59 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:22:08.057 08:08:59 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:08.057 08:08:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:22:08.057 08:08:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:08.057 08:08:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:08.057 08:08:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:08.057 08:08:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:08.057 08:08:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:08.057 08:08:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:08.057 08:08:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:08.057 08:08:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:08.057 08:08:59 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:08.057 08:08:59 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:22:08.057 08:08:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:08.057 08:08:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:08.057 08:08:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:08.057 08:08:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:08.057 08:08:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:08.057 08:08:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:08.057 08:08:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:08.057 08:08:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:08.057 08:08:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:08.057 08:08:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:08.057 08:08:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:22:08.057 08:08:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:09.957 08:09:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:09.957 08:09:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:22:09.957 
08:09:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:09.957 08:09:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:09.957 08:09:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:09.957 08:09:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:09.957 08:09:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:09.957 08:09:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:22:09.957 08:09:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:09.957 08:09:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:22:09.957 08:09:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:22:09.957 08:09:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:22:09.957 08:09:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:22:09.957 08:09:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:22:09.957 08:09:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:22:09.957 08:09:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:09.957 08:09:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:09.957 08:09:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:09.957 08:09:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:09.957 08:09:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:09.957 08:09:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:09.958 08:09:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:09.958 08:09:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:09.958 08:09:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:09.958 08:09:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:09.958 08:09:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:09.958 08:09:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:09.958 08:09:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:09.958 08:09:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:09.958 08:09:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:09.958 08:09:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:09.958 08:09:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:09.958 08:09:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:09.958 08:09:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:09.958 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:09.958 08:09:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:09.958 08:09:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:09.958 08:09:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:09.958 08:09:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:09.958 08:09:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:09.958 08:09:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 
-- # for pci in "${pci_devs[@]}" 00:22:09.958 08:09:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:09.958 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:09.958 08:09:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:09.958 08:09:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:09.958 08:09:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:09.958 08:09:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:09.958 08:09:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:09.958 08:09:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:09.958 08:09:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:09.958 08:09:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:09.958 08:09:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:09.958 08:09:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:09.958 08:09:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:09.958 08:09:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:09.958 08:09:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:09.958 08:09:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:09.958 08:09:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:09.958 08:09:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:09.958 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:09.958 08:09:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:09.958 08:09:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:09.958 08:09:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:09.958 08:09:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:09.958 08:09:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:09.958 08:09:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:09.958 08:09:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:09.958 08:09:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:09.958 08:09:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:09.958 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:09.958 08:09:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:09.958 08:09:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:09.958 08:09:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:22:09.958 08:09:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:09.958 08:09:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:09.958 08:09:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:09.958 08:09:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:09.958 08:09:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:09.958 08:09:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:09.958 08:09:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 
-- # (( 2 > 1 )) 00:22:09.958 08:09:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:09.958 08:09:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:09.958 08:09:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:09.958 08:09:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:09.958 08:09:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:09.958 08:09:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:09.958 08:09:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:09.958 08:09:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:09.958 08:09:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:09.958 08:09:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:09.958 08:09:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:09.958 08:09:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:09.958 08:09:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:09.958 08:09:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:09.958 08:09:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:09.958 08:09:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:09.958 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:09.958 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.120 ms 00:22:09.958 00:22:09.958 --- 10.0.0.2 ping statistics --- 00:22:09.958 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:09.958 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:22:09.958 08:09:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:09.958 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:09.958 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:22:09.958 00:22:09.958 --- 10.0.0.1 ping statistics --- 00:22:09.958 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:09.958 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:22:09.958 08:09:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:09.958 08:09:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:22:09.958 08:09:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:09.958 08:09:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:09.958 08:09:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:09.958 08:09:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:09.958 08:09:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:09.958 08:09:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:09.958 08:09:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:09.958 08:09:01 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:22:09.958 08:09:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:09.958 08:09:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:09.958 08:09:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:09.958 08:09:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1992105 00:22:09.958 08:09:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:22:09.958 08:09:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1992105 00:22:09.958 08:09:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1992105 ']' 00:22:09.958 08:09:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:09.958 08:09:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:09.958 08:09:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:09.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:09.958 08:09:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:09.958 08:09:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:09.958 [2024-07-13 08:09:01.423744] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:22:09.958 [2024-07-13 08:09:01.423848] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:09.958 EAL: No free 2048 kB hugepages reported on node 1 00:22:09.958 [2024-07-13 08:09:01.492717] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:09.958 [2024-07-13 08:09:01.587181] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:09.958 [2024-07-13 08:09:01.587254] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
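The nvmf_tcp_init sequence traced above builds the loopback topology these TLS tests run on: the first E810 port (cvl_0_0) is moved into a private network namespace as the target side at 10.0.0.2, while the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, with an iptables rule admitting the NVMe/TCP port. A condensed sketch of that bring-up, assuming the interface names observed on this particular host:

    # Sketch of nvmf/common.sh nvmf_tcp_init, condensed; cvl_0_0/cvl_0_1 are
    # the ice-driver netdev names from this run, not fixed names.
    NS=cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"                     # target port into its own namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side, root namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec "$NS" ping -c 1 10.0.0.1              # target -> initiator

Running the target under "ip netns exec $NS" afterwards (the NVMF_TARGET_NS_CMD prefix visible in the trace) is what lets a single machine exercise a real NIC-to-NIC TCP path.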
00:22:09.958 [2024-07-13 08:09:01.587281] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:09.958 [2024-07-13 08:09:01.587295] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:09.958 [2024-07-13 08:09:01.587307] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:09.958 [2024-07-13 08:09:01.587336] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:09.958 08:09:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:09.958 08:09:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:09.958 08:09:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:09.958 08:09:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:09.958 08:09:01 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:09.958 08:09:01 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:09.958 08:09:01 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:22:09.958 08:09:01 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:22:10.216 true 00:22:10.216 08:09:01 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:10.217 08:09:01 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:22:10.474 08:09:02 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:22:10.474 08:09:02 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:22:10.474 08:09:02 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:10.731 08:09:02 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:10.731 08:09:02 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:22:10.989 08:09:02 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:22:10.989 08:09:02 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:22:10.989 08:09:02 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:22:11.246 08:09:02 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:11.246 08:09:02 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:22:11.503 08:09:03 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:22:11.503 08:09:03 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:22:11.503 08:09:03 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:11.503 08:09:03 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:22:11.760 08:09:03 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:22:11.760 08:09:03 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:22:11.760 08:09:03 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:22:12.017 08:09:03 nvmf_tcp.nvmf_tls -- 
target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:12.017 08:09:03 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:22:12.275 08:09:03 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:22:12.275 08:09:03 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:22:12.275 08:09:03 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:22:12.532 08:09:04 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:12.533 08:09:04 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:22:12.790 08:09:04 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:22:12.790 08:09:04 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:22:12.790 08:09:04 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:22:12.790 08:09:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:22:12.790 08:09:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:22:12.790 08:09:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:22:12.790 08:09:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:22:12.790 08:09:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:22:12.790 08:09:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:22:12.790 08:09:04 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:12.790 08:09:04 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:22:12.790 08:09:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:22:12.790 08:09:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:22:12.790 08:09:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:22:12.790 08:09:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:22:12.790 08:09:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:22:12.790 08:09:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:22:13.047 08:09:04 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:13.047 08:09:04 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:22:13.047 08:09:04 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.G0CmMNDdTO 00:22:13.047 08:09:04 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:22:13.047 08:09:04 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.VOjR311FPg 00:22:13.047 08:09:04 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:13.047 08:09:04 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:13.047 08:09:04 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.G0CmMNDdTO 00:22:13.047 08:09:04 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.VOjR311FPg 00:22:13.047 08:09:04 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
sock_impl_set_options -i ssl --tls-version 13 00:22:13.305 08:09:04 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:22:13.562 08:09:05 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.G0CmMNDdTO 00:22:13.562 08:09:05 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.G0CmMNDdTO 00:22:13.562 08:09:05 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:13.820 [2024-07-13 08:09:05.513772] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:13.820 08:09:05 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:14.392 08:09:05 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:14.392 [2024-07-13 08:09:06.067252] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:14.392 [2024-07-13 08:09:06.067492] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:14.392 08:09:06 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:14.672 malloc0 00:22:14.672 08:09:06 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:14.929 08:09:06 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.G0CmMNDdTO 00:22:15.187 [2024-07-13 08:09:06.820565] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:15.187 08:09:06 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.G0CmMNDdTO 00:22:15.187 EAL: No free 2048 kB hugepages reported on node 1 00:22:27.380 Initializing NVMe Controllers 00:22:27.380 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:27.380 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:27.380 Initialization complete. Launching workers. 
00:22:27.380 ======================================================== 00:22:27.380 Latency(us) 00:22:27.380 Device Information : IOPS MiB/s Average min max 00:22:27.380 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7746.27 30.26 8264.42 1262.13 9860.15 00:22:27.380 ======================================================== 00:22:27.380 Total : 7746.27 30.26 8264.42 1262.13 9860.15 00:22:27.380 00:22:27.380 08:09:16 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.G0CmMNDdTO 00:22:27.380 08:09:16 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:27.380 08:09:16 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:27.380 08:09:16 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:27.380 08:09:16 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.G0CmMNDdTO' 00:22:27.380 08:09:16 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:27.380 08:09:16 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1994417 00:22:27.380 08:09:16 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:27.380 08:09:16 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:27.380 08:09:16 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1994417 /var/tmp/bdevperf.sock 00:22:27.380 08:09:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1994417 ']' 00:22:27.380 08:09:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:27.380 08:09:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:27.380 08:09:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:27.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:27.380 08:09:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:27.380 08:09:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:27.380 [2024-07-13 08:09:16.984890] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:22:27.380 [2024-07-13 08:09:16.984984] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1994417 ] 00:22:27.380 EAL: No free 2048 kB hugepages reported on node 1 00:22:27.381 [2024-07-13 08:09:17.046731] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:27.381 [2024-07-13 08:09:17.132093] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:27.381 08:09:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:27.381 08:09:17 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:27.381 08:09:17 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.G0CmMNDdTO 00:22:27.381 [2024-07-13 08:09:17.509532] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:27.381 [2024-07-13 08:09:17.509663] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:27.381 TLSTESTn1 00:22:27.381 08:09:17 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:27.381 Running I/O for 10 seconds... 00:22:37.341 00:22:37.341 Latency(us) 00:22:37.341 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:37.341 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:37.341 Verification LBA range: start 0x0 length 0x2000 00:22:37.341 TLSTESTn1 : 10.04 3107.14 12.14 0.00 0.00 41094.26 7767.23 59807.67 00:22:37.341 =================================================================================================================== 00:22:37.341 Total : 3107.14 12.14 0.00 0.00 41094.26 7767.23 59807.67 00:22:37.341 0 00:22:37.341 08:09:27 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:37.341 08:09:27 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 1994417 00:22:37.341 08:09:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1994417 ']' 00:22:37.341 08:09:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1994417 00:22:37.341 08:09:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:37.341 08:09:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:37.341 08:09:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1994417 00:22:37.341 08:09:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:37.341 08:09:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:37.341 08:09:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1994417' 00:22:37.341 killing process with pid 1994417 00:22:37.341 08:09:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1994417 00:22:37.342 Received shutdown signal, test time was about 10.000000 seconds 00:22:37.342 00:22:37.342 Latency(us) 00:22:37.342 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:22:37.342 =================================================================================================================== 00:22:37.342 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:37.342 [2024-07-13 08:09:27.829723] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:37.342 08:09:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1994417 00:22:37.342 08:09:28 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.VOjR311FPg 00:22:37.342 08:09:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:22:37.342 08:09:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.VOjR311FPg 00:22:37.342 08:09:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:22:37.342 08:09:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:37.342 08:09:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:22:37.342 08:09:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:37.342 08:09:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.VOjR311FPg 00:22:37.342 08:09:28 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:37.342 08:09:28 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:37.342 08:09:28 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:37.342 08:09:28 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.VOjR311FPg' 00:22:37.342 08:09:28 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:37.342 08:09:28 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1995727 00:22:37.342 08:09:28 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:37.342 08:09:28 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:37.342 08:09:28 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1995727 /var/tmp/bdevperf.sock 00:22:37.342 08:09:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1995727 ']' 00:22:37.342 08:09:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:37.342 08:09:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:37.342 08:09:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:37.342 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:37.342 08:09:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:37.342 08:09:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:37.342 [2024-07-13 08:09:28.102737] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:22:37.342 [2024-07-13 08:09:28.102829] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1995727 ] 00:22:37.342 EAL: No free 2048 kB hugepages reported on node 1 00:22:37.342 [2024-07-13 08:09:28.161138] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:37.342 [2024-07-13 08:09:28.241608] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:37.342 08:09:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:37.342 08:09:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:37.342 08:09:28 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.VOjR311FPg 00:22:37.342 [2024-07-13 08:09:28.592727] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:37.342 [2024-07-13 08:09:28.592872] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:37.342 [2024-07-13 08:09:28.598330] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:37.342 [2024-07-13 08:09:28.598724] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x64bab0 (107): Transport endpoint is not connected 00:22:37.342 [2024-07-13 08:09:28.599712] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x64bab0 (9): Bad file descriptor 00:22:37.342 [2024-07-13 08:09:28.600712] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:37.342 [2024-07-13 08:09:28.600733] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:37.342 [2024-07-13 08:09:28.600752] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
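The failure dumped below is the intended outcome: the initiator dialed in with the second key (/tmp/tmp.VOjR311FPg) while the target's host entry was registered with the first, so the TLS handshake fails and the connection is torn down. Both keys were produced earlier by format_interchange_psk (target/tls.sh@118-119), which wraps the configured key bytes in the NVMe TLS PSK interchange format. A minimal sketch of what the inline python in nvmf/common.sh's format_key appears to compute, namely the key bytes plus their CRC-32, base64-encoded; the little-endian placement of the CRC is my assumption, inferred from the interchange-format definition rather than from this log:

    # Sketch of format_key: "NVMeTLSkey-1:<hash>:<base64(key || crc32(key))>:"
    # Assumption: CRC-32 is appended little-endian before base64 encoding.
    format_key() {
        local prefix=$1 key=$2 digest=$3
        python3 -c 'import base64,sys,zlib; key=sys.argv[2].encode(); crc=zlib.crc32(key).to_bytes(4,"little"); print(f"{sys.argv[1]}:{int(sys.argv[3]):02x}:{base64.b64encode(key+crc).decode()}:")' "$prefix" "$key" "$digest"
    }
    format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1
    # expected: NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:

The :01:/:02: field is the hash identifier (01 for SHA-256, 02 for SHA-384 in the interchange format), and the trailing CRC lets a consumer detect a corrupted key file before it ever attempts a handshake.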
00:22:37.342 request: 00:22:37.342 { 00:22:37.342 "name": "TLSTEST", 00:22:37.342 "trtype": "tcp", 00:22:37.342 "traddr": "10.0.0.2", 00:22:37.342 "adrfam": "ipv4", 00:22:37.342 "trsvcid": "4420", 00:22:37.342 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:37.342 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:37.342 "prchk_reftag": false, 00:22:37.342 "prchk_guard": false, 00:22:37.342 "hdgst": false, 00:22:37.342 "ddgst": false, 00:22:37.342 "psk": "/tmp/tmp.VOjR311FPg", 00:22:37.342 "method": "bdev_nvme_attach_controller", 00:22:37.342 "req_id": 1 00:22:37.342 } 00:22:37.342 Got JSON-RPC error response 00:22:37.342 response: 00:22:37.342 { 00:22:37.342 "code": -5, 00:22:37.342 "message": "Input/output error" 00:22:37.342 } 00:22:37.342 08:09:28 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1995727 00:22:37.342 08:09:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1995727 ']' 00:22:37.342 08:09:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1995727 00:22:37.342 08:09:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:37.342 08:09:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:37.342 08:09:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1995727 00:22:37.342 08:09:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:37.342 08:09:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:37.342 08:09:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1995727' 00:22:37.342 killing process with pid 1995727 00:22:37.342 08:09:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1995727 00:22:37.342 Received shutdown signal, test time was about 10.000000 seconds 00:22:37.342 00:22:37.342 Latency(us) 00:22:37.342 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:37.342 =================================================================================================================== 00:22:37.342 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:37.343 [2024-07-13 08:09:28.654361] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:37.343 08:09:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1995727 00:22:37.343 08:09:28 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:37.343 08:09:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:22:37.343 08:09:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:37.343 08:09:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:37.343 08:09:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:37.343 08:09:28 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.G0CmMNDdTO 00:22:37.343 08:09:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:22:37.343 08:09:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.G0CmMNDdTO 00:22:37.343 08:09:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:22:37.343 08:09:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:37.343 08:09:28 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:22:37.343 08:09:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:37.343 08:09:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.G0CmMNDdTO 00:22:37.343 08:09:28 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:37.343 08:09:28 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:37.343 08:09:28 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:22:37.343 08:09:28 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.G0CmMNDdTO' 00:22:37.343 08:09:28 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:37.343 08:09:28 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1995862 00:22:37.343 08:09:28 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:37.343 08:09:28 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:37.343 08:09:28 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1995862 /var/tmp/bdevperf.sock 00:22:37.343 08:09:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1995862 ']' 00:22:37.343 08:09:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:37.343 08:09:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:37.343 08:09:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:37.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:37.343 08:09:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:37.343 08:09:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:37.343 [2024-07-13 08:09:28.921380] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:22:37.343 [2024-07-13 08:09:28.921475] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1995862 ] 00:22:37.343 EAL: No free 2048 kB hugepages reported on node 1 00:22:37.343 [2024-07-13 08:09:28.980254] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:37.343 [2024-07-13 08:09:29.065481] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:37.600 08:09:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:37.600 08:09:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:37.600 08:09:29 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.G0CmMNDdTO 00:22:37.858 [2024-07-13 08:09:29.447385] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:37.858 [2024-07-13 08:09:29.447507] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:37.858 [2024-07-13 08:09:29.457648] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:22:37.858 [2024-07-13 08:09:29.457685] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:22:37.858 [2024-07-13 08:09:29.457743] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:37.858 [2024-07-13 08:09:29.458425] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x132cab0 (107): Transport endpoint is not connected 00:22:37.858 [2024-07-13 08:09:29.459399] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x132cab0 (9): Bad file descriptor 00:22:37.858 [2024-07-13 08:09:29.460399] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:37.858 [2024-07-13 08:09:29.460441] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:37.858 [2024-07-13 08:09:29.460461] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
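Each of these negative cases is wrapped in NOT (the common/autotest_common.sh@648-675 machinery visible in the trace): bdev_nvme_attach_controller must fail, run_bdevperf returns 1, and the wrapper inverts that into a pass. A stripped-down sketch of the pattern, leaving out the expected-signal bookkeeping the real helper carries:

    # NOT: run a command that is *expected* to fail; succeed only if it does.
    NOT() {
        local es=0
        "$@" || es=$?
        if (( es > 128 )); then
            return "$es"       # died on a signal -> still a real test error
        elif (( es == 0 )); then
            return 1           # command unexpectedly succeeded
        fi
        return 0               # failed as expected
    }

    NOT false && echo "expected failure observed"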
00:22:37.858 request: 00:22:37.858 { 00:22:37.858 "name": "TLSTEST", 00:22:37.858 "trtype": "tcp", 00:22:37.858 "traddr": "10.0.0.2", 00:22:37.858 "adrfam": "ipv4", 00:22:37.858 "trsvcid": "4420", 00:22:37.858 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:37.858 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:37.858 "prchk_reftag": false, 00:22:37.858 "prchk_guard": false, 00:22:37.858 "hdgst": false, 00:22:37.858 "ddgst": false, 00:22:37.858 "psk": "/tmp/tmp.G0CmMNDdTO", 00:22:37.858 "method": "bdev_nvme_attach_controller", 00:22:37.858 "req_id": 1 00:22:37.858 } 00:22:37.858 Got JSON-RPC error response 00:22:37.858 response: 00:22:37.858 { 00:22:37.858 "code": -5, 00:22:37.858 "message": "Input/output error" 00:22:37.858 } 00:22:37.858 08:09:29 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1995862 00:22:37.858 08:09:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1995862 ']' 00:22:37.858 08:09:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1995862 00:22:37.858 08:09:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:37.858 08:09:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:37.858 08:09:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1995862 00:22:37.858 08:09:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:37.858 08:09:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:37.858 08:09:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1995862' 00:22:37.858 killing process with pid 1995862 00:22:37.858 08:09:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1995862 00:22:37.858 Received shutdown signal, test time was about 10.000000 seconds 00:22:37.858 00:22:37.858 Latency(us) 00:22:37.858 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:37.858 =================================================================================================================== 00:22:37.858 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:37.858 [2024-07-13 08:09:29.512986] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:37.858 08:09:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1995862 00:22:38.116 08:09:29 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:38.116 08:09:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:22:38.116 08:09:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:38.116 08:09:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:38.116 08:09:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:38.116 08:09:29 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.G0CmMNDdTO 00:22:38.116 08:09:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:22:38.116 08:09:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.G0CmMNDdTO 00:22:38.116 08:09:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:22:38.116 08:09:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:38.116 08:09:29 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:22:38.116 08:09:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:38.116 08:09:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.G0CmMNDdTO 00:22:38.116 08:09:29 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:38.116 08:09:29 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:22:38.116 08:09:29 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:38.116 08:09:29 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.G0CmMNDdTO' 00:22:38.116 08:09:29 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:38.116 08:09:29 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1996001 00:22:38.116 08:09:29 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:38.116 08:09:29 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:38.116 08:09:29 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1996001 /var/tmp/bdevperf.sock 00:22:38.116 08:09:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1996001 ']' 00:22:38.116 08:09:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:38.116 08:09:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:38.116 08:09:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:38.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:38.116 08:09:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:38.116 08:09:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:38.116 [2024-07-13 08:09:29.766270] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:22:38.116 [2024-07-13 08:09:29.766361] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1996001 ] 00:22:38.116 EAL: No free 2048 kB hugepages reported on node 1 00:22:38.116 [2024-07-13 08:09:29.826475] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:38.374 [2024-07-13 08:09:29.909793] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:38.374 08:09:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:38.374 08:09:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:38.374 08:09:30 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.G0CmMNDdTO 00:22:38.632 [2024-07-13 08:09:30.233929] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:38.632 [2024-07-13 08:09:30.234064] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:38.632 [2024-07-13 08:09:30.239709] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:22:38.632 [2024-07-13 08:09:30.239765] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:22:38.632 [2024-07-13 08:09:30.239821] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:38.632 [2024-07-13 08:09:30.240017] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2343ab0 (107): Transport endpoint is not connected 00:22:38.632 [2024-07-13 08:09:30.241002] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2343ab0 (9): Bad file descriptor 00:22:38.632 [2024-07-13 08:09:30.242002] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:22:38.632 [2024-07-13 08:09:30.242025] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:38.632 [2024-07-13 08:09:30.242044] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
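Every run_bdevperf invocation in this file has the same shape: start bdevperf in wait mode on a private RPC socket, attach an NVMe/TCP controller with (or without) a TLS PSK over that socket, then drive the verify workload through bdevperf.py. A sketch of the happy path, with $SPDK standing in for the jenkins workspace checkout and /tmp/key.psk a placeholder key file; the waitforlisten polling between the first two steps is elided:

    SPDK=/path/to/spdk                                  # placeholder for the checkout
    sock=/var/tmp/bdevperf.sock
    "$SPDK/build/examples/bdevperf" -m 0x4 -z -r "$sock" \
        -q 128 -o 4096 -w verify -t 10 &                # -z: wait for RPC configuration
    bdevperf_pid=$!
    "$SPDK/scripts/rpc.py" -s "$sock" bdev_nvme_attach_controller -b TLSTEST \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk /tmp/key.psk                              # presence of --psk enables TLS
    "$SPDK/examples/bdev/bdevperf/bdevperf.py" -t 20 -s "$sock" perform_tests
    kill "$bdevperf_pid"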
00:22:38.632 request: 00:22:38.632 { 00:22:38.632 "name": "TLSTEST", 00:22:38.632 "trtype": "tcp", 00:22:38.632 "traddr": "10.0.0.2", 00:22:38.632 "adrfam": "ipv4", 00:22:38.632 "trsvcid": "4420", 00:22:38.632 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:38.632 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:38.632 "prchk_reftag": false, 00:22:38.632 "prchk_guard": false, 00:22:38.632 "hdgst": false, 00:22:38.632 "ddgst": false, 00:22:38.632 "psk": "/tmp/tmp.G0CmMNDdTO", 00:22:38.632 "method": "bdev_nvme_attach_controller", 00:22:38.632 "req_id": 1 00:22:38.632 } 00:22:38.632 Got JSON-RPC error response 00:22:38.632 response: 00:22:38.632 { 00:22:38.632 "code": -5, 00:22:38.632 "message": "Input/output error" 00:22:38.632 } 00:22:38.632 08:09:30 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1996001 00:22:38.632 08:09:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1996001 ']' 00:22:38.632 08:09:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1996001 00:22:38.632 08:09:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:38.632 08:09:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:38.632 08:09:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1996001 00:22:38.632 08:09:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:38.632 08:09:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:38.632 08:09:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1996001' 00:22:38.632 killing process with pid 1996001 00:22:38.632 08:09:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1996001 00:22:38.632 Received shutdown signal, test time was about 10.000000 seconds 00:22:38.632 00:22:38.632 Latency(us) 00:22:38.632 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:38.632 =================================================================================================================== 00:22:38.632 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:38.632 [2024-07-13 08:09:30.295169] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:38.632 08:09:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1996001 00:22:38.890 08:09:30 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:38.890 08:09:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:22:38.890 08:09:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:38.890 08:09:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:38.890 08:09:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:38.890 08:09:30 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:38.890 08:09:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:22:38.890 08:09:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:38.890 08:09:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:22:38.890 08:09:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:38.890 08:09:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type 
-t run_bdevperf 00:22:38.890 08:09:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:38.890 08:09:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:22:38.890 08:09:30 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:38.890 08:09:30 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:38.890 08:09:30 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:38.890 08:09:30 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:22:38.890 08:09:30 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:38.890 08:09:30 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1996017 00:22:38.890 08:09:30 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:38.890 08:09:30 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:38.890 08:09:30 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1996017 /var/tmp/bdevperf.sock 00:22:38.890 08:09:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1996017 ']' 00:22:38.890 08:09:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:38.890 08:09:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:38.890 08:09:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:38.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:38.890 08:09:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:38.890 08:09:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:38.890 [2024-07-13 08:09:30.561574] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:22:38.890 [2024-07-13 08:09:30.561665] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1996017 ] 00:22:38.890 EAL: No free 2048 kB hugepages reported on node 1 00:22:38.890 [2024-07-13 08:09:30.619634] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:39.148 [2024-07-13 08:09:30.701757] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:39.148 08:09:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:39.148 08:09:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:39.148 08:09:30 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:22:39.406 [2024-07-13 08:09:31.041724] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:39.406 [2024-07-13 08:09:31.043664] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11f6e60 (9): Bad file descriptor 00:22:39.406 [2024-07-13 08:09:31.044661] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:22:39.406 [2024-07-13 08:09:31.044682] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:39.406 [2024-07-13 08:09:31.044709] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
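Once this last negative case (attaching with no PSK at all to a TLS-only listener) reports its error below, the trace tears the first target down (killprocess 1992105) and target/tls.sh@159-165 restarts it with a second, longer key (/tmp/tmp.Hwd1pvVHUG, generated with hash identifier 2), replaying the setup_nvmf_tgt RPC sequence first seen at target/tls.sh@49-58. Condensed, with $rpc standing in for the workspace rpc.py; the "ip netns exec cvl_0_0_ns_spdk" prefix used in the real trace is elided:

    rpc="$SPDK/scripts/rpc.py"                # placeholder path, netns prefix elided
    key=/tmp/tmp.Hwd1pvVHUG
    "$rpc" nvmf_create_transport -t tcp -o
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420 -k         # -k marks the listener TLS-secured
    "$rpc" bdev_malloc_create 32 4096 -b malloc0
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    "$rpc" nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
        nqn.2016-06.io.spdk:host1 --psk "$key"   # bind host1's TLS identity to this key

Only initiators that present both this hostnqn and the matching key get past the handshake, which is exactly what the three failing attach attempts above demonstrated from the other side.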
00:22:39.406 request: 00:22:39.406 { 00:22:39.406 "name": "TLSTEST", 00:22:39.406 "trtype": "tcp", 00:22:39.406 "traddr": "10.0.0.2", 00:22:39.406 "adrfam": "ipv4", 00:22:39.406 "trsvcid": "4420", 00:22:39.406 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:39.406 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:39.406 "prchk_reftag": false, 00:22:39.406 "prchk_guard": false, 00:22:39.406 "hdgst": false, 00:22:39.406 "ddgst": false, 00:22:39.406 "method": "bdev_nvme_attach_controller", 00:22:39.406 "req_id": 1 00:22:39.406 } 00:22:39.406 Got JSON-RPC error response 00:22:39.406 response: 00:22:39.406 { 00:22:39.406 "code": -5, 00:22:39.406 "message": "Input/output error" 00:22:39.406 } 00:22:39.406 08:09:31 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1996017 00:22:39.406 08:09:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1996017 ']' 00:22:39.406 08:09:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1996017 00:22:39.406 08:09:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:39.406 08:09:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:39.406 08:09:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1996017 00:22:39.406 08:09:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:39.406 08:09:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:39.406 08:09:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1996017' 00:22:39.406 killing process with pid 1996017 00:22:39.406 08:09:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1996017 00:22:39.406 Received shutdown signal, test time was about 10.000000 seconds 00:22:39.406 00:22:39.406 Latency(us) 00:22:39.406 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:39.406 =================================================================================================================== 00:22:39.406 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:39.406 08:09:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1996017 00:22:39.664 08:09:31 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:39.665 08:09:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:22:39.665 08:09:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:39.665 08:09:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:39.665 08:09:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:39.665 08:09:31 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 1992105 00:22:39.665 08:09:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1992105 ']' 00:22:39.665 08:09:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1992105 00:22:39.665 08:09:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:39.665 08:09:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:39.665 08:09:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1992105 00:22:39.665 08:09:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:39.665 08:09:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:39.665 08:09:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1992105' 00:22:39.665 
killing process with pid 1992105 00:22:39.665 08:09:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1992105 00:22:39.665 [2024-07-13 08:09:31.305434] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:39.665 08:09:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1992105 00:22:39.923 08:09:31 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:22:39.923 08:09:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:22:39.923 08:09:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:22:39.923 08:09:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:22:39.923 08:09:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:22:39.923 08:09:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:22:39.923 08:09:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:22:39.923 08:09:31 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:22:39.923 08:09:31 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:22:39.923 08:09:31 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.Hwd1pvVHUG 00:22:39.923 08:09:31 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:22:39.923 08:09:31 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.Hwd1pvVHUG 00:22:39.923 08:09:31 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:22:39.923 08:09:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:39.924 08:09:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:39.924 08:09:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:39.924 08:09:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1996169 00:22:39.924 08:09:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:39.924 08:09:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1996169 00:22:39.924 08:09:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1996169 ']' 00:22:39.924 08:09:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:39.924 08:09:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:39.924 08:09:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:39.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:39.924 08:09:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:39.924 08:09:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:39.924 [2024-07-13 08:09:31.639014] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:22:39.924 [2024-07-13 08:09:31.639109] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:40.183 EAL: No free 2048 kB hugepages reported on node 1 00:22:40.183 [2024-07-13 08:09:31.709195] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:40.183 [2024-07-13 08:09:31.795783] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:40.183 [2024-07-13 08:09:31.795848] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:40.183 [2024-07-13 08:09:31.795884] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:40.183 [2024-07-13 08:09:31.795900] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:40.183 [2024-07-13 08:09:31.795912] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:40.183 [2024-07-13 08:09:31.795943] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:40.183 08:09:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:40.183 08:09:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:40.183 08:09:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:40.183 08:09:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:40.183 08:09:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:40.439 08:09:31 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:40.439 08:09:31 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.Hwd1pvVHUG 00:22:40.439 08:09:31 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.Hwd1pvVHUG 00:22:40.439 08:09:31 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:40.439 [2024-07-13 08:09:32.168602] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:40.697 08:09:32 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:40.955 08:09:32 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:40.955 [2024-07-13 08:09:32.669975] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:40.955 [2024-07-13 08:09:32.670241] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:40.955 08:09:32 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:41.533 malloc0 00:22:41.533 08:09:32 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:41.533 08:09:33 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
--psk /tmp/tmp.Hwd1pvVHUG 00:22:41.797 [2024-07-13 08:09:33.471172] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:41.797 08:09:33 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Hwd1pvVHUG 00:22:41.797 08:09:33 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:41.797 08:09:33 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:41.797 08:09:33 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:41.797 08:09:33 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.Hwd1pvVHUG' 00:22:41.797 08:09:33 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:41.797 08:09:33 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1996451 00:22:41.797 08:09:33 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:41.797 08:09:33 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:41.797 08:09:33 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1996451 /var/tmp/bdevperf.sock 00:22:41.797 08:09:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1996451 ']' 00:22:41.797 08:09:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:41.797 08:09:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:41.797 08:09:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:41.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:41.797 08:09:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:41.797 08:09:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:42.056 [2024-07-13 08:09:33.532522] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:22:42.056 [2024-07-13 08:09:33.532599] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1996451 ] 00:22:42.056 EAL: No free 2048 kB hugepages reported on node 1 00:22:42.056 [2024-07-13 08:09:33.589081] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:42.056 [2024-07-13 08:09:33.672431] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:42.056 08:09:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:42.056 08:09:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:42.056 08:09:33 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Hwd1pvVHUG 00:22:42.621 [2024-07-13 08:09:34.064920] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:42.621 [2024-07-13 08:09:34.065033] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:42.621 TLSTESTn1 00:22:42.621 08:09:34 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:42.621 Running I/O for 10 seconds... 00:22:52.620 00:22:52.620 Latency(us) 00:22:52.620 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:52.620 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:52.620 Verification LBA range: start 0x0 length 0x2000 00:22:52.620 TLSTESTn1 : 10.04 3052.72 11.92 0.00 0.00 41824.58 6189.51 74177.04 00:22:52.620 =================================================================================================================== 00:22:52.620 Total : 3052.72 11.92 0.00 0.00 41824.58 6189.51 74177.04 00:22:52.620 0 00:22:52.620 08:09:44 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:52.620 08:09:44 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 1996451 00:22:52.620 08:09:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1996451 ']' 00:22:52.620 08:09:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1996451 00:22:52.620 08:09:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:52.620 08:09:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:52.620 08:09:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1996451 00:22:52.877 08:09:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:52.877 08:09:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:52.877 08:09:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1996451' 00:22:52.877 killing process with pid 1996451 00:22:52.877 08:09:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1996451 00:22:52.877 Received shutdown signal, test time was about 10.000000 seconds 00:22:52.877 00:22:52.877 Latency(us) 00:22:52.877 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:22:52.877 =================================================================================================================== 00:22:52.877 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:52.877 [2024-07-13 08:09:44.365895] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:52.877 08:09:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1996451 00:22:52.877 08:09:44 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.Hwd1pvVHUG 00:22:52.877 08:09:44 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Hwd1pvVHUG 00:22:52.877 08:09:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:22:52.877 08:09:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Hwd1pvVHUG 00:22:52.877 08:09:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:22:52.877 08:09:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:52.877 08:09:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:22:52.877 08:09:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:52.877 08:09:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Hwd1pvVHUG 00:22:52.877 08:09:44 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:52.877 08:09:44 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:52.877 08:09:44 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:52.877 08:09:44 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.Hwd1pvVHUG' 00:22:52.877 08:09:44 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:52.877 08:09:44 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1997762 00:22:52.877 08:09:44 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:52.877 08:09:44 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:52.877 08:09:44 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1997762 /var/tmp/bdevperf.sock 00:22:52.877 08:09:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1997762 ']' 00:22:52.877 08:09:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:52.877 08:09:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:52.877 08:09:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:52.877 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:52.877 08:09:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:52.877 08:09:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:53.135 [2024-07-13 08:09:44.626432] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:22:53.135 [2024-07-13 08:09:44.626509] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1997762 ] 00:22:53.135 EAL: No free 2048 kB hugepages reported on node 1 00:22:53.135 [2024-07-13 08:09:44.684306] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:53.135 [2024-07-13 08:09:44.767327] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:53.392 08:09:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:53.392 08:09:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:53.392 08:09:44 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Hwd1pvVHUG 00:22:53.650 [2024-07-13 08:09:45.145217] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:53.650 [2024-07-13 08:09:45.145296] bdev_nvme.c:6125:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:22:53.650 [2024-07-13 08:09:45.145321] bdev_nvme.c:6230:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.Hwd1pvVHUG 00:22:53.650 request: 00:22:53.650 { 00:22:53.650 "name": "TLSTEST", 00:22:53.650 "trtype": "tcp", 00:22:53.650 "traddr": "10.0.0.2", 00:22:53.650 "adrfam": "ipv4", 00:22:53.651 "trsvcid": "4420", 00:22:53.651 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:53.651 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:53.651 "prchk_reftag": false, 00:22:53.651 "prchk_guard": false, 00:22:53.651 "hdgst": false, 00:22:53.651 "ddgst": false, 00:22:53.651 "psk": "/tmp/tmp.Hwd1pvVHUG", 00:22:53.651 "method": "bdev_nvme_attach_controller", 00:22:53.651 "req_id": 1 00:22:53.651 } 00:22:53.651 Got JSON-RPC error response 00:22:53.651 response: 00:22:53.651 { 00:22:53.651 "code": -1, 00:22:53.651 "message": "Operation not permitted" 00:22:53.651 } 00:22:53.651 08:09:45 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1997762 00:22:53.651 08:09:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1997762 ']' 00:22:53.651 08:09:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1997762 00:22:53.651 08:09:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:53.651 08:09:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:53.651 08:09:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1997762 00:22:53.651 08:09:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:53.651 08:09:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:53.651 08:09:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1997762' 00:22:53.651 killing process with pid 1997762 00:22:53.651 08:09:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1997762 00:22:53.651 Received shutdown signal, test time was about 10.000000 seconds 00:22:53.651 00:22:53.651 Latency(us) 00:22:53.651 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:53.651 
=================================================================================================================== 00:22:53.651 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:53.651 08:09:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1997762 00:22:53.651 08:09:45 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:22:53.651 08:09:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:22:53.651 08:09:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:53.651 08:09:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:53.651 08:09:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:53.651 08:09:45 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 1996169 00:22:53.651 08:09:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1996169 ']' 00:22:53.651 08:09:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1996169 00:22:53.651 08:09:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:53.651 08:09:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:53.651 08:09:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1996169 00:22:53.909 08:09:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:53.909 08:09:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:53.909 08:09:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1996169' 00:22:53.909 killing process with pid 1996169 00:22:53.909 08:09:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1996169 00:22:53.909 [2024-07-13 08:09:45.401857] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:53.909 08:09:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1996169 00:22:53.909 08:09:45 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:22:53.909 08:09:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:53.909 08:09:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:53.909 08:09:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:53.909 08:09:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1997833 00:22:53.909 08:09:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:53.909 08:09:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1997833 00:22:53.909 08:09:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1997833 ']' 00:22:53.909 08:09:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:53.909 08:09:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:53.909 08:09:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:53.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
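The attach failure just above is the expected half of the negative test: after the key file was loosened to 0666, bdev_nvme refused to load it, the RPC came back as "Operation not permitted", and the NOT wrapper turned that failure into a pass. A minimal sketch of the same check, reusing the exact rpc.py invocation and paths from this run (adjust SPDK_DIR and the RPC socket for another setup):

    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    PSK=/tmp/tmp.Hwd1pvVHUG

    chmod 0666 "$PSK"    # group/other bits set: SPDK should refuse to load the key
    if "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
          -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
          -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk "$PSK"; then
        echo "loose PSK permissions were unexpectedly accepted" >&2
        exit 1
    fi
    chmod 0600 "$PSK"    # restore the owner-only mode the target accepts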
00:22:53.909 08:09:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:53.909 08:09:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:54.166 [2024-07-13 08:09:45.680655] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:22:54.166 [2024-07-13 08:09:45.680753] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:54.166 EAL: No free 2048 kB hugepages reported on node 1 00:22:54.166 [2024-07-13 08:09:45.745810] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:54.166 [2024-07-13 08:09:45.829958] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:54.166 [2024-07-13 08:09:45.830029] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:54.166 [2024-07-13 08:09:45.830060] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:54.166 [2024-07-13 08:09:45.830071] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:54.166 [2024-07-13 08:09:45.830081] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:54.166 [2024-07-13 08:09:45.830106] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:54.424 08:09:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:54.424 08:09:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:54.424 08:09:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:54.424 08:09:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:54.424 08:09:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:54.424 08:09:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:54.424 08:09:45 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.Hwd1pvVHUG 00:22:54.424 08:09:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:22:54.424 08:09:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.Hwd1pvVHUG 00:22:54.424 08:09:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:22:54.424 08:09:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:54.424 08:09:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:22:54.424 08:09:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:54.424 08:09:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.Hwd1pvVHUG 00:22:54.424 08:09:45 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.Hwd1pvVHUG 00:22:54.424 08:09:45 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:54.681 [2024-07-13 08:09:46.191499] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:54.681 08:09:46 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:54.939 
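The setup_nvmf_tgt helper being traced piecewise here reduces to six RPCs; with the key still at 0666, the sequence is expected to die at the final add_host step, which is exactly the -32603 "Internal error" seen below. Condensed sketch, rpc.py path and arguments as in this run:

    rpc="$SPDK_DIR/scripts/rpc.py"
    key=/tmp/tmp.Hwd1pvVHUG

    $rpc nvmf_create_transport -t tcp -o
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    $rpc bdev_malloc_create 32 4096 -b malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$key"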
08:09:46 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:55.196 [2024-07-13 08:09:46.724922] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:55.196 [2024-07-13 08:09:46.725159] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:55.196 08:09:46 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:55.454 malloc0 00:22:55.454 08:09:47 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:55.711 08:09:47 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Hwd1pvVHUG 00:22:55.969 [2024-07-13 08:09:47.591251] tcp.c:3589:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:22:55.969 [2024-07-13 08:09:47.591300] tcp.c:3675:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:22:55.969 [2024-07-13 08:09:47.591339] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:22:55.969 request: 00:22:55.969 { 00:22:55.969 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:55.969 "host": "nqn.2016-06.io.spdk:host1", 00:22:55.969 "psk": "/tmp/tmp.Hwd1pvVHUG", 00:22:55.969 "method": "nvmf_subsystem_add_host", 00:22:55.969 "req_id": 1 00:22:55.969 } 00:22:55.969 Got JSON-RPC error response 00:22:55.969 response: 00:22:55.969 { 00:22:55.969 "code": -32603, 00:22:55.969 "message": "Internal error" 00:22:55.969 } 00:22:55.969 08:09:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:22:55.969 08:09:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:55.969 08:09:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:55.969 08:09:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:55.969 08:09:47 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 1997833 00:22:55.969 08:09:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1997833 ']' 00:22:55.969 08:09:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1997833 00:22:55.969 08:09:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:55.969 08:09:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:55.969 08:09:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1997833 00:22:55.969 08:09:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:55.969 08:09:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:55.969 08:09:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1997833' 00:22:55.969 killing process with pid 1997833 00:22:55.969 08:09:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1997833 00:22:55.969 08:09:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1997833 00:22:56.228 08:09:47 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.Hwd1pvVHUG 00:22:56.228 08:09:47 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:22:56.228 
08:09:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:56.228 08:09:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:56.228 08:09:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:56.228 08:09:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1998121 00:22:56.228 08:09:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:56.228 08:09:47 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1998121 00:22:56.228 08:09:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1998121 ']' 00:22:56.228 08:09:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:56.228 08:09:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:56.228 08:09:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:56.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:56.228 08:09:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:56.228 08:09:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:56.228 [2024-07-13 08:09:47.949840] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:22:56.228 [2024-07-13 08:09:47.949972] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:56.486 EAL: No free 2048 kB hugepages reported on node 1 00:22:56.486 [2024-07-13 08:09:48.021036] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:56.486 [2024-07-13 08:09:48.110657] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:56.486 [2024-07-13 08:09:48.110721] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:56.486 [2024-07-13 08:09:48.110738] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:56.486 [2024-07-13 08:09:48.110753] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:56.486 [2024-07-13 08:09:48.110765] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
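With owner-only permissions restored, the same key file is accepted again in the setup that follows. For reference, the NVMeTLSkey-1 string stored in it was produced earlier by format_interchange_psk; a sketch of that derivation, assuming the usual NVMe TLS PSK interchange construction (base64 of the configured key bytes followed by their CRC32, packed little-endian), which should reproduce the key_long value seen above:

    key=00112233445566778899aabbccddeeff0011223344556677
    digest=2    # PSK hash identifier: 1 = SHA-256, 2 = SHA-384
    python3 -c '
    import base64, struct, sys, zlib
    key = sys.argv[1].encode()
    crc = struct.pack("<I", zlib.crc32(key))
    print("NVMeTLSkey-1:%02d:%s:" % (int(sys.argv[2]),
          base64.b64encode(key + crc).decode()))
    ' "$key" "$digest"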
00:22:56.486 [2024-07-13 08:09:48.110797] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:56.743 08:09:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:56.743 08:09:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:56.743 08:09:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:56.743 08:09:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:56.743 08:09:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:56.743 08:09:48 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:56.743 08:09:48 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.Hwd1pvVHUG 00:22:56.743 08:09:48 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.Hwd1pvVHUG 00:22:56.743 08:09:48 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:56.743 [2024-07-13 08:09:48.475042] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:57.001 08:09:48 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:57.257 08:09:48 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:57.514 [2024-07-13 08:09:49.016516] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:57.514 [2024-07-13 08:09:49.016764] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:57.514 08:09:49 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:57.771 malloc0 00:22:57.771 08:09:49 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:58.029 08:09:49 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Hwd1pvVHUG 00:22:58.029 [2024-07-13 08:09:49.753699] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:58.287 08:09:49 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=1998369 00:22:58.287 08:09:49 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:58.287 08:09:49 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:58.287 08:09:49 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 1998369 /var/tmp/bdevperf.sock 00:22:58.287 08:09:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1998369 ']' 00:22:58.287 08:09:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:58.287 08:09:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:58.287 08:09:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:58.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:58.287 08:09:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:58.287 08:09:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:58.287 [2024-07-13 08:09:49.815698] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:22:58.287 [2024-07-13 08:09:49.815785] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1998369 ] 00:22:58.287 EAL: No free 2048 kB hugepages reported on node 1 00:22:58.287 [2024-07-13 08:09:49.873377] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:58.287 [2024-07-13 08:09:49.955926] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:58.544 08:09:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:58.544 08:09:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:58.544 08:09:50 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Hwd1pvVHUG 00:22:58.801 [2024-07-13 08:09:50.304805] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:58.801 [2024-07-13 08:09:50.304946] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:58.801 TLSTESTn1 00:22:58.801 08:09:50 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:22:59.059 08:09:50 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:22:59.059 "subsystems": [ 00:22:59.059 { 00:22:59.059 "subsystem": "keyring", 00:22:59.059 "config": [] 00:22:59.059 }, 00:22:59.059 { 00:22:59.059 "subsystem": "iobuf", 00:22:59.059 "config": [ 00:22:59.059 { 00:22:59.059 "method": "iobuf_set_options", 00:22:59.059 "params": { 00:22:59.059 "small_pool_count": 8192, 00:22:59.059 "large_pool_count": 1024, 00:22:59.059 "small_bufsize": 8192, 00:22:59.059 "large_bufsize": 135168 00:22:59.059 } 00:22:59.059 } 00:22:59.059 ] 00:22:59.059 }, 00:22:59.059 { 00:22:59.059 "subsystem": "sock", 00:22:59.059 "config": [ 00:22:59.059 { 00:22:59.059 "method": "sock_set_default_impl", 00:22:59.059 "params": { 00:22:59.059 "impl_name": "posix" 00:22:59.059 } 00:22:59.059 }, 00:22:59.059 { 00:22:59.059 "method": "sock_impl_set_options", 00:22:59.059 "params": { 00:22:59.059 "impl_name": "ssl", 00:22:59.059 "recv_buf_size": 4096, 00:22:59.059 "send_buf_size": 4096, 00:22:59.059 "enable_recv_pipe": true, 00:22:59.059 "enable_quickack": false, 00:22:59.059 "enable_placement_id": 0, 00:22:59.059 "enable_zerocopy_send_server": true, 00:22:59.059 "enable_zerocopy_send_client": false, 00:22:59.059 "zerocopy_threshold": 0, 00:22:59.059 "tls_version": 0, 00:22:59.059 "enable_ktls": false 00:22:59.059 } 00:22:59.059 }, 00:22:59.059 { 00:22:59.059 "method": "sock_impl_set_options", 00:22:59.059 "params": { 00:22:59.059 "impl_name": "posix", 00:22:59.059 "recv_buf_size": 2097152, 00:22:59.059 
"send_buf_size": 2097152, 00:22:59.059 "enable_recv_pipe": true, 00:22:59.059 "enable_quickack": false, 00:22:59.059 "enable_placement_id": 0, 00:22:59.059 "enable_zerocopy_send_server": true, 00:22:59.059 "enable_zerocopy_send_client": false, 00:22:59.059 "zerocopy_threshold": 0, 00:22:59.059 "tls_version": 0, 00:22:59.059 "enable_ktls": false 00:22:59.059 } 00:22:59.059 } 00:22:59.059 ] 00:22:59.059 }, 00:22:59.059 { 00:22:59.059 "subsystem": "vmd", 00:22:59.059 "config": [] 00:22:59.059 }, 00:22:59.059 { 00:22:59.059 "subsystem": "accel", 00:22:59.059 "config": [ 00:22:59.059 { 00:22:59.059 "method": "accel_set_options", 00:22:59.059 "params": { 00:22:59.059 "small_cache_size": 128, 00:22:59.059 "large_cache_size": 16, 00:22:59.059 "task_count": 2048, 00:22:59.059 "sequence_count": 2048, 00:22:59.059 "buf_count": 2048 00:22:59.059 } 00:22:59.059 } 00:22:59.059 ] 00:22:59.059 }, 00:22:59.059 { 00:22:59.059 "subsystem": "bdev", 00:22:59.059 "config": [ 00:22:59.059 { 00:22:59.059 "method": "bdev_set_options", 00:22:59.059 "params": { 00:22:59.059 "bdev_io_pool_size": 65535, 00:22:59.059 "bdev_io_cache_size": 256, 00:22:59.059 "bdev_auto_examine": true, 00:22:59.059 "iobuf_small_cache_size": 128, 00:22:59.059 "iobuf_large_cache_size": 16 00:22:59.059 } 00:22:59.059 }, 00:22:59.059 { 00:22:59.059 "method": "bdev_raid_set_options", 00:22:59.059 "params": { 00:22:59.059 "process_window_size_kb": 1024 00:22:59.059 } 00:22:59.059 }, 00:22:59.059 { 00:22:59.059 "method": "bdev_iscsi_set_options", 00:22:59.059 "params": { 00:22:59.059 "timeout_sec": 30 00:22:59.059 } 00:22:59.059 }, 00:22:59.059 { 00:22:59.059 "method": "bdev_nvme_set_options", 00:22:59.059 "params": { 00:22:59.059 "action_on_timeout": "none", 00:22:59.059 "timeout_us": 0, 00:22:59.059 "timeout_admin_us": 0, 00:22:59.059 "keep_alive_timeout_ms": 10000, 00:22:59.059 "arbitration_burst": 0, 00:22:59.059 "low_priority_weight": 0, 00:22:59.059 "medium_priority_weight": 0, 00:22:59.059 "high_priority_weight": 0, 00:22:59.059 "nvme_adminq_poll_period_us": 10000, 00:22:59.059 "nvme_ioq_poll_period_us": 0, 00:22:59.059 "io_queue_requests": 0, 00:22:59.059 "delay_cmd_submit": true, 00:22:59.059 "transport_retry_count": 4, 00:22:59.059 "bdev_retry_count": 3, 00:22:59.059 "transport_ack_timeout": 0, 00:22:59.059 "ctrlr_loss_timeout_sec": 0, 00:22:59.059 "reconnect_delay_sec": 0, 00:22:59.059 "fast_io_fail_timeout_sec": 0, 00:22:59.059 "disable_auto_failback": false, 00:22:59.059 "generate_uuids": false, 00:22:59.059 "transport_tos": 0, 00:22:59.059 "nvme_error_stat": false, 00:22:59.059 "rdma_srq_size": 0, 00:22:59.059 "io_path_stat": false, 00:22:59.059 "allow_accel_sequence": false, 00:22:59.059 "rdma_max_cq_size": 0, 00:22:59.059 "rdma_cm_event_timeout_ms": 0, 00:22:59.059 "dhchap_digests": [ 00:22:59.059 "sha256", 00:22:59.059 "sha384", 00:22:59.059 "sha512" 00:22:59.059 ], 00:22:59.059 "dhchap_dhgroups": [ 00:22:59.059 "null", 00:22:59.059 "ffdhe2048", 00:22:59.059 "ffdhe3072", 00:22:59.059 "ffdhe4096", 00:22:59.059 "ffdhe6144", 00:22:59.059 "ffdhe8192" 00:22:59.059 ] 00:22:59.059 } 00:22:59.059 }, 00:22:59.059 { 00:22:59.059 "method": "bdev_nvme_set_hotplug", 00:22:59.059 "params": { 00:22:59.059 "period_us": 100000, 00:22:59.059 "enable": false 00:22:59.059 } 00:22:59.059 }, 00:22:59.059 { 00:22:59.059 "method": "bdev_malloc_create", 00:22:59.059 "params": { 00:22:59.059 "name": "malloc0", 00:22:59.059 "num_blocks": 8192, 00:22:59.059 "block_size": 4096, 00:22:59.059 "physical_block_size": 4096, 00:22:59.059 "uuid": 
"52e19430-8bee-4d23-82a0-e0b82e523eca", 00:22:59.059 "optimal_io_boundary": 0 00:22:59.059 } 00:22:59.059 }, 00:22:59.059 { 00:22:59.059 "method": "bdev_wait_for_examine" 00:22:59.059 } 00:22:59.059 ] 00:22:59.059 }, 00:22:59.059 { 00:22:59.059 "subsystem": "nbd", 00:22:59.059 "config": [] 00:22:59.059 }, 00:22:59.059 { 00:22:59.059 "subsystem": "scheduler", 00:22:59.059 "config": [ 00:22:59.059 { 00:22:59.059 "method": "framework_set_scheduler", 00:22:59.059 "params": { 00:22:59.059 "name": "static" 00:22:59.059 } 00:22:59.059 } 00:22:59.059 ] 00:22:59.059 }, 00:22:59.059 { 00:22:59.059 "subsystem": "nvmf", 00:22:59.059 "config": [ 00:22:59.059 { 00:22:59.059 "method": "nvmf_set_config", 00:22:59.059 "params": { 00:22:59.059 "discovery_filter": "match_any", 00:22:59.059 "admin_cmd_passthru": { 00:22:59.059 "identify_ctrlr": false 00:22:59.059 } 00:22:59.059 } 00:22:59.059 }, 00:22:59.059 { 00:22:59.059 "method": "nvmf_set_max_subsystems", 00:22:59.059 "params": { 00:22:59.059 "max_subsystems": 1024 00:22:59.059 } 00:22:59.059 }, 00:22:59.059 { 00:22:59.059 "method": "nvmf_set_crdt", 00:22:59.059 "params": { 00:22:59.059 "crdt1": 0, 00:22:59.059 "crdt2": 0, 00:22:59.059 "crdt3": 0 00:22:59.059 } 00:22:59.059 }, 00:22:59.059 { 00:22:59.059 "method": "nvmf_create_transport", 00:22:59.059 "params": { 00:22:59.059 "trtype": "TCP", 00:22:59.059 "max_queue_depth": 128, 00:22:59.059 "max_io_qpairs_per_ctrlr": 127, 00:22:59.059 "in_capsule_data_size": 4096, 00:22:59.060 "max_io_size": 131072, 00:22:59.060 "io_unit_size": 131072, 00:22:59.060 "max_aq_depth": 128, 00:22:59.060 "num_shared_buffers": 511, 00:22:59.060 "buf_cache_size": 4294967295, 00:22:59.060 "dif_insert_or_strip": false, 00:22:59.060 "zcopy": false, 00:22:59.060 "c2h_success": false, 00:22:59.060 "sock_priority": 0, 00:22:59.060 "abort_timeout_sec": 1, 00:22:59.060 "ack_timeout": 0, 00:22:59.060 "data_wr_pool_size": 0 00:22:59.060 } 00:22:59.060 }, 00:22:59.060 { 00:22:59.060 "method": "nvmf_create_subsystem", 00:22:59.060 "params": { 00:22:59.060 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:59.060 "allow_any_host": false, 00:22:59.060 "serial_number": "SPDK00000000000001", 00:22:59.060 "model_number": "SPDK bdev Controller", 00:22:59.060 "max_namespaces": 10, 00:22:59.060 "min_cntlid": 1, 00:22:59.060 "max_cntlid": 65519, 00:22:59.060 "ana_reporting": false 00:22:59.060 } 00:22:59.060 }, 00:22:59.060 { 00:22:59.060 "method": "nvmf_subsystem_add_host", 00:22:59.060 "params": { 00:22:59.060 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:59.060 "host": "nqn.2016-06.io.spdk:host1", 00:22:59.060 "psk": "/tmp/tmp.Hwd1pvVHUG" 00:22:59.060 } 00:22:59.060 }, 00:22:59.060 { 00:22:59.060 "method": "nvmf_subsystem_add_ns", 00:22:59.060 "params": { 00:22:59.060 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:59.060 "namespace": { 00:22:59.060 "nsid": 1, 00:22:59.060 "bdev_name": "malloc0", 00:22:59.060 "nguid": "52E194308BEE4D2382A0E0B82E523ECA", 00:22:59.060 "uuid": "52e19430-8bee-4d23-82a0-e0b82e523eca", 00:22:59.060 "no_auto_visible": false 00:22:59.060 } 00:22:59.060 } 00:22:59.060 }, 00:22:59.060 { 00:22:59.060 "method": "nvmf_subsystem_add_listener", 00:22:59.060 "params": { 00:22:59.060 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:59.060 "listen_address": { 00:22:59.060 "trtype": "TCP", 00:22:59.060 "adrfam": "IPv4", 00:22:59.060 "traddr": "10.0.0.2", 00:22:59.060 "trsvcid": "4420" 00:22:59.060 }, 00:22:59.060 "secure_channel": true 00:22:59.060 } 00:22:59.060 } 00:22:59.060 ] 00:22:59.060 } 00:22:59.060 ] 00:22:59.060 }' 00:22:59.060 08:09:50 
nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:22:59.318 08:09:51 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:22:59.318 "subsystems": [ 00:22:59.318 { 00:22:59.318 "subsystem": "keyring", 00:22:59.318 "config": [] 00:22:59.318 }, 00:22:59.318 { 00:22:59.318 "subsystem": "iobuf", 00:22:59.318 "config": [ 00:22:59.318 { 00:22:59.318 "method": "iobuf_set_options", 00:22:59.318 "params": { 00:22:59.318 "small_pool_count": 8192, 00:22:59.318 "large_pool_count": 1024, 00:22:59.318 "small_bufsize": 8192, 00:22:59.318 "large_bufsize": 135168 00:22:59.318 } 00:22:59.318 } 00:22:59.318 ] 00:22:59.318 }, 00:22:59.318 { 00:22:59.318 "subsystem": "sock", 00:22:59.318 "config": [ 00:22:59.318 { 00:22:59.318 "method": "sock_set_default_impl", 00:22:59.318 "params": { 00:22:59.318 "impl_name": "posix" 00:22:59.318 } 00:22:59.318 }, 00:22:59.318 { 00:22:59.318 "method": "sock_impl_set_options", 00:22:59.318 "params": { 00:22:59.318 "impl_name": "ssl", 00:22:59.318 "recv_buf_size": 4096, 00:22:59.318 "send_buf_size": 4096, 00:22:59.318 "enable_recv_pipe": true, 00:22:59.318 "enable_quickack": false, 00:22:59.318 "enable_placement_id": 0, 00:22:59.318 "enable_zerocopy_send_server": true, 00:22:59.318 "enable_zerocopy_send_client": false, 00:22:59.318 "zerocopy_threshold": 0, 00:22:59.318 "tls_version": 0, 00:22:59.318 "enable_ktls": false 00:22:59.318 } 00:22:59.318 }, 00:22:59.318 { 00:22:59.318 "method": "sock_impl_set_options", 00:22:59.318 "params": { 00:22:59.318 "impl_name": "posix", 00:22:59.318 "recv_buf_size": 2097152, 00:22:59.318 "send_buf_size": 2097152, 00:22:59.318 "enable_recv_pipe": true, 00:22:59.318 "enable_quickack": false, 00:22:59.318 "enable_placement_id": 0, 00:22:59.318 "enable_zerocopy_send_server": true, 00:22:59.318 "enable_zerocopy_send_client": false, 00:22:59.318 "zerocopy_threshold": 0, 00:22:59.318 "tls_version": 0, 00:22:59.318 "enable_ktls": false 00:22:59.318 } 00:22:59.318 } 00:22:59.318 ] 00:22:59.318 }, 00:22:59.318 { 00:22:59.318 "subsystem": "vmd", 00:22:59.318 "config": [] 00:22:59.318 }, 00:22:59.318 { 00:22:59.318 "subsystem": "accel", 00:22:59.318 "config": [ 00:22:59.318 { 00:22:59.318 "method": "accel_set_options", 00:22:59.318 "params": { 00:22:59.318 "small_cache_size": 128, 00:22:59.318 "large_cache_size": 16, 00:22:59.318 "task_count": 2048, 00:22:59.318 "sequence_count": 2048, 00:22:59.318 "buf_count": 2048 00:22:59.318 } 00:22:59.318 } 00:22:59.318 ] 00:22:59.318 }, 00:22:59.318 { 00:22:59.318 "subsystem": "bdev", 00:22:59.318 "config": [ 00:22:59.318 { 00:22:59.318 "method": "bdev_set_options", 00:22:59.318 "params": { 00:22:59.318 "bdev_io_pool_size": 65535, 00:22:59.318 "bdev_io_cache_size": 256, 00:22:59.318 "bdev_auto_examine": true, 00:22:59.318 "iobuf_small_cache_size": 128, 00:22:59.318 "iobuf_large_cache_size": 16 00:22:59.318 } 00:22:59.318 }, 00:22:59.318 { 00:22:59.318 "method": "bdev_raid_set_options", 00:22:59.318 "params": { 00:22:59.318 "process_window_size_kb": 1024 00:22:59.318 } 00:22:59.318 }, 00:22:59.318 { 00:22:59.318 "method": "bdev_iscsi_set_options", 00:22:59.318 "params": { 00:22:59.318 "timeout_sec": 30 00:22:59.318 } 00:22:59.318 }, 00:22:59.318 { 00:22:59.318 "method": "bdev_nvme_set_options", 00:22:59.318 "params": { 00:22:59.318 "action_on_timeout": "none", 00:22:59.318 "timeout_us": 0, 00:22:59.318 "timeout_admin_us": 0, 00:22:59.318 "keep_alive_timeout_ms": 10000, 00:22:59.318 "arbitration_burst": 0, 
00:22:59.318 "low_priority_weight": 0, 00:22:59.318 "medium_priority_weight": 0, 00:22:59.318 "high_priority_weight": 0, 00:22:59.318 "nvme_adminq_poll_period_us": 10000, 00:22:59.318 "nvme_ioq_poll_period_us": 0, 00:22:59.318 "io_queue_requests": 512, 00:22:59.318 "delay_cmd_submit": true, 00:22:59.318 "transport_retry_count": 4, 00:22:59.318 "bdev_retry_count": 3, 00:22:59.318 "transport_ack_timeout": 0, 00:22:59.318 "ctrlr_loss_timeout_sec": 0, 00:22:59.318 "reconnect_delay_sec": 0, 00:22:59.318 "fast_io_fail_timeout_sec": 0, 00:22:59.318 "disable_auto_failback": false, 00:22:59.318 "generate_uuids": false, 00:22:59.318 "transport_tos": 0, 00:22:59.318 "nvme_error_stat": false, 00:22:59.318 "rdma_srq_size": 0, 00:22:59.318 "io_path_stat": false, 00:22:59.318 "allow_accel_sequence": false, 00:22:59.318 "rdma_max_cq_size": 0, 00:22:59.318 "rdma_cm_event_timeout_ms": 0, 00:22:59.318 "dhchap_digests": [ 00:22:59.318 "sha256", 00:22:59.318 "sha384", 00:22:59.318 "sha512" 00:22:59.318 ], 00:22:59.318 "dhchap_dhgroups": [ 00:22:59.318 "null", 00:22:59.318 "ffdhe2048", 00:22:59.318 "ffdhe3072", 00:22:59.318 "ffdhe4096", 00:22:59.318 "ffdhe6144", 00:22:59.318 "ffdhe8192" 00:22:59.318 ] 00:22:59.318 } 00:22:59.318 }, 00:22:59.318 { 00:22:59.318 "method": "bdev_nvme_attach_controller", 00:22:59.318 "params": { 00:22:59.318 "name": "TLSTEST", 00:22:59.318 "trtype": "TCP", 00:22:59.318 "adrfam": "IPv4", 00:22:59.318 "traddr": "10.0.0.2", 00:22:59.318 "trsvcid": "4420", 00:22:59.318 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:59.318 "prchk_reftag": false, 00:22:59.318 "prchk_guard": false, 00:22:59.318 "ctrlr_loss_timeout_sec": 0, 00:22:59.318 "reconnect_delay_sec": 0, 00:22:59.318 "fast_io_fail_timeout_sec": 0, 00:22:59.318 "psk": "/tmp/tmp.Hwd1pvVHUG", 00:22:59.318 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:59.318 "hdgst": false, 00:22:59.318 "ddgst": false 00:22:59.318 } 00:22:59.318 }, 00:22:59.318 { 00:22:59.318 "method": "bdev_nvme_set_hotplug", 00:22:59.318 "params": { 00:22:59.318 "period_us": 100000, 00:22:59.318 "enable": false 00:22:59.318 } 00:22:59.318 }, 00:22:59.318 { 00:22:59.318 "method": "bdev_wait_for_examine" 00:22:59.318 } 00:22:59.318 ] 00:22:59.318 }, 00:22:59.318 { 00:22:59.318 "subsystem": "nbd", 00:22:59.318 "config": [] 00:22:59.318 } 00:22:59.318 ] 00:22:59.318 }' 00:22:59.318 08:09:51 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 1998369 00:22:59.318 08:09:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1998369 ']' 00:22:59.318 08:09:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1998369 00:22:59.318 08:09:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:59.318 08:09:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:59.318 08:09:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1998369 00:22:59.318 08:09:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:59.318 08:09:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:59.318 08:09:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1998369' 00:22:59.318 killing process with pid 1998369 00:22:59.318 08:09:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1998369 00:22:59.318 Received shutdown signal, test time was about 10.000000 seconds 00:22:59.318 00:22:59.318 Latency(us) 00:22:59.318 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min 
max 00:22:59.318 =================================================================================================================== 00:22:59.319 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:59.319 [2024-07-13 08:09:51.040189] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:59.319 08:09:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1998369 00:22:59.576 08:09:51 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 1998121 00:22:59.576 08:09:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1998121 ']' 00:22:59.576 08:09:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1998121 00:22:59.576 08:09:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:59.576 08:09:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:59.576 08:09:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1998121 00:22:59.576 08:09:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:59.576 08:09:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:59.576 08:09:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1998121' 00:22:59.576 killing process with pid 1998121 00:22:59.576 08:09:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1998121 00:22:59.576 [2024-07-13 08:09:51.273647] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:59.576 08:09:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1998121 00:22:59.834 08:09:51 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:22:59.834 08:09:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:59.834 08:09:51 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:22:59.834 "subsystems": [ 00:22:59.834 { 00:22:59.834 "subsystem": "keyring", 00:22:59.834 "config": [] 00:22:59.834 }, 00:22:59.834 { 00:22:59.834 "subsystem": "iobuf", 00:22:59.834 "config": [ 00:22:59.834 { 00:22:59.834 "method": "iobuf_set_options", 00:22:59.834 "params": { 00:22:59.834 "small_pool_count": 8192, 00:22:59.834 "large_pool_count": 1024, 00:22:59.834 "small_bufsize": 8192, 00:22:59.834 "large_bufsize": 135168 00:22:59.834 } 00:22:59.834 } 00:22:59.834 ] 00:22:59.834 }, 00:22:59.834 { 00:22:59.834 "subsystem": "sock", 00:22:59.834 "config": [ 00:22:59.834 { 00:22:59.834 "method": "sock_set_default_impl", 00:22:59.834 "params": { 00:22:59.834 "impl_name": "posix" 00:22:59.834 } 00:22:59.834 }, 00:22:59.834 { 00:22:59.834 "method": "sock_impl_set_options", 00:22:59.834 "params": { 00:22:59.834 "impl_name": "ssl", 00:22:59.834 "recv_buf_size": 4096, 00:22:59.834 "send_buf_size": 4096, 00:22:59.834 "enable_recv_pipe": true, 00:22:59.834 "enable_quickack": false, 00:22:59.834 "enable_placement_id": 0, 00:22:59.834 "enable_zerocopy_send_server": true, 00:22:59.834 "enable_zerocopy_send_client": false, 00:22:59.834 "zerocopy_threshold": 0, 00:22:59.834 "tls_version": 0, 00:22:59.834 "enable_ktls": false 00:22:59.834 } 00:22:59.834 }, 00:22:59.834 { 00:22:59.834 "method": "sock_impl_set_options", 00:22:59.834 "params": { 00:22:59.834 "impl_name": "posix", 00:22:59.834 "recv_buf_size": 2097152, 00:22:59.834 "send_buf_size": 2097152, 00:22:59.834 "enable_recv_pipe": true, 
00:22:59.834 "enable_quickack": false, 00:22:59.834 "enable_placement_id": 0, 00:22:59.834 "enable_zerocopy_send_server": true, 00:22:59.834 "enable_zerocopy_send_client": false, 00:22:59.834 "zerocopy_threshold": 0, 00:22:59.834 "tls_version": 0, 00:22:59.834 "enable_ktls": false 00:22:59.834 } 00:22:59.834 } 00:22:59.834 ] 00:22:59.834 }, 00:22:59.834 { 00:22:59.834 "subsystem": "vmd", 00:22:59.834 "config": [] 00:22:59.834 }, 00:22:59.834 { 00:22:59.834 "subsystem": "accel", 00:22:59.834 "config": [ 00:22:59.834 { 00:22:59.834 "method": "accel_set_options", 00:22:59.834 "params": { 00:22:59.834 "small_cache_size": 128, 00:22:59.834 "large_cache_size": 16, 00:22:59.834 "task_count": 2048, 00:22:59.834 "sequence_count": 2048, 00:22:59.834 "buf_count": 2048 00:22:59.834 } 00:22:59.834 } 00:22:59.834 ] 00:22:59.834 }, 00:22:59.834 { 00:22:59.834 "subsystem": "bdev", 00:22:59.834 "config": [ 00:22:59.834 { 00:22:59.834 "method": "bdev_set_options", 00:22:59.834 "params": { 00:22:59.834 "bdev_io_pool_size": 65535, 00:22:59.834 "bdev_io_cache_size": 256, 00:22:59.834 "bdev_auto_examine": true, 00:22:59.834 "iobuf_small_cache_size": 128, 00:22:59.834 "iobuf_large_cache_size": 16 00:22:59.834 } 00:22:59.834 }, 00:22:59.834 { 00:22:59.834 "method": "bdev_raid_set_options", 00:22:59.834 "params": { 00:22:59.834 "process_window_size_kb": 1024 00:22:59.834 } 00:22:59.834 }, 00:22:59.834 { 00:22:59.834 "method": "bdev_iscsi_set_options", 00:22:59.834 "params": { 00:22:59.834 "timeout_sec": 30 00:22:59.834 } 00:22:59.834 }, 00:22:59.834 { 00:22:59.834 "method": "bdev_nvme_set_options", 00:22:59.835 "params": { 00:22:59.835 "action_on_timeout": "none", 00:22:59.835 "timeout_us": 0, 00:22:59.835 "timeout_admin_us": 0, 00:22:59.835 "keep_alive_timeout_ms": 10000, 00:22:59.835 "arbitration_burst": 0, 00:22:59.835 "low_priority_weight": 0, 00:22:59.835 "medium_priority_weight": 0, 00:22:59.835 "high_priority_weight": 0, 00:22:59.835 "nvme_adminq_poll_period_us": 10000, 00:22:59.835 "nvme_ioq_poll_period_us": 0, 00:22:59.835 "io_queue_requests": 0, 00:22:59.835 "delay_cmd_submit": true, 00:22:59.835 "transport_retry_count": 4, 00:22:59.835 "bdev_retry_count": 3, 00:22:59.835 "transport_ack_timeout": 0, 00:22:59.835 "ctrlr_loss_timeout_sec": 0, 00:22:59.835 "reconnect_delay_sec": 0, 00:22:59.835 "fast_io_fail_timeout_sec": 0, 00:22:59.835 "disable_auto_failback": false, 00:22:59.835 "generate_uuids": false, 00:22:59.835 "transport_tos": 0, 00:22:59.835 "nvme_error_stat": false, 00:22:59.835 "rdma_srq_size": 0, 00:22:59.835 "io_path_stat": false, 00:22:59.835 "allow_accel_sequence": false, 00:22:59.835 "rdma_max_cq_size": 0, 00:22:59.835 "rdma_cm_event_timeout_ms": 0, 00:22:59.835 "dhchap_digests": [ 00:22:59.835 "sha256", 00:22:59.835 "sha384", 00:22:59.835 "sha512" 00:22:59.835 ], 00:22:59.835 "dhchap_dhgroups": [ 00:22:59.835 "null", 00:22:59.835 "ffdhe2048", 00:22:59.835 "ffdhe3072", 00:22:59.835 "ffdhe4096", 00:22:59.835 "ffdhe6144", 00:22:59.835 "ffdhe8192" 00:22:59.835 ] 00:22:59.835 } 00:22:59.835 }, 00:22:59.835 { 00:22:59.835 "method": "bdev_nvme_set_hotplug", 00:22:59.835 "params": { 00:22:59.835 "period_us": 100000, 00:22:59.835 "enable": false 00:22:59.835 } 00:22:59.835 }, 00:22:59.835 { 00:22:59.835 "method": "bdev_malloc_create", 00:22:59.835 "params": { 00:22:59.835 "name": "malloc0", 00:22:59.835 "num_blocks": 8192, 00:22:59.835 "block_size": 4096, 00:22:59.835 "physical_block_size": 4096, 00:22:59.835 "uuid": "52e19430-8bee-4d23-82a0-e0b82e523eca", 00:22:59.835 "optimal_io_boundary": 0 
00:22:59.835 } 00:22:59.835 }, 00:22:59.835 { 00:22:59.835 "method": "bdev_wait_for_examine" 00:22:59.835 } 00:22:59.835 ] 00:22:59.835 }, 00:22:59.835 { 00:22:59.835 "subsystem": "nbd", 00:22:59.835 "config": [] 00:22:59.835 }, 00:22:59.835 { 00:22:59.835 "subsystem": "scheduler", 00:22:59.835 "config": [ 00:22:59.835 { 00:22:59.835 "method": "framework_set_scheduler", 00:22:59.835 "params": { 00:22:59.835 "name": "static" 00:22:59.835 } 00:22:59.835 } 00:22:59.835 ] 00:22:59.835 }, 00:22:59.835 { 00:22:59.835 "subsystem": "nvmf", 00:22:59.835 "config": [ 00:22:59.835 { 00:22:59.835 "method": "nvmf_set_config", 00:22:59.835 "params": { 00:22:59.835 "discovery_filter": "match_any", 00:22:59.835 "admin_cmd_passthru": { 00:22:59.835 "identify_ctrlr": false 00:22:59.835 } 00:22:59.835 } 00:22:59.835 }, 00:22:59.835 { 00:22:59.835 "method": "nvmf_set_max_subsystems", 00:22:59.835 "params": { 00:22:59.835 "max_subsystems": 1024 00:22:59.835 } 00:22:59.835 }, 00:22:59.835 { 00:22:59.835 "method": "nvmf_set_crdt", 00:22:59.835 "params": { 00:22:59.835 "crdt1": 0, 00:22:59.835 "crdt2": 0, 00:22:59.835 "crdt3": 0 00:22:59.835 } 00:22:59.835 }, 00:22:59.835 { 00:22:59.835 "method": "nvmf_create_transport", 00:22:59.835 "params": { 00:22:59.835 "trtype": "TCP", 00:22:59.835 "max_queue_depth": 128, 00:22:59.835 "max_io_qpairs_per_ctrlr": 127, 00:22:59.835 "in_capsule_data_size": 4096, 00:22:59.835 "max_io_size": 131072, 00:22:59.835 "io_unit_size": 131072, 00:22:59.835 "max_aq_depth": 128, 00:22:59.835 "num_shared_buffers": 511, 00:22:59.835 "buf_cache_size": 4294967295, 00:22:59.835 "dif_insert_or_strip": false, 00:22:59.835 "zcopy": false, 00:22:59.835 "c2h_success": false, 00:22:59.835 "sock_priority": 0, 00:22:59.835 "abort_timeout_sec": 1, 00:22:59.835 "ack_timeout": 0, 00:22:59.835 "data_wr_pool_size": 0 00:22:59.835 } 00:22:59.835 }, 00:22:59.835 { 00:22:59.835 "method": "nvmf_create_subsystem", 00:22:59.835 "params": { 00:22:59.835 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:59.835 "allow_any_host": false, 00:22:59.835 "serial_number": "SPDK00000000000001", 00:22:59.835 "model_number": "SPDK bdev Controller", 00:22:59.835 "max_namespaces": 10, 00:22:59.835 "min_cntlid": 1, 00:22:59.835 "max_cntlid": 65519, 00:22:59.835 "ana_reporting": false 00:22:59.835 } 00:22:59.835 }, 00:22:59.835 { 00:22:59.835 "method": "nvmf_subsystem_add_host", 00:22:59.835 "params": { 00:22:59.835 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:59.835 "host": "nqn.2016-06.io.spdk:host1", 00:22:59.835 "psk": "/tmp/tmp.Hwd1pvVHUG" 00:22:59.835 } 00:22:59.835 }, 00:22:59.835 { 00:22:59.835 "method": "nvmf_subsystem_add_ns", 00:22:59.835 "params": { 00:22:59.835 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:59.835 "namespace": { 00:22:59.835 "nsid": 1, 00:22:59.835 "bdev_name": "malloc0", 00:22:59.835 "nguid": "52E194308BEE4D2382A0E0B82E523ECA", 00:22:59.835 "uuid": "52e19430-8bee-4d23-82a0-e0b82e523eca", 00:22:59.835 "no_auto_visible": false 00:22:59.835 } 00:22:59.835 } 00:22:59.835 }, 00:22:59.835 { 00:22:59.835 "method": "nvmf_subsystem_add_listener", 00:22:59.835 "params": { 00:22:59.835 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:59.835 "listen_address": { 00:22:59.835 "trtype": "TCP", 00:22:59.835 "adrfam": "IPv4", 00:22:59.835 "traddr": "10.0.0.2", 00:22:59.835 "trsvcid": "4420" 00:22:59.835 }, 00:22:59.835 "secure_channel": true 00:22:59.835 } 00:22:59.835 } 00:22:59.835 ] 00:22:59.835 } 00:22:59.835 ] 00:22:59.835 }' 00:22:59.835 08:09:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:59.835 
08:09:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:59.835 08:09:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1998638 00:22:59.835 08:09:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:22:59.835 08:09:51 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1998638 00:22:59.835 08:09:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1998638 ']' 00:22:59.835 08:09:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:59.835 08:09:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:59.835 08:09:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:59.835 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:59.835 08:09:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:59.835 08:09:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:00.094 [2024-07-13 08:09:51.571207] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:23:00.094 [2024-07-13 08:09:51.571328] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:00.094 EAL: No free 2048 kB hugepages reported on node 1 00:23:00.094 [2024-07-13 08:09:51.640605] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:00.094 [2024-07-13 08:09:51.729254] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:00.094 [2024-07-13 08:09:51.729317] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:00.094 [2024-07-13 08:09:51.729341] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:00.094 [2024-07-13 08:09:51.729354] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:00.094 [2024-07-13 08:09:51.729365] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
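The app_setup_trace notices above can be acted on while the target is still running. A minimal sketch of doing so; the spdk_trace binary location is an assumption based on this job's build layout, not something the log shows:

# Sketch only: capture the tracepoint data the notices above refer to.
# Assumption: spdk_trace was built into build/bin of this checkout.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
$SPDK/build/bin/spdk_trace -s nvmf -i 0 > nvmf_trace.txt  # live snapshot of app 'nvmf', instance 0
cp /dev/shm/nvmf_trace.0 .                                # or keep the raw shm file for offline analysis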
00:23:00.094 [2024-07-13 08:09:51.729853] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:00.351 [2024-07-13 08:09:51.967161] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:00.351 [2024-07-13 08:09:51.983112] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:00.351 [2024-07-13 08:09:51.999170] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:00.351 [2024-07-13 08:09:52.010071] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:00.917 08:09:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:00.917 08:09:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:00.917 08:09:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:00.917 08:09:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:00.917 08:09:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:00.917 08:09:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:00.917 08:09:52 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=1998789 00:23:00.917 08:09:52 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 1998789 /var/tmp/bdevperf.sock 00:23:00.917 08:09:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1998789 ']' 00:23:00.917 08:09:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:00.917 08:09:52 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:23:00.917 08:09:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:00.917 08:09:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:00.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
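The target-side configuration echoed into nvmf_tgt above (ending in nvmf_subsystem_add_listener with "secure_channel": true) maps one-to-one onto plain rpc.py calls, and later stages of this run issue the same sequence by hand. A rough equivalent sketch, assuming the PSK file created earlier in the run:

# Sketch: the TLS target setup described by the JSON config above,
# expressed as individual RPCs (the same commands appear later in this log).
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o                        # TCP transport; -o disables C2H success ("c2h_success": false)
$RPC bdev_malloc_create 32 4096 -b malloc0                  # malloc bdev backing namespace 1
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Hwd1pvVHUG
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k: TLS listener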
00:23:00.917 08:09:52 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:23:00.917 "subsystems": [ 00:23:00.917 { 00:23:00.917 "subsystem": "keyring", 00:23:00.917 "config": [] 00:23:00.917 }, 00:23:00.917 { 00:23:00.917 "subsystem": "iobuf", 00:23:00.917 "config": [ 00:23:00.917 { 00:23:00.917 "method": "iobuf_set_options", 00:23:00.917 "params": { 00:23:00.917 "small_pool_count": 8192, 00:23:00.917 "large_pool_count": 1024, 00:23:00.917 "small_bufsize": 8192, 00:23:00.917 "large_bufsize": 135168 00:23:00.917 } 00:23:00.917 } 00:23:00.917 ] 00:23:00.917 }, 00:23:00.917 { 00:23:00.917 "subsystem": "sock", 00:23:00.917 "config": [ 00:23:00.917 { 00:23:00.917 "method": "sock_set_default_impl", 00:23:00.917 "params": { 00:23:00.917 "impl_name": "posix" 00:23:00.917 } 00:23:00.917 }, 00:23:00.917 { 00:23:00.917 "method": "sock_impl_set_options", 00:23:00.917 "params": { 00:23:00.917 "impl_name": "ssl", 00:23:00.917 "recv_buf_size": 4096, 00:23:00.917 "send_buf_size": 4096, 00:23:00.917 "enable_recv_pipe": true, 00:23:00.917 "enable_quickack": false, 00:23:00.917 "enable_placement_id": 0, 00:23:00.917 "enable_zerocopy_send_server": true, 00:23:00.917 "enable_zerocopy_send_client": false, 00:23:00.917 "zerocopy_threshold": 0, 00:23:00.917 "tls_version": 0, 00:23:00.917 "enable_ktls": false 00:23:00.917 } 00:23:00.917 }, 00:23:00.917 { 00:23:00.917 "method": "sock_impl_set_options", 00:23:00.917 "params": { 00:23:00.917 "impl_name": "posix", 00:23:00.917 "recv_buf_size": 2097152, 00:23:00.917 "send_buf_size": 2097152, 00:23:00.917 "enable_recv_pipe": true, 00:23:00.917 "enable_quickack": false, 00:23:00.917 "enable_placement_id": 0, 00:23:00.917 "enable_zerocopy_send_server": true, 00:23:00.917 "enable_zerocopy_send_client": false, 00:23:00.917 "zerocopy_threshold": 0, 00:23:00.917 "tls_version": 0, 00:23:00.917 "enable_ktls": false 00:23:00.917 } 00:23:00.917 } 00:23:00.917 ] 00:23:00.917 }, 00:23:00.917 { 00:23:00.917 "subsystem": "vmd", 00:23:00.917 "config": [] 00:23:00.917 }, 00:23:00.917 { 00:23:00.917 "subsystem": "accel", 00:23:00.917 "config": [ 00:23:00.917 { 00:23:00.917 "method": "accel_set_options", 00:23:00.917 "params": { 00:23:00.917 "small_cache_size": 128, 00:23:00.917 "large_cache_size": 16, 00:23:00.917 "task_count": 2048, 00:23:00.917 "sequence_count": 2048, 00:23:00.917 "buf_count": 2048 00:23:00.917 } 00:23:00.917 } 00:23:00.917 ] 00:23:00.917 }, 00:23:00.917 { 00:23:00.917 "subsystem": "bdev", 00:23:00.917 "config": [ 00:23:00.917 { 00:23:00.917 "method": "bdev_set_options", 00:23:00.917 "params": { 00:23:00.917 "bdev_io_pool_size": 65535, 00:23:00.917 "bdev_io_cache_size": 256, 00:23:00.917 "bdev_auto_examine": true, 00:23:00.917 "iobuf_small_cache_size": 128, 00:23:00.917 "iobuf_large_cache_size": 16 00:23:00.917 } 00:23:00.917 }, 00:23:00.917 { 00:23:00.917 "method": "bdev_raid_set_options", 00:23:00.917 "params": { 00:23:00.917 "process_window_size_kb": 1024 00:23:00.917 } 00:23:00.917 }, 00:23:00.917 { 00:23:00.917 "method": "bdev_iscsi_set_options", 00:23:00.917 "params": { 00:23:00.917 "timeout_sec": 30 00:23:00.917 } 00:23:00.917 }, 00:23:00.917 { 00:23:00.917 "method": "bdev_nvme_set_options", 00:23:00.917 "params": { 00:23:00.917 "action_on_timeout": "none", 00:23:00.917 "timeout_us": 0, 00:23:00.917 "timeout_admin_us": 0, 00:23:00.917 "keep_alive_timeout_ms": 10000, 00:23:00.917 "arbitration_burst": 0, 00:23:00.917 "low_priority_weight": 0, 00:23:00.917 "medium_priority_weight": 0, 00:23:00.917 "high_priority_weight": 0, 00:23:00.917 
"nvme_adminq_poll_period_us": 10000, 00:23:00.917 "nvme_ioq_poll_period_us": 0, 00:23:00.917 "io_queue_requests": 512, 00:23:00.917 "delay_cmd_submit": true, 00:23:00.917 "transport_retry_count": 4, 00:23:00.917 "bdev_retry_count": 3, 00:23:00.917 "transport_ack_timeout": 0, 00:23:00.917 "ctrlr_loss_timeout_sec": 0, 00:23:00.917 "reconnect_delay_sec": 0, 00:23:00.917 "fast_io_fail_timeout_sec": 0, 00:23:00.917 "disable_auto_failback": false, 00:23:00.917 "generate_uuids": false, 00:23:00.917 "transport_tos": 0, 00:23:00.917 "nvme_error_stat": false, 00:23:00.917 "rdma_srq_size": 0, 00:23:00.917 "io_path_stat": false, 00:23:00.917 "allow_accel_sequence": false, 00:23:00.917 "rdma_max_cq_size": 0, 00:23:00.917 "rdma_cm_event_timeout_ms": 0, 00:23:00.917 "dhchap_digests": [ 00:23:00.918 "sha256", 00:23:00.918 "sha384", 00:23:00.918 "sha512" 00:23:00.918 ], 00:23:00.918 "dhchap_dhgroups": [ 00:23:00.918 "null", 00:23:00.918 "ffdhe2048", 00:23:00.918 "ffdhe3072", 00:23:00.918 "ffdhe4096", 00:23:00.918 "ffdhe6144", 00:23:00.918 "ffdhe8192" 00:23:00.918 ] 00:23:00.918 } 00:23:00.918 }, 00:23:00.918 { 00:23:00.918 "method": "bdev_nvme_attach_controller", 00:23:00.918 "params": { 00:23:00.918 "name": "TLSTEST", 00:23:00.918 "trtype": "TCP", 00:23:00.918 "adrfam": "IPv4", 00:23:00.918 "traddr": "10.0.0.2", 00:23:00.918 "trsvcid": "4420", 00:23:00.918 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:00.918 "prchk_reftag": false, 00:23:00.918 "prchk_guard": false, 00:23:00.918 "ctrlr_loss_timeout_sec": 0, 00:23:00.918 "reconnect_delay_sec": 0, 00:23:00.918 "fast_io_fail_timeout_sec": 0, 00:23:00.918 "psk": "/tmp/tmp.Hwd1pvVHUG", 00:23:00.918 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:00.918 "hdgst": false, 00:23:00.918 "ddgst": false 00:23:00.918 } 00:23:00.918 }, 00:23:00.918 { 00:23:00.918 "method": "bdev_nvme_set_hotplug", 00:23:00.918 "params": { 00:23:00.918 "period_us": 100000, 00:23:00.918 "enable": false 00:23:00.918 } 00:23:00.918 }, 00:23:00.918 { 00:23:00.918 "method": "bdev_wait_for_examine" 00:23:00.918 } 00:23:00.918 ] 00:23:00.918 }, 00:23:00.918 { 00:23:00.918 "subsystem": "nbd", 00:23:00.918 "config": [] 00:23:00.918 } 00:23:00.918 ] 00:23:00.918 }' 00:23:00.918 08:09:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:00.918 08:09:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:00.918 [2024-07-13 08:09:52.589411] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:23:00.918 [2024-07-13 08:09:52.589491] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1998789 ]
00:23:00.918 EAL: No free 2048 kB hugepages reported on node 1
00:23:01.175 [2024-07-13 08:09:52.654803] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:01.175 [2024-07-13 08:09:52.741143] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:23:01.432 [2024-07-13 08:09:52.912108] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:23:01.432 [2024-07-13 08:09:52.912253] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09
00:23:01.996 08:09:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:23:01.996 08:09:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0
00:23:01.996 08:09:53 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests
00:23:01.996 Running I/O for 10 seconds...
00:23:14.195
00:23:14.195 Latency(us)
00:23:14.195 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:14.195 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:23:14.195 Verification LBA range: start 0x0 length 0x2000
00:23:14.195 TLSTESTn1 : 10.04 3087.31 12.06 0.00 0.00 41357.47 6019.60 70293.43
00:23:14.195 ===================================================================================================================
00:23:14.195 Total : 3087.31 12.06 0.00 0.00 41357.47 6019.60 70293.43
00:23:14.195 0
00:23:14.195 08:10:03 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:23:14.195 08:10:03 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 1998789
00:23:14.195 08:10:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1998789 ']'
00:23:14.195 08:10:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1998789
00:23:14.195 08:10:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname
00:23:14.195 08:10:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:23:14.195 08:10:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1998789
00:23:14.195 08:10:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2
00:23:14.195 08:10:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']'
00:23:14.195 08:10:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1998789'
00:23:14.195 killing process with pid 1998789
00:23:14.195 08:10:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1998789
00:23:14.195 Received shutdown signal, test time was about 10.000000 seconds
00:23:14.195
00:23:14.195 Latency(us)
00:23:14.195 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:14.195 ===================================================================================================================
00:23:14.195 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:23:14.195 [2024-07-13 08:10:03.809352] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation
'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:14.195 08:10:03 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1998789 00:23:14.195 08:10:04 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 1998638 00:23:14.195 08:10:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1998638 ']' 00:23:14.195 08:10:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1998638 00:23:14.195 08:10:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:14.195 08:10:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:14.195 08:10:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1998638 00:23:14.195 08:10:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:14.195 08:10:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:14.195 08:10:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1998638' 00:23:14.195 killing process with pid 1998638 00:23:14.195 08:10:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1998638 00:23:14.195 [2024-07-13 08:10:04.061499] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:14.195 08:10:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1998638 00:23:14.195 08:10:04 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:23:14.195 08:10:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:14.195 08:10:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:14.195 08:10:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:14.195 08:10:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2000123 00:23:14.195 08:10:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:14.195 08:10:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2000123 00:23:14.195 08:10:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2000123 ']' 00:23:14.195 08:10:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:14.195 08:10:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:14.195 08:10:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:14.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:14.195 08:10:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:14.195 08:10:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:14.195 [2024-07-13 08:10:04.343609] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
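The killprocess calls traced throughout this section follow a fixed pattern. A rough reconstruction of that helper, inferred from the xtrace output above rather than copied from the autotest sources:

# Inferred sketch of common/autotest_common.sh's killprocess, as traced above.
killprocess() {
    local pid=$1 process_name
    [ -z "$pid" ] && return 1                          # '[' -z <pid> ']'
    kill -0 "$pid" || return 1                         # probe: is it still alive?
    if [ "$(uname)" = Linux ]; then
        process_name=$(ps --no-headers -o comm= "$pid")   # reactor_0/1/2 in this run
    fi
    if [ "$process_name" = sudo ]; then
        :   # the real helper retargets sudo wrappers; branch never taken in this run
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" || true
}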
00:23:14.195 [2024-07-13 08:10:04.343711] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:14.195 EAL: No free 2048 kB hugepages reported on node 1 00:23:14.195 [2024-07-13 08:10:04.411831] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:14.195 [2024-07-13 08:10:04.497819] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:14.195 [2024-07-13 08:10:04.497892] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:14.195 [2024-07-13 08:10:04.497920] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:14.195 [2024-07-13 08:10:04.497933] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:14.195 [2024-07-13 08:10:04.497945] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:14.195 [2024-07-13 08:10:04.497988] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:14.195 08:10:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:14.195 08:10:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:14.195 08:10:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:14.196 08:10:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:14.196 08:10:04 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:14.196 08:10:04 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:14.196 08:10:04 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.Hwd1pvVHUG 00:23:14.196 08:10:04 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.Hwd1pvVHUG 00:23:14.196 08:10:04 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:14.196 [2024-07-13 08:10:04.913673] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:14.196 08:10:04 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:14.196 08:10:05 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:14.196 [2024-07-13 08:10:05.415031] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:14.196 [2024-07-13 08:10:05.415304] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:14.196 08:10:05 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:14.196 malloc0 00:23:14.196 08:10:05 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:14.453 08:10:05 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
--psk /tmp/tmp.Hwd1pvVHUG 00:23:14.453 [2024-07-13 08:10:06.165145] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:14.453 08:10:06 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=2000403 00:23:14.453 08:10:06 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:14.453 08:10:06 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:14.453 08:10:06 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 2000403 /var/tmp/bdevperf.sock 00:23:14.453 08:10:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2000403 ']' 00:23:14.453 08:10:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:14.453 08:10:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:14.453 08:10:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:14.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:14.453 08:10:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:14.453 08:10:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:14.710 [2024-07-13 08:10:06.221775] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:23:14.710 [2024-07-13 08:10:06.221843] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2000403 ] 00:23:14.710 EAL: No free 2048 kB hugepages reported on node 1 00:23:14.710 [2024-07-13 08:10:06.282874] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:14.710 [2024-07-13 08:10:06.374106] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:14.967 08:10:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:14.967 08:10:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:14.967 08:10:06 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Hwd1pvVHUG 00:23:15.224 08:10:06 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:15.224 [2024-07-13 08:10:06.951823] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:15.482 nvme0n1 00:23:15.482 08:10:07 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:15.482 Running I/O for 1 seconds... 
00:23:16.853
00:23:16.853 Latency(us)
00:23:16.853 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:16.853 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:23:16.853 Verification LBA range: start 0x0 length 0x2000
00:23:16.853 nvme0n1 : 1.05 2179.52 8.51 0.00 0.00 57473.63 7718.68 82721.00
00:23:16.853 ===================================================================================================================
00:23:16.853 Total : 2179.52 8.51 0.00 0.00 57473.63 7718.68 82721.00
00:23:16.853 0
00:23:16.853 08:10:08 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 2000403
00:23:16.853 08:10:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2000403 ']'
00:23:16.853 08:10:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2000403
00:23:16.853 08:10:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname
00:23:16.853 08:10:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:23:16.853 08:10:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2000403
00:23:16.853 08:10:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:23:16.853 08:10:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:23:16.853 08:10:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2000403'
00:23:16.853 killing process with pid 2000403
00:23:16.853 08:10:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2000403
00:23:16.853 Received shutdown signal, test time was about 1.000000 seconds
00:23:16.853
00:23:16.853 Latency(us)
00:23:16.853 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:16.853 ===================================================================================================================
00:23:16.853 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:23:16.853 08:10:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2000403
00:23:16.853 08:10:08 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 2000123
00:23:16.853 08:10:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2000123 ']'
00:23:16.853 08:10:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2000123
00:23:16.853 08:10:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname
00:23:16.853 08:10:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:23:16.853 08:10:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2000123
00:23:16.853 08:10:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:23:16.853 08:10:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:23:16.853 08:10:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2000123'
00:23:16.853 killing process with pid 2000123
00:23:16.853 08:10:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2000123
00:23:16.853 [2024-07-13 08:10:08.502497] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times
00:23:16.853 08:10:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2000123
00:23:17.112 08:10:08 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart
00:23:17.112 08:10:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:23:17.112
08:10:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:17.112 08:10:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:17.112 08:10:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2000684 00:23:17.112 08:10:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:17.112 08:10:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2000684 00:23:17.112 08:10:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2000684 ']' 00:23:17.112 08:10:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:17.112 08:10:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:17.112 08:10:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:17.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:17.112 08:10:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:17.112 08:10:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:17.112 [2024-07-13 08:10:08.780359] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:23:17.112 [2024-07-13 08:10:08.780429] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:17.112 EAL: No free 2048 kB hugepages reported on node 1 00:23:17.370 [2024-07-13 08:10:08.845604] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:17.370 [2024-07-13 08:10:08.929136] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:17.370 [2024-07-13 08:10:08.929205] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:17.370 [2024-07-13 08:10:08.929228] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:17.370 [2024-07-13 08:10:08.929239] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:17.370 [2024-07-13 08:10:08.929249] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
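This third target exists to exercise the keyring-based PSK flow: instead of handing bdev_nvme_attach_controller a raw PSK path (the form deprecated in the warnings above), the key file is first registered under a name, as the trace below shows. The client-side pair of RPCs, mirroring the traced commands:

# Sketch: keyring-based TLS attach against the bdevperf RPC socket.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
SOCK=/var/tmp/bdevperf.sock
$RPC -s $SOCK keyring_file_add_key key0 /tmp/tmp.Hwd1pvVHUG   # register the PSK file under the name key0
$RPC -s $SOCK bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1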
00:23:17.370 [2024-07-13 08:10:08.929274] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:17.370 08:10:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:17.370 08:10:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:17.370 08:10:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:17.370 08:10:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:17.370 08:10:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:17.370 08:10:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:17.370 08:10:09 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:23:17.370 08:10:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:17.370 08:10:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:17.370 [2024-07-13 08:10:09.058927] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:17.370 malloc0 00:23:17.370 [2024-07-13 08:10:09.090352] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:17.370 [2024-07-13 08:10:09.090596] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:17.628 08:10:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:17.628 08:10:09 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=2000713 00:23:17.628 08:10:09 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:17.628 08:10:09 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 2000713 /var/tmp/bdevperf.sock 00:23:17.628 08:10:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2000713 ']' 00:23:17.628 08:10:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:17.628 08:10:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:17.628 08:10:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:17.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:17.628 08:10:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:17.628 08:10:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:17.628 [2024-07-13 08:10:09.159372] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
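Because bdevperf is started with -z it sits idle until told to run; the workload is kicked off externally over its RPC socket. A one-line sketch of that driver step, matching the helper invocation in the trace:

# Sketch: trigger the configured verify workload on the idle bdevperf instance.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
$SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests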
00:23:17.628 [2024-07-13 08:10:09.159433] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2000713 ] 00:23:17.628 EAL: No free 2048 kB hugepages reported on node 1 00:23:17.628 [2024-07-13 08:10:09.220245] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:17.629 [2024-07-13 08:10:09.310524] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:17.886 08:10:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:17.886 08:10:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:17.886 08:10:09 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Hwd1pvVHUG 00:23:18.144 08:10:09 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:18.401 [2024-07-13 08:10:09.917524] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:18.401 nvme0n1 00:23:18.401 08:10:10 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:18.401 Running I/O for 1 seconds... 00:23:19.772 00:23:19.772 Latency(us) 00:23:19.772 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:19.772 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:19.772 Verification LBA range: start 0x0 length 0x2000 00:23:19.772 nvme0n1 : 1.04 3015.32 11.78 0.00 0.00 41688.51 11068.30 65633.09 00:23:19.772 =================================================================================================================== 00:23:19.772 Total : 3015.32 11.78 0.00 0.00 41688.51 11068.30 65633.09 00:23:19.772 0 00:23:19.772 08:10:11 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:23:19.772 08:10:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:19.772 08:10:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:19.772 08:10:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:19.772 08:10:11 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:23:19.772 "subsystems": [ 00:23:19.772 { 00:23:19.772 "subsystem": "keyring", 00:23:19.772 "config": [ 00:23:19.772 { 00:23:19.772 "method": "keyring_file_add_key", 00:23:19.772 "params": { 00:23:19.772 "name": "key0", 00:23:19.772 "path": "/tmp/tmp.Hwd1pvVHUG" 00:23:19.772 } 00:23:19.772 } 00:23:19.772 ] 00:23:19.772 }, 00:23:19.772 { 00:23:19.772 "subsystem": "iobuf", 00:23:19.772 "config": [ 00:23:19.772 { 00:23:19.772 "method": "iobuf_set_options", 00:23:19.772 "params": { 00:23:19.772 "small_pool_count": 8192, 00:23:19.772 "large_pool_count": 1024, 00:23:19.772 "small_bufsize": 8192, 00:23:19.772 "large_bufsize": 135168 00:23:19.772 } 00:23:19.772 } 00:23:19.772 ] 00:23:19.772 }, 00:23:19.772 { 00:23:19.772 "subsystem": "sock", 00:23:19.772 "config": [ 00:23:19.772 { 00:23:19.772 "method": "sock_set_default_impl", 00:23:19.772 "params": { 00:23:19.772 "impl_name": "posix" 00:23:19.772 } 
00:23:19.772 }, 00:23:19.772 { 00:23:19.772 "method": "sock_impl_set_options", 00:23:19.772 "params": { 00:23:19.772 "impl_name": "ssl", 00:23:19.772 "recv_buf_size": 4096, 00:23:19.772 "send_buf_size": 4096, 00:23:19.772 "enable_recv_pipe": true, 00:23:19.772 "enable_quickack": false, 00:23:19.772 "enable_placement_id": 0, 00:23:19.772 "enable_zerocopy_send_server": true, 00:23:19.772 "enable_zerocopy_send_client": false, 00:23:19.772 "zerocopy_threshold": 0, 00:23:19.772 "tls_version": 0, 00:23:19.772 "enable_ktls": false 00:23:19.772 } 00:23:19.772 }, 00:23:19.772 { 00:23:19.772 "method": "sock_impl_set_options", 00:23:19.772 "params": { 00:23:19.772 "impl_name": "posix", 00:23:19.772 "recv_buf_size": 2097152, 00:23:19.772 "send_buf_size": 2097152, 00:23:19.772 "enable_recv_pipe": true, 00:23:19.772 "enable_quickack": false, 00:23:19.772 "enable_placement_id": 0, 00:23:19.772 "enable_zerocopy_send_server": true, 00:23:19.772 "enable_zerocopy_send_client": false, 00:23:19.772 "zerocopy_threshold": 0, 00:23:19.772 "tls_version": 0, 00:23:19.772 "enable_ktls": false 00:23:19.772 } 00:23:19.772 } 00:23:19.772 ] 00:23:19.772 }, 00:23:19.772 { 00:23:19.772 "subsystem": "vmd", 00:23:19.772 "config": [] 00:23:19.772 }, 00:23:19.772 { 00:23:19.772 "subsystem": "accel", 00:23:19.772 "config": [ 00:23:19.772 { 00:23:19.772 "method": "accel_set_options", 00:23:19.772 "params": { 00:23:19.772 "small_cache_size": 128, 00:23:19.772 "large_cache_size": 16, 00:23:19.772 "task_count": 2048, 00:23:19.772 "sequence_count": 2048, 00:23:19.772 "buf_count": 2048 00:23:19.772 } 00:23:19.772 } 00:23:19.772 ] 00:23:19.772 }, 00:23:19.772 { 00:23:19.772 "subsystem": "bdev", 00:23:19.772 "config": [ 00:23:19.772 { 00:23:19.772 "method": "bdev_set_options", 00:23:19.772 "params": { 00:23:19.772 "bdev_io_pool_size": 65535, 00:23:19.772 "bdev_io_cache_size": 256, 00:23:19.772 "bdev_auto_examine": true, 00:23:19.772 "iobuf_small_cache_size": 128, 00:23:19.772 "iobuf_large_cache_size": 16 00:23:19.772 } 00:23:19.773 }, 00:23:19.773 { 00:23:19.773 "method": "bdev_raid_set_options", 00:23:19.773 "params": { 00:23:19.773 "process_window_size_kb": 1024 00:23:19.773 } 00:23:19.773 }, 00:23:19.773 { 00:23:19.773 "method": "bdev_iscsi_set_options", 00:23:19.773 "params": { 00:23:19.773 "timeout_sec": 30 00:23:19.773 } 00:23:19.773 }, 00:23:19.773 { 00:23:19.773 "method": "bdev_nvme_set_options", 00:23:19.773 "params": { 00:23:19.773 "action_on_timeout": "none", 00:23:19.773 "timeout_us": 0, 00:23:19.773 "timeout_admin_us": 0, 00:23:19.773 "keep_alive_timeout_ms": 10000, 00:23:19.773 "arbitration_burst": 0, 00:23:19.773 "low_priority_weight": 0, 00:23:19.773 "medium_priority_weight": 0, 00:23:19.773 "high_priority_weight": 0, 00:23:19.773 "nvme_adminq_poll_period_us": 10000, 00:23:19.773 "nvme_ioq_poll_period_us": 0, 00:23:19.773 "io_queue_requests": 0, 00:23:19.773 "delay_cmd_submit": true, 00:23:19.773 "transport_retry_count": 4, 00:23:19.773 "bdev_retry_count": 3, 00:23:19.773 "transport_ack_timeout": 0, 00:23:19.773 "ctrlr_loss_timeout_sec": 0, 00:23:19.773 "reconnect_delay_sec": 0, 00:23:19.773 "fast_io_fail_timeout_sec": 0, 00:23:19.773 "disable_auto_failback": false, 00:23:19.773 "generate_uuids": false, 00:23:19.773 "transport_tos": 0, 00:23:19.773 "nvme_error_stat": false, 00:23:19.773 "rdma_srq_size": 0, 00:23:19.773 "io_path_stat": false, 00:23:19.773 "allow_accel_sequence": false, 00:23:19.773 "rdma_max_cq_size": 0, 00:23:19.773 "rdma_cm_event_timeout_ms": 0, 00:23:19.773 "dhchap_digests": [ 00:23:19.773 "sha256", 
00:23:19.773 "sha384", 00:23:19.773 "sha512" 00:23:19.773 ], 00:23:19.773 "dhchap_dhgroups": [ 00:23:19.773 "null", 00:23:19.773 "ffdhe2048", 00:23:19.773 "ffdhe3072", 00:23:19.773 "ffdhe4096", 00:23:19.773 "ffdhe6144", 00:23:19.773 "ffdhe8192" 00:23:19.773 ] 00:23:19.773 } 00:23:19.773 }, 00:23:19.773 { 00:23:19.773 "method": "bdev_nvme_set_hotplug", 00:23:19.773 "params": { 00:23:19.773 "period_us": 100000, 00:23:19.773 "enable": false 00:23:19.773 } 00:23:19.773 }, 00:23:19.773 { 00:23:19.773 "method": "bdev_malloc_create", 00:23:19.773 "params": { 00:23:19.773 "name": "malloc0", 00:23:19.773 "num_blocks": 8192, 00:23:19.773 "block_size": 4096, 00:23:19.773 "physical_block_size": 4096, 00:23:19.773 "uuid": "028ea9cc-6cf8-42cd-ad2e-91f0039d58e5", 00:23:19.773 "optimal_io_boundary": 0 00:23:19.773 } 00:23:19.773 }, 00:23:19.773 { 00:23:19.773 "method": "bdev_wait_for_examine" 00:23:19.773 } 00:23:19.773 ] 00:23:19.773 }, 00:23:19.773 { 00:23:19.773 "subsystem": "nbd", 00:23:19.773 "config": [] 00:23:19.773 }, 00:23:19.773 { 00:23:19.773 "subsystem": "scheduler", 00:23:19.773 "config": [ 00:23:19.773 { 00:23:19.773 "method": "framework_set_scheduler", 00:23:19.773 "params": { 00:23:19.773 "name": "static" 00:23:19.773 } 00:23:19.773 } 00:23:19.773 ] 00:23:19.773 }, 00:23:19.773 { 00:23:19.773 "subsystem": "nvmf", 00:23:19.773 "config": [ 00:23:19.773 { 00:23:19.773 "method": "nvmf_set_config", 00:23:19.773 "params": { 00:23:19.773 "discovery_filter": "match_any", 00:23:19.773 "admin_cmd_passthru": { 00:23:19.773 "identify_ctrlr": false 00:23:19.773 } 00:23:19.773 } 00:23:19.773 }, 00:23:19.773 { 00:23:19.773 "method": "nvmf_set_max_subsystems", 00:23:19.773 "params": { 00:23:19.773 "max_subsystems": 1024 00:23:19.773 } 00:23:19.773 }, 00:23:19.773 { 00:23:19.773 "method": "nvmf_set_crdt", 00:23:19.773 "params": { 00:23:19.773 "crdt1": 0, 00:23:19.773 "crdt2": 0, 00:23:19.773 "crdt3": 0 00:23:19.773 } 00:23:19.773 }, 00:23:19.773 { 00:23:19.773 "method": "nvmf_create_transport", 00:23:19.773 "params": { 00:23:19.773 "trtype": "TCP", 00:23:19.773 "max_queue_depth": 128, 00:23:19.773 "max_io_qpairs_per_ctrlr": 127, 00:23:19.773 "in_capsule_data_size": 4096, 00:23:19.773 "max_io_size": 131072, 00:23:19.773 "io_unit_size": 131072, 00:23:19.773 "max_aq_depth": 128, 00:23:19.773 "num_shared_buffers": 511, 00:23:19.773 "buf_cache_size": 4294967295, 00:23:19.773 "dif_insert_or_strip": false, 00:23:19.773 "zcopy": false, 00:23:19.773 "c2h_success": false, 00:23:19.773 "sock_priority": 0, 00:23:19.773 "abort_timeout_sec": 1, 00:23:19.773 "ack_timeout": 0, 00:23:19.773 "data_wr_pool_size": 0 00:23:19.773 } 00:23:19.773 }, 00:23:19.773 { 00:23:19.773 "method": "nvmf_create_subsystem", 00:23:19.773 "params": { 00:23:19.773 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:19.773 "allow_any_host": false, 00:23:19.773 "serial_number": "00000000000000000000", 00:23:19.773 "model_number": "SPDK bdev Controller", 00:23:19.773 "max_namespaces": 32, 00:23:19.773 "min_cntlid": 1, 00:23:19.773 "max_cntlid": 65519, 00:23:19.773 "ana_reporting": false 00:23:19.773 } 00:23:19.773 }, 00:23:19.773 { 00:23:19.773 "method": "nvmf_subsystem_add_host", 00:23:19.773 "params": { 00:23:19.773 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:19.773 "host": "nqn.2016-06.io.spdk:host1", 00:23:19.773 "psk": "key0" 00:23:19.773 } 00:23:19.773 }, 00:23:19.773 { 00:23:19.773 "method": "nvmf_subsystem_add_ns", 00:23:19.773 "params": { 00:23:19.773 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:19.773 "namespace": { 00:23:19.773 "nsid": 1, 
00:23:19.773 "bdev_name": "malloc0", 00:23:19.773 "nguid": "028EA9CC6CF842CDAD2E91F0039D58E5", 00:23:19.773 "uuid": "028ea9cc-6cf8-42cd-ad2e-91f0039d58e5", 00:23:19.773 "no_auto_visible": false 00:23:19.773 } 00:23:19.773 } 00:23:19.773 }, 00:23:19.773 { 00:23:19.773 "method": "nvmf_subsystem_add_listener", 00:23:19.773 "params": { 00:23:19.773 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:19.773 "listen_address": { 00:23:19.773 "trtype": "TCP", 00:23:19.773 "adrfam": "IPv4", 00:23:19.773 "traddr": "10.0.0.2", 00:23:19.773 "trsvcid": "4420" 00:23:19.773 }, 00:23:19.773 "secure_channel": true 00:23:19.773 } 00:23:19.773 } 00:23:19.773 ] 00:23:19.773 } 00:23:19.773 ] 00:23:19.773 }' 00:23:19.773 08:10:11 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:20.031 08:10:11 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:23:20.031 "subsystems": [ 00:23:20.031 { 00:23:20.031 "subsystem": "keyring", 00:23:20.031 "config": [ 00:23:20.031 { 00:23:20.031 "method": "keyring_file_add_key", 00:23:20.031 "params": { 00:23:20.031 "name": "key0", 00:23:20.031 "path": "/tmp/tmp.Hwd1pvVHUG" 00:23:20.031 } 00:23:20.031 } 00:23:20.031 ] 00:23:20.031 }, 00:23:20.031 { 00:23:20.031 "subsystem": "iobuf", 00:23:20.031 "config": [ 00:23:20.031 { 00:23:20.031 "method": "iobuf_set_options", 00:23:20.031 "params": { 00:23:20.031 "small_pool_count": 8192, 00:23:20.031 "large_pool_count": 1024, 00:23:20.031 "small_bufsize": 8192, 00:23:20.031 "large_bufsize": 135168 00:23:20.031 } 00:23:20.031 } 00:23:20.031 ] 00:23:20.031 }, 00:23:20.031 { 00:23:20.031 "subsystem": "sock", 00:23:20.031 "config": [ 00:23:20.031 { 00:23:20.031 "method": "sock_set_default_impl", 00:23:20.031 "params": { 00:23:20.031 "impl_name": "posix" 00:23:20.031 } 00:23:20.031 }, 00:23:20.031 { 00:23:20.031 "method": "sock_impl_set_options", 00:23:20.031 "params": { 00:23:20.031 "impl_name": "ssl", 00:23:20.032 "recv_buf_size": 4096, 00:23:20.032 "send_buf_size": 4096, 00:23:20.032 "enable_recv_pipe": true, 00:23:20.032 "enable_quickack": false, 00:23:20.032 "enable_placement_id": 0, 00:23:20.032 "enable_zerocopy_send_server": true, 00:23:20.032 "enable_zerocopy_send_client": false, 00:23:20.032 "zerocopy_threshold": 0, 00:23:20.032 "tls_version": 0, 00:23:20.032 "enable_ktls": false 00:23:20.032 } 00:23:20.032 }, 00:23:20.032 { 00:23:20.032 "method": "sock_impl_set_options", 00:23:20.032 "params": { 00:23:20.032 "impl_name": "posix", 00:23:20.032 "recv_buf_size": 2097152, 00:23:20.032 "send_buf_size": 2097152, 00:23:20.032 "enable_recv_pipe": true, 00:23:20.032 "enable_quickack": false, 00:23:20.032 "enable_placement_id": 0, 00:23:20.032 "enable_zerocopy_send_server": true, 00:23:20.032 "enable_zerocopy_send_client": false, 00:23:20.032 "zerocopy_threshold": 0, 00:23:20.032 "tls_version": 0, 00:23:20.032 "enable_ktls": false 00:23:20.032 } 00:23:20.032 } 00:23:20.032 ] 00:23:20.032 }, 00:23:20.032 { 00:23:20.032 "subsystem": "vmd", 00:23:20.032 "config": [] 00:23:20.032 }, 00:23:20.032 { 00:23:20.032 "subsystem": "accel", 00:23:20.032 "config": [ 00:23:20.032 { 00:23:20.032 "method": "accel_set_options", 00:23:20.032 "params": { 00:23:20.032 "small_cache_size": 128, 00:23:20.032 "large_cache_size": 16, 00:23:20.032 "task_count": 2048, 00:23:20.032 "sequence_count": 2048, 00:23:20.032 "buf_count": 2048 00:23:20.032 } 00:23:20.032 } 00:23:20.032 ] 00:23:20.032 }, 00:23:20.032 { 00:23:20.032 "subsystem": "bdev", 00:23:20.032 "config": [ 
00:23:20.032 { 00:23:20.032 "method": "bdev_set_options", 00:23:20.032 "params": { 00:23:20.032 "bdev_io_pool_size": 65535, 00:23:20.032 "bdev_io_cache_size": 256, 00:23:20.032 "bdev_auto_examine": true, 00:23:20.032 "iobuf_small_cache_size": 128, 00:23:20.032 "iobuf_large_cache_size": 16 00:23:20.032 } 00:23:20.032 }, 00:23:20.032 { 00:23:20.032 "method": "bdev_raid_set_options", 00:23:20.032 "params": { 00:23:20.032 "process_window_size_kb": 1024 00:23:20.032 } 00:23:20.032 }, 00:23:20.032 { 00:23:20.032 "method": "bdev_iscsi_set_options", 00:23:20.032 "params": { 00:23:20.032 "timeout_sec": 30 00:23:20.032 } 00:23:20.032 }, 00:23:20.032 { 00:23:20.032 "method": "bdev_nvme_set_options", 00:23:20.032 "params": { 00:23:20.032 "action_on_timeout": "none", 00:23:20.032 "timeout_us": 0, 00:23:20.032 "timeout_admin_us": 0, 00:23:20.032 "keep_alive_timeout_ms": 10000, 00:23:20.032 "arbitration_burst": 0, 00:23:20.032 "low_priority_weight": 0, 00:23:20.032 "medium_priority_weight": 0, 00:23:20.032 "high_priority_weight": 0, 00:23:20.032 "nvme_adminq_poll_period_us": 10000, 00:23:20.032 "nvme_ioq_poll_period_us": 0, 00:23:20.032 "io_queue_requests": 512, 00:23:20.032 "delay_cmd_submit": true, 00:23:20.032 "transport_retry_count": 4, 00:23:20.032 "bdev_retry_count": 3, 00:23:20.032 "transport_ack_timeout": 0, 00:23:20.032 "ctrlr_loss_timeout_sec": 0, 00:23:20.032 "reconnect_delay_sec": 0, 00:23:20.032 "fast_io_fail_timeout_sec": 0, 00:23:20.032 "disable_auto_failback": false, 00:23:20.032 "generate_uuids": false, 00:23:20.032 "transport_tos": 0, 00:23:20.032 "nvme_error_stat": false, 00:23:20.032 "rdma_srq_size": 0, 00:23:20.032 "io_path_stat": false, 00:23:20.032 "allow_accel_sequence": false, 00:23:20.032 "rdma_max_cq_size": 0, 00:23:20.032 "rdma_cm_event_timeout_ms": 0, 00:23:20.032 "dhchap_digests": [ 00:23:20.032 "sha256", 00:23:20.032 "sha384", 00:23:20.032 "sha512" 00:23:20.032 ], 00:23:20.032 "dhchap_dhgroups": [ 00:23:20.032 "null", 00:23:20.032 "ffdhe2048", 00:23:20.032 "ffdhe3072", 00:23:20.032 "ffdhe4096", 00:23:20.032 "ffdhe6144", 00:23:20.032 "ffdhe8192" 00:23:20.032 ] 00:23:20.032 } 00:23:20.032 }, 00:23:20.032 { 00:23:20.032 "method": "bdev_nvme_attach_controller", 00:23:20.032 "params": { 00:23:20.032 "name": "nvme0", 00:23:20.032 "trtype": "TCP", 00:23:20.032 "adrfam": "IPv4", 00:23:20.032 "traddr": "10.0.0.2", 00:23:20.032 "trsvcid": "4420", 00:23:20.032 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:20.032 "prchk_reftag": false, 00:23:20.032 "prchk_guard": false, 00:23:20.032 "ctrlr_loss_timeout_sec": 0, 00:23:20.032 "reconnect_delay_sec": 0, 00:23:20.032 "fast_io_fail_timeout_sec": 0, 00:23:20.032 "psk": "key0", 00:23:20.032 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:20.032 "hdgst": false, 00:23:20.032 "ddgst": false 00:23:20.032 } 00:23:20.032 }, 00:23:20.032 { 00:23:20.032 "method": "bdev_nvme_set_hotplug", 00:23:20.032 "params": { 00:23:20.032 "period_us": 100000, 00:23:20.032 "enable": false 00:23:20.032 } 00:23:20.032 }, 00:23:20.032 { 00:23:20.032 "method": "bdev_enable_histogram", 00:23:20.032 "params": { 00:23:20.032 "name": "nvme0n1", 00:23:20.032 "enable": true 00:23:20.032 } 00:23:20.032 }, 00:23:20.032 { 00:23:20.032 "method": "bdev_wait_for_examine" 00:23:20.032 } 00:23:20.032 ] 00:23:20.032 }, 00:23:20.032 { 00:23:20.032 "subsystem": "nbd", 00:23:20.032 "config": [] 00:23:20.032 } 00:23:20.032 ] 00:23:20.032 }' 00:23:20.032 08:10:11 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 2000713 00:23:20.032 08:10:11 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@948 -- # '[' -z 2000713 ']' 00:23:20.032 08:10:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2000713 00:23:20.032 08:10:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:20.032 08:10:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:20.032 08:10:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2000713 00:23:20.032 08:10:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:20.032 08:10:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:20.032 08:10:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2000713' 00:23:20.032 killing process with pid 2000713 00:23:20.032 08:10:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2000713 00:23:20.032 Received shutdown signal, test time was about 1.000000 seconds 00:23:20.032 00:23:20.032 Latency(us) 00:23:20.032 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:20.032 =================================================================================================================== 00:23:20.032 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:20.032 08:10:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2000713 00:23:20.290 08:10:11 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 2000684 00:23:20.290 08:10:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2000684 ']' 00:23:20.290 08:10:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2000684 00:23:20.290 08:10:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:20.290 08:10:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:20.290 08:10:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2000684 00:23:20.290 08:10:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:20.290 08:10:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:20.290 08:10:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2000684' 00:23:20.290 killing process with pid 2000684 00:23:20.290 08:10:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2000684 00:23:20.290 08:10:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2000684 00:23:20.548 08:10:12 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:23:20.548 08:10:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:20.548 08:10:12 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:23:20.548 "subsystems": [ 00:23:20.548 { 00:23:20.548 "subsystem": "keyring", 00:23:20.548 "config": [ 00:23:20.548 { 00:23:20.548 "method": "keyring_file_add_key", 00:23:20.548 "params": { 00:23:20.548 "name": "key0", 00:23:20.548 "path": "/tmp/tmp.Hwd1pvVHUG" 00:23:20.548 } 00:23:20.548 } 00:23:20.548 ] 00:23:20.548 }, 00:23:20.548 { 00:23:20.548 "subsystem": "iobuf", 00:23:20.548 "config": [ 00:23:20.548 { 00:23:20.548 "method": "iobuf_set_options", 00:23:20.548 "params": { 00:23:20.548 "small_pool_count": 8192, 00:23:20.548 "large_pool_count": 1024, 00:23:20.548 "small_bufsize": 8192, 00:23:20.548 "large_bufsize": 135168 00:23:20.548 } 00:23:20.548 } 00:23:20.548 ] 00:23:20.548 }, 00:23:20.548 { 00:23:20.548 "subsystem": "sock", 00:23:20.548 "config": [ 00:23:20.548 { 
00:23:20.548 "method": "sock_set_default_impl", 00:23:20.548 "params": { 00:23:20.548 "impl_name": "posix" 00:23:20.548 } 00:23:20.548 }, 00:23:20.548 { 00:23:20.548 "method": "sock_impl_set_options", 00:23:20.548 "params": { 00:23:20.548 "impl_name": "ssl", 00:23:20.548 "recv_buf_size": 4096, 00:23:20.548 "send_buf_size": 4096, 00:23:20.548 "enable_recv_pipe": true, 00:23:20.548 "enable_quickack": false, 00:23:20.548 "enable_placement_id": 0, 00:23:20.548 "enable_zerocopy_send_server": true, 00:23:20.548 "enable_zerocopy_send_client": false, 00:23:20.548 "zerocopy_threshold": 0, 00:23:20.548 "tls_version": 0, 00:23:20.548 "enable_ktls": false 00:23:20.548 } 00:23:20.548 }, 00:23:20.548 { 00:23:20.548 "method": "sock_impl_set_options", 00:23:20.548 "params": { 00:23:20.548 "impl_name": "posix", 00:23:20.548 "recv_buf_size": 2097152, 00:23:20.548 "send_buf_size": 2097152, 00:23:20.548 "enable_recv_pipe": true, 00:23:20.548 "enable_quickack": false, 00:23:20.548 "enable_placement_id": 0, 00:23:20.548 "enable_zerocopy_send_server": true, 00:23:20.548 "enable_zerocopy_send_client": false, 00:23:20.548 "zerocopy_threshold": 0, 00:23:20.548 "tls_version": 0, 00:23:20.548 "enable_ktls": false 00:23:20.548 } 00:23:20.548 } 00:23:20.548 ] 00:23:20.548 }, 00:23:20.548 { 00:23:20.548 "subsystem": "vmd", 00:23:20.548 "config": [] 00:23:20.548 }, 00:23:20.548 { 00:23:20.548 "subsystem": "accel", 00:23:20.548 "config": [ 00:23:20.548 { 00:23:20.548 "method": "accel_set_options", 00:23:20.548 "params": { 00:23:20.548 "small_cache_size": 128, 00:23:20.548 "large_cache_size": 16, 00:23:20.548 "task_count": 2048, 00:23:20.548 "sequence_count": 2048, 00:23:20.548 "buf_count": 2048 00:23:20.548 } 00:23:20.548 } 00:23:20.548 ] 00:23:20.548 }, 00:23:20.548 { 00:23:20.548 "subsystem": "bdev", 00:23:20.548 "config": [ 00:23:20.548 { 00:23:20.548 "method": "bdev_set_options", 00:23:20.548 "params": { 00:23:20.548 "bdev_io_pool_size": 65535, 00:23:20.548 "bdev_io_cache_size": 256, 00:23:20.548 "bdev_auto_examine": true, 00:23:20.548 "iobuf_small_cache_size": 128, 00:23:20.548 "iobuf_large_cache_size": 16 00:23:20.548 } 00:23:20.548 }, 00:23:20.548 { 00:23:20.548 "method": "bdev_raid_set_options", 00:23:20.548 "params": { 00:23:20.548 "process_window_size_kb": 1024 00:23:20.548 } 00:23:20.548 }, 00:23:20.548 { 00:23:20.548 "method": "bdev_iscsi_set_options", 00:23:20.548 "params": { 00:23:20.548 "timeout_sec": 30 00:23:20.548 } 00:23:20.548 }, 00:23:20.548 { 00:23:20.548 "method": "bdev_nvme_set_options", 00:23:20.548 "params": { 00:23:20.548 "action_on_timeout": "none", 00:23:20.548 "timeout_us": 0, 00:23:20.548 "timeout_admin_us": 0, 00:23:20.548 "keep_alive_timeout_ms": 10000, 00:23:20.548 "arbitration_burst": 0, 00:23:20.548 "low_priority_weight": 0, 00:23:20.548 "medium_priority_weight": 0, 00:23:20.548 "high_priority_weight": 0, 00:23:20.548 "nvme_adminq_poll_period_us": 10000, 00:23:20.548 "nvme_ioq_poll_period_us": 0, 00:23:20.548 "io_queue_requests": 0, 00:23:20.548 "delay_cmd_submit": true, 00:23:20.548 "transport_retry_count": 4, 00:23:20.548 "bdev_retry_count": 3, 00:23:20.548 "transport_ack_timeout": 0, 00:23:20.548 "ctrlr_loss_timeout_sec": 0, 00:23:20.548 "reconnect_delay_sec": 0, 00:23:20.548 "fast_io_fail_timeout_sec": 0, 00:23:20.548 "disable_auto_failback": false, 00:23:20.548 "generate_uuids": false, 00:23:20.548 "transport_tos": 0, 00:23:20.548 "nvme_error_stat": false, 00:23:20.548 "rdma_srq_size": 0, 00:23:20.548 "io_path_stat": false, 00:23:20.548 "allow_accel_sequence": false, 00:23:20.549 
"rdma_max_cq_size": 0, 00:23:20.549 "rdma_cm_event_timeout_ms": 0, 00:23:20.549 "dhchap_digests": [ 00:23:20.549 "sha256", 00:23:20.549 "sha384", 00:23:20.549 "sha512" 00:23:20.549 ], 00:23:20.549 "dhchap_dhgroups": [ 00:23:20.549 "null", 00:23:20.549 "ffdhe2048", 00:23:20.549 "ffdhe3072", 00:23:20.549 "ffdhe4096", 00:23:20.549 "ffdhe6144", 00:23:20.549 "ffdhe8192" 00:23:20.549 ] 00:23:20.549 } 00:23:20.549 }, 00:23:20.549 { 00:23:20.549 "method": "bdev_nvme_set_hotplug", 00:23:20.549 "params": { 00:23:20.549 "period_us": 100000, 00:23:20.549 "enable": false 00:23:20.549 } 00:23:20.549 }, 00:23:20.549 { 00:23:20.549 "method": "bdev_malloc_create", 00:23:20.549 "params": { 00:23:20.549 "name": "malloc0", 00:23:20.549 "num_blocks": 8192, 00:23:20.549 "block_size": 4096, 00:23:20.549 "physical_block_size": 4096, 00:23:20.549 "uuid": "028ea9cc-6cf8-42cd-ad2e-91f0039d58e5", 00:23:20.549 "optimal_io_boundary": 0 00:23:20.549 } 00:23:20.549 }, 00:23:20.549 { 00:23:20.549 "method": "bdev_wait_for_examine" 00:23:20.549 } 00:23:20.549 ] 00:23:20.549 }, 00:23:20.549 { 00:23:20.549 "subsystem": "nbd", 00:23:20.549 "config": [] 00:23:20.549 }, 00:23:20.549 { 00:23:20.549 "subsystem": "scheduler", 00:23:20.549 "config": [ 00:23:20.549 { 00:23:20.549 "method": "framework_set_scheduler", 00:23:20.549 "params": { 00:23:20.549 "name": "static" 00:23:20.549 } 00:23:20.549 } 00:23:20.549 ] 00:23:20.549 }, 00:23:20.549 { 00:23:20.549 "subsystem": "nvmf", 00:23:20.549 "config": [ 00:23:20.549 { 00:23:20.549 "method": "nvmf_set_config", 00:23:20.549 "params": { 00:23:20.549 "discovery_filter": "match_any", 00:23:20.549 "admin_cmd_passthru": { 00:23:20.549 "identify_ctrlr": false 00:23:20.549 } 00:23:20.549 } 00:23:20.549 }, 00:23:20.549 { 00:23:20.549 "method": "nvmf_set_max_subsystems", 00:23:20.549 "params": { 00:23:20.549 "max_subsystems": 1024 00:23:20.549 } 00:23:20.549 }, 00:23:20.549 { 00:23:20.549 "method": "nvmf_set_crdt", 00:23:20.549 "params": { 00:23:20.549 "crdt1": 0, 00:23:20.549 "crdt2": 0, 00:23:20.549 "crdt3": 0 00:23:20.549 } 00:23:20.549 }, 00:23:20.549 { 00:23:20.549 "method": "nvmf_create_transport", 00:23:20.549 "params": { 00:23:20.549 "trtype": "TCP", 00:23:20.549 "max_queue_depth": 128, 00:23:20.549 "max_io_qpairs_per_ctrlr": 127, 00:23:20.549 "in_capsule_data_size": 4096, 00:23:20.549 "max_io_size": 131072, 00:23:20.549 "io_unit_size": 131072, 00:23:20.549 "max_aq_depth": 128, 00:23:20.549 "num_shared_buffers": 511, 00:23:20.549 "buf_cache_size": 4294967295, 00:23:20.549 "dif_insert_or_strip": false, 00:23:20.549 "zcopy": false, 00:23:20.549 "c2h_success": false, 00:23:20.549 "sock_priority": 0, 00:23:20.549 "abort_timeout_sec": 1, 00:23:20.549 "ack_timeout": 0, 00:23:20.549 "data_wr_pool_size": 0 00:23:20.549 } 00:23:20.549 }, 00:23:20.549 { 00:23:20.549 "method": "nvmf_create_subsystem", 00:23:20.549 "params": { 00:23:20.549 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:20.549 "allow_any_host": false, 00:23:20.549 "serial_number": "00000000000000000000", 00:23:20.549 "model_number": "SPDK bdev Controller", 00:23:20.549 "max_namespaces": 32, 00:23:20.549 "min_cntlid": 1, 00:23:20.549 "max_cntlid": 65519, 00:23:20.549 "ana_reporting": false 00:23:20.549 } 00:23:20.549 }, 00:23:20.549 { 00:23:20.549 "method": "nvmf_subsystem_add_host", 00:23:20.549 "params": { 00:23:20.549 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:20.549 "host": "nqn.2016-06.io.spdk:host1", 00:23:20.549 "psk": "key0" 00:23:20.549 } 00:23:20.549 }, 00:23:20.549 { 00:23:20.549 "method": "nvmf_subsystem_add_ns", 00:23:20.549 
"params": { 00:23:20.549 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:20.549 "namespace": { 00:23:20.549 "nsid": 1, 00:23:20.549 "bdev_name": "malloc0", 00:23:20.549 "nguid": "028EA9CC6CF842CDAD2E91F0039D58E5", 00:23:20.549 "uuid": "028ea9cc-6cf8-42cd-ad2e-91f0039d58e5", 00:23:20.549 "no_auto_visible": false 00:23:20.549 } 00:23:20.549 } 00:23:20.549 }, 00:23:20.549 { 00:23:20.549 "method": "nvmf_subsystem_add_listener", 00:23:20.549 "params": { 00:23:20.549 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:20.549 "listen_address": { 00:23:20.549 "trtype": "TCP", 00:23:20.549 "adrfam": "IPv4", 00:23:20.549 "traddr": "10.0.0.2", 00:23:20.549 "trsvcid": "4420" 00:23:20.549 }, 00:23:20.549 "secure_channel": true 00:23:20.549 } 00:23:20.549 } 00:23:20.549 ] 00:23:20.549 } 00:23:20.549 ] 00:23:20.549 }' 00:23:20.549 08:10:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:20.549 08:10:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:20.549 08:10:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2001114 00:23:20.549 08:10:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:23:20.549 08:10:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2001114 00:23:20.549 08:10:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2001114 ']' 00:23:20.549 08:10:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:20.549 08:10:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:20.549 08:10:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:20.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:20.549 08:10:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:20.549 08:10:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:20.549 [2024-07-13 08:10:12.218143] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:23:20.549 [2024-07-13 08:10:12.218262] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:20.549 EAL: No free 2048 kB hugepages reported on node 1 00:23:20.807 [2024-07-13 08:10:12.284514] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:20.807 [2024-07-13 08:10:12.376718] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:20.807 [2024-07-13 08:10:12.376780] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:20.807 [2024-07-13 08:10:12.376807] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:20.807 [2024-07-13 08:10:12.376821] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:20.807 [2024-07-13 08:10:12.376832] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:20.807 [2024-07-13 08:10:12.376941] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:21.065 [2024-07-13 08:10:12.611927] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:21.065 [2024-07-13 08:10:12.643943] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:21.065 [2024-07-13 08:10:12.654066] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:21.631 08:10:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:21.631 08:10:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:21.631 08:10:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:21.631 08:10:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:21.631 08:10:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:21.631 08:10:13 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:21.631 08:10:13 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=2001268 00:23:21.631 08:10:13 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 2001268 /var/tmp/bdevperf.sock 00:23:21.631 08:10:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 2001268 ']' 00:23:21.631 08:10:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:21.631 08:10:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:21.631 08:10:13 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:23:21.631 08:10:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:21.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
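bdevperf is launched with the same pattern on its own RPC socket: -z parks it idle after initialization, -r moves its RPC server to /var/tmp/bdevperf.sock so it cannot collide with the target's /var/tmp/spdk.sock, and the initiator-side configuration (ending in a bdev_nvme_attach_controller that references psk key0) arrives as the /dev/fd/63 JSON dumped next. Once waitforlisten succeeds, the rest of the run is driven entirely over that socket; roughly:

    build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4k -w verify -t 1 -c <(echo "$bdevperf_conf")
    # after the socket is up:
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

Here $bdevperf_conf is a stand-in for the JSON echoed by tls.sh, and paths are relative to the spdk checkout; the two rpc invocations appear verbatim further down in the trace.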
00:23:21.631 08:10:13 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:23:21.631 "subsystems": [ 00:23:21.631 { 00:23:21.631 "subsystem": "keyring", 00:23:21.631 "config": [ 00:23:21.631 { 00:23:21.631 "method": "keyring_file_add_key", 00:23:21.631 "params": { 00:23:21.631 "name": "key0", 00:23:21.631 "path": "/tmp/tmp.Hwd1pvVHUG" 00:23:21.631 } 00:23:21.631 } 00:23:21.631 ] 00:23:21.631 }, 00:23:21.631 { 00:23:21.631 "subsystem": "iobuf", 00:23:21.631 "config": [ 00:23:21.631 { 00:23:21.631 "method": "iobuf_set_options", 00:23:21.631 "params": { 00:23:21.631 "small_pool_count": 8192, 00:23:21.631 "large_pool_count": 1024, 00:23:21.631 "small_bufsize": 8192, 00:23:21.631 "large_bufsize": 135168 00:23:21.631 } 00:23:21.631 } 00:23:21.631 ] 00:23:21.631 }, 00:23:21.631 { 00:23:21.631 "subsystem": "sock", 00:23:21.631 "config": [ 00:23:21.631 { 00:23:21.631 "method": "sock_set_default_impl", 00:23:21.631 "params": { 00:23:21.631 "impl_name": "posix" 00:23:21.631 } 00:23:21.631 }, 00:23:21.631 { 00:23:21.631 "method": "sock_impl_set_options", 00:23:21.631 "params": { 00:23:21.631 "impl_name": "ssl", 00:23:21.631 "recv_buf_size": 4096, 00:23:21.631 "send_buf_size": 4096, 00:23:21.631 "enable_recv_pipe": true, 00:23:21.631 "enable_quickack": false, 00:23:21.631 "enable_placement_id": 0, 00:23:21.631 "enable_zerocopy_send_server": true, 00:23:21.631 "enable_zerocopy_send_client": false, 00:23:21.631 "zerocopy_threshold": 0, 00:23:21.631 "tls_version": 0, 00:23:21.631 "enable_ktls": false 00:23:21.631 } 00:23:21.631 }, 00:23:21.631 { 00:23:21.631 "method": "sock_impl_set_options", 00:23:21.631 "params": { 00:23:21.631 "impl_name": "posix", 00:23:21.631 "recv_buf_size": 2097152, 00:23:21.631 "send_buf_size": 2097152, 00:23:21.631 "enable_recv_pipe": true, 00:23:21.631 "enable_quickack": false, 00:23:21.631 "enable_placement_id": 0, 00:23:21.631 "enable_zerocopy_send_server": true, 00:23:21.631 "enable_zerocopy_send_client": false, 00:23:21.631 "zerocopy_threshold": 0, 00:23:21.631 "tls_version": 0, 00:23:21.631 "enable_ktls": false 00:23:21.631 } 00:23:21.631 } 00:23:21.631 ] 00:23:21.631 }, 00:23:21.631 { 00:23:21.631 "subsystem": "vmd", 00:23:21.631 "config": [] 00:23:21.631 }, 00:23:21.631 { 00:23:21.631 "subsystem": "accel", 00:23:21.631 "config": [ 00:23:21.631 { 00:23:21.631 "method": "accel_set_options", 00:23:21.631 "params": { 00:23:21.631 "small_cache_size": 128, 00:23:21.631 "large_cache_size": 16, 00:23:21.631 "task_count": 2048, 00:23:21.631 "sequence_count": 2048, 00:23:21.631 "buf_count": 2048 00:23:21.631 } 00:23:21.631 } 00:23:21.631 ] 00:23:21.631 }, 00:23:21.631 { 00:23:21.631 "subsystem": "bdev", 00:23:21.631 "config": [ 00:23:21.631 { 00:23:21.631 "method": "bdev_set_options", 00:23:21.631 "params": { 00:23:21.631 "bdev_io_pool_size": 65535, 00:23:21.631 "bdev_io_cache_size": 256, 00:23:21.631 "bdev_auto_examine": true, 00:23:21.631 "iobuf_small_cache_size": 128, 00:23:21.631 "iobuf_large_cache_size": 16 00:23:21.631 } 00:23:21.631 }, 00:23:21.631 { 00:23:21.631 "method": "bdev_raid_set_options", 00:23:21.631 "params": { 00:23:21.631 "process_window_size_kb": 1024 00:23:21.631 } 00:23:21.631 }, 00:23:21.631 { 00:23:21.631 "method": "bdev_iscsi_set_options", 00:23:21.631 "params": { 00:23:21.631 "timeout_sec": 30 00:23:21.631 } 00:23:21.631 }, 00:23:21.631 { 00:23:21.631 "method": "bdev_nvme_set_options", 00:23:21.631 "params": { 00:23:21.631 "action_on_timeout": "none", 00:23:21.632 "timeout_us": 0, 00:23:21.632 "timeout_admin_us": 0, 00:23:21.632 "keep_alive_timeout_ms": 
10000, 00:23:21.632 "arbitration_burst": 0, 00:23:21.632 "low_priority_weight": 0, 00:23:21.632 "medium_priority_weight": 0, 00:23:21.632 "high_priority_weight": 0, 00:23:21.632 "nvme_adminq_poll_period_us": 10000, 00:23:21.632 "nvme_ioq_poll_period_us": 0, 00:23:21.632 "io_queue_requests": 512, 00:23:21.632 "delay_cmd_submit": true, 00:23:21.632 "transport_retry_count": 4, 00:23:21.632 "bdev_retry_count": 3, 00:23:21.632 "transport_ack_timeout": 0, 00:23:21.632 "ctrlr_loss_timeout_sec": 0, 00:23:21.632 "reconnect_delay_sec": 0, 00:23:21.632 "fast_io_fail_timeout_sec": 0, 00:23:21.632 "disable_auto_failback": false, 00:23:21.632 "generate_uuids": false, 00:23:21.632 "transport_tos": 0, 00:23:21.632 "nvme_error_stat": false, 00:23:21.632 "rdma_srq_size": 0, 00:23:21.632 "io_path_stat": false, 00:23:21.632 "allow_accel_sequence": false, 00:23:21.632 "rdma_max_cq_size": 0, 00:23:21.632 "rdma_cm_event_timeout_ms": 0, 00:23:21.632 "dhchap_digests": [ 00:23:21.632 "sha256", 00:23:21.632 "sha384", 00:23:21.632 "sha512" 00:23:21.632 ], 00:23:21.632 "dhchap_dhgroups": [ 00:23:21.632 "null", 00:23:21.632 "ffdhe2048", 00:23:21.632 "ffdhe3072", 00:23:21.632 "ffdhe4096", 00:23:21.632 "ffdhe6144", 00:23:21.632 "ffdhe8192" 00:23:21.632 ] 00:23:21.632 } 00:23:21.632 }, 00:23:21.632 { 00:23:21.632 "method": "bdev_nvme_attach_controller", 00:23:21.632 "params": { 00:23:21.632 "name": "nvme0", 00:23:21.632 "trtype": "TCP", 00:23:21.632 "adrfam": "IPv4", 00:23:21.632 "traddr": "10.0.0.2", 00:23:21.632 "trsvcid": "4420", 00:23:21.632 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:21.632 "prchk_reftag": false, 00:23:21.632 "prchk_guard": false, 00:23:21.632 "ctrlr_loss_timeout_sec": 0, 00:23:21.632 "reconnect_delay_sec": 0, 00:23:21.632 "fast_io_fail_timeout_sec": 0, 00:23:21.632 "psk": "key0", 00:23:21.632 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:21.632 "hdgst": false, 00:23:21.632 "ddgst": false 00:23:21.632 } 00:23:21.632 }, 00:23:21.632 { 00:23:21.632 "method": "bdev_nvme_set_hotplug", 00:23:21.632 "params": { 00:23:21.632 "period_us": 100000, 00:23:21.632 "enable": false 00:23:21.632 } 00:23:21.632 }, 00:23:21.632 { 00:23:21.632 "method": "bdev_enable_histogram", 00:23:21.632 "params": { 00:23:21.632 "name": "nvme0n1", 00:23:21.632 "enable": true 00:23:21.632 } 00:23:21.632 }, 00:23:21.632 { 00:23:21.632 "method": "bdev_wait_for_examine" 00:23:21.632 } 00:23:21.632 ] 00:23:21.632 }, 00:23:21.632 { 00:23:21.632 "subsystem": "nbd", 00:23:21.632 "config": [] 00:23:21.632 } 00:23:21.632 ] 00:23:21.632 }' 00:23:21.632 08:10:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:21.632 08:10:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:21.632 [2024-07-13 08:10:13.223550] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:23:21.632 [2024-07-13 08:10:13.223642] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2001268 ] 00:23:21.632 EAL: No free 2048 kB hugepages reported on node 1 00:23:21.632 [2024-07-13 08:10:13.285593] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:21.890 [2024-07-13 08:10:13.377775] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:21.890 [2024-07-13 08:10:13.559966] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:22.823 08:10:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:22.823 08:10:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:22.823 08:10:14 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:22.823 08:10:14 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:23:22.823 08:10:14 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:22.823 08:10:14 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:22.823 Running I/O for 1 seconds... 00:23:24.196 00:23:24.196 Latency(us) 00:23:24.196 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:24.196 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:24.196 Verification LBA range: start 0x0 length 0x2000 00:23:24.196 nvme0n1 : 1.04 2827.90 11.05 0.00 0.00 44473.73 8301.23 76507.21 00:23:24.196 =================================================================================================================== 00:23:24.196 Total : 2827.90 11.05 0.00 0.00 44473.73 8301.23 76507.21 00:23:24.196 0 00:23:24.196 08:10:15 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:23:24.196 08:10:15 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:23:24.196 08:10:15 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:23:24.196 08:10:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:23:24.196 08:10:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:23:24.196 08:10:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:23:24.196 08:10:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:23:24.196 08:10:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:23:24.196 08:10:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:23:24.196 08:10:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:23:24.196 08:10:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:23:24.196 nvmf_trace.0 00:23:24.196 08:10:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:23:24.196 08:10:15 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 2001268 00:23:24.196 08:10:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2001268 ']' 00:23:24.196 08:10:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- 
# kill -0 2001268 00:23:24.196 08:10:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:24.196 08:10:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:24.196 08:10:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2001268 00:23:24.196 08:10:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:24.196 08:10:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:24.196 08:10:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2001268' 00:23:24.196 killing process with pid 2001268 00:23:24.197 08:10:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2001268 00:23:24.197 Received shutdown signal, test time was about 1.000000 seconds 00:23:24.197 00:23:24.197 Latency(us) 00:23:24.197 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:24.197 =================================================================================================================== 00:23:24.197 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:24.197 08:10:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2001268 00:23:24.197 08:10:15 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:23:24.197 08:10:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:24.197 08:10:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:23:24.197 08:10:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:24.197 08:10:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:23:24.197 08:10:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:24.197 08:10:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:24.197 rmmod nvme_tcp 00:23:24.455 rmmod nvme_fabrics 00:23:24.455 rmmod nvme_keyring 00:23:24.455 08:10:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:24.455 08:10:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:23:24.455 08:10:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:23:24.455 08:10:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 2001114 ']' 00:23:24.455 08:10:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 2001114 00:23:24.455 08:10:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 2001114 ']' 00:23:24.455 08:10:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 2001114 00:23:24.455 08:10:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:24.455 08:10:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:24.455 08:10:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2001114 00:23:24.455 08:10:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:24.455 08:10:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:24.455 08:10:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2001114' 00:23:24.455 killing process with pid 2001114 00:23:24.455 08:10:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 2001114 00:23:24.455 08:10:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 2001114 00:23:24.714 08:10:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:24.714 08:10:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:24.714 08:10:16 
nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:24.714 08:10:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:24.714 08:10:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:24.714 08:10:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:24.714 08:10:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:24.714 08:10:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:26.664 08:10:18 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:26.664 08:10:18 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.G0CmMNDdTO /tmp/tmp.VOjR311FPg /tmp/tmp.Hwd1pvVHUG 00:23:26.664 00:23:26.664 real 1m19.004s 00:23:26.664 user 2m7.677s 00:23:26.664 sys 0m26.618s 00:23:26.664 08:10:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:26.664 08:10:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:26.664 ************************************ 00:23:26.664 END TEST nvmf_tls 00:23:26.664 ************************************ 00:23:26.664 08:10:18 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:26.664 08:10:18 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:26.664 08:10:18 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:26.664 08:10:18 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:26.664 08:10:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:26.664 ************************************ 00:23:26.664 START TEST nvmf_fips 00:23:26.664 ************************************ 00:23:26.664 08:10:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:26.664 * Looking for test storage... 
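Before exercising TLS, fips.sh establishes that the host OpenSSL can actually run FIPS-only: the version reported by openssl version must compare >= 3.0.0 (the field-by-field cmp_versions loop traced below, here 3.0.9 vs 3.0.0), /usr/lib64/ossl-modules/fips.so must exist, a generated spdk_fips.conf must leave exactly two entries in openssl list -providers (one base, one fips), and finally a non-approved digest has to fail. That last canary amounts to something like the following sketch, not the literal fips.sh code:

    export OPENSSL_CONF=spdk_fips.conf
    if echo test | openssl md5 2>/dev/null; then
        echo "MD5 still usable - FIPS provider not enforcing" >&2
        exit 1
    fi

So the 'Error setting digest' failure traced further down is the expected, passing outcome.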
00:23:26.664 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:23:26.664 08:10:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:26.664 08:10:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:23:26.664 08:10:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:26.664 08:10:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:26.664 08:10:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:26.664 08:10:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:26.664 08:10:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:26.664 08:10:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:26.664 08:10:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:26.664 08:10:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:26.664 08:10:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:26.664 08:10:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:26.664 08:10:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:26.664 08:10:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:26.664 08:10:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:26.664 08:10:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:26.664 08:10:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:26.664 08:10:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:26.664 08:10:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:26.664 08:10:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:26.664 08:10:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:26.664 08:10:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:26.664 08:10:18 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:26.664 08:10:18 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:26.664 08:10:18 
nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:26.664 08:10:18 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:23:26.664 08:10:18 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:26.922 08:10:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:23:26.922 08:10:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:26.922 08:10:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:26.922 08:10:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:26.923 08:10:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:26.923 08:10:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:26.923 08:10:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:26.923 08:10:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:26.923 08:10:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:26.923 08:10:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:26.923 08:10:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:23:26.923 08:10:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:23:26.923 08:10:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:23:26.923 08:10:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:23:26.923 08:10:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:23:26.923 08:10:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:23:26.923 08:10:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:23:26.923 08:10:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:23:26.923 08:10:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:23:26.923 08:10:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:23:26.923 08:10:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:23:26.923 08:10:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:23:26.923 08:10:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:23:26.923 08:10:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:23:26.923 08:10:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:23:26.923 08:10:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 
v 00:23:26.923 08:10:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:23:26.923 08:10:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:23:26.923 08:10:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:23:26.923 08:10:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:26.923 08:10:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:23:26.923 08:10:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:23:26.923 08:10:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:26.923 08:10:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:23:26.923 08:10:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:23:26.923 08:10:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:23:26.923 08:10:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:23:26.923 08:10:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:26.923 08:10:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:23:26.923 08:10:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:23:26.923 08:10:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:23:26.923 08:10:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:23:26.923 08:10:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:23:26.923 08:10:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:26.923 08:10:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:23:26.923 08:10:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:23:26.923 08:10:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:26.923 08:10:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:23:26.923 08:10:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:23:26.923 08:10:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:23:26.923 08:10:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:23:26.923 08:10:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:26.923 08:10:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:23:26.923 08:10:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:23:26.923 08:10:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:23:26.923 08:10:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:23:26.923 08:10:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:23:26.923 08:10:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:26.923 08:10:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:23:26.923 08:10:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:23:26.923 08:10:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:23:26.923 08:10:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:23:26.923 08:10:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:23:26.923 08:10:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:23:26.923 08:10:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:23:26.923 08:10:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:26.923 08:10:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:23:26.923 08:10:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:23:26.923 08:10:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:23:26.923 08:10:18 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:23:26.923 08:10:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:23:26.923 08:10:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:23:26.923 08:10:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:23:26.923 08:10:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:23:26.923 08:10:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:23:26.923 08:10:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:23:26.923 08:10:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:23:26.923 08:10:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:23:26.923 08:10:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:23:26.923 08:10:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:23:26.923 08:10:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:23:26.923 08:10:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:23:26.923 08:10:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:23:26.923 08:10:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:23:26.923 08:10:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:23:26.923 08:10:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:23:26.923 08:10:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:23:26.923 08:10:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:23:26.923 08:10:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:23:26.923 08:10:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:23:26.923 08:10:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:23:26.923 08:10:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:23:26.923 08:10:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:23:26.923 08:10:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:23:26.923 08:10:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:26.923 08:10:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:23:26.923 08:10:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:26.923 08:10:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:23:26.923 08:10:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:26.923 08:10:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:23:26.923 08:10:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:23:26.923 08:10:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:23:26.923 Error setting digest 00:23:26.923 0002C3FC437F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:23:26.923 0002C3FC437F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:23:26.923 08:10:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:23:26.923 08:10:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:26.923 08:10:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:26.923 08:10:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:26.923 08:10:18 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:23:26.923 08:10:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:26.923 08:10:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:26.923 08:10:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:26.923 08:10:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:26.923 08:10:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:26.923 08:10:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:26.923 08:10:18 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:26.923 08:10:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:26.923 08:10:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:26.923 08:10:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:26.923 08:10:18 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:23:26.923 08:10:18 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:28.825 08:10:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:28.825 08:10:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:23:28.825 08:10:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:28.825 08:10:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:28.825 08:10:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:28.825 08:10:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:28.825 08:10:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:28.825 08:10:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:23:28.825 08:10:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:28.825 08:10:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:23:28.825 08:10:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:23:28.825 08:10:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:23:28.825 08:10:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:23:28.825 08:10:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:23:28.825 08:10:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:23:28.825 08:10:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:28.825 08:10:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:28.825 08:10:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:28.825 08:10:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:28.825 08:10:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:28.825 08:10:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:28.825 08:10:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:28.825 08:10:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:28.825 08:10:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:28.825 08:10:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:28.825 08:10:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:28.825 08:10:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:28.825 08:10:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:28.825 08:10:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:28.825 08:10:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:28.825 08:10:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:28.825 08:10:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:28.825 
08:10:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:28.825 08:10:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:28.825 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:28.825 08:10:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:28.825 08:10:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:28.825 08:10:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:28.825 08:10:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:28.825 08:10:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:28.825 08:10:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:28.825 08:10:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:28.825 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:28.825 08:10:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:28.825 08:10:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:28.825 08:10:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:28.825 08:10:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:28.825 08:10:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:28.825 08:10:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:28.825 08:10:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:28.825 08:10:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:28.825 08:10:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:28.825 08:10:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:28.825 08:10:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:28.825 08:10:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:28.825 08:10:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:28.825 08:10:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:28.826 08:10:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:28.826 08:10:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:28.826 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:28.826 08:10:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:28.826 08:10:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:28.826 08:10:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:28.826 08:10:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:28.826 08:10:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:28.826 08:10:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:28.826 08:10:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:28.826 08:10:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:28.826 08:10:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:28.826 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:28.826 08:10:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:23:28.826 08:10:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:28.826 08:10:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:23:28.826 08:10:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:28.826 08:10:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:28.826 08:10:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:28.826 08:10:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:28.826 08:10:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:28.826 08:10:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:28.826 08:10:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:28.826 08:10:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:28.826 08:10:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:28.826 08:10:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:28.826 08:10:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:28.826 08:10:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:28.826 08:10:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:28.826 08:10:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:28.826 08:10:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:28.826 08:10:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:28.826 08:10:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:28.826 08:10:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:29.084 08:10:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:29.084 08:10:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:29.084 08:10:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:29.084 08:10:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:29.084 08:10:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:29.084 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:29.084 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.230 ms 00:23:29.084 00:23:29.084 --- 10.0.0.2 ping statistics --- 00:23:29.084 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:29.084 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:23:29.084 08:10:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:29.084 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:29.084 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:23:29.084 00:23:29.084 --- 10.0.0.1 ping statistics --- 00:23:29.084 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:29.084 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:23:29.084 08:10:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:29.084 08:10:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:23:29.084 08:10:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:29.084 08:10:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:29.084 08:10:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:29.084 08:10:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:29.084 08:10:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:29.084 08:10:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:29.084 08:10:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:29.084 08:10:20 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:23:29.084 08:10:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:29.084 08:10:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:29.084 08:10:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:29.084 08:10:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=2003506 00:23:29.084 08:10:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 2003506 00:23:29.084 08:10:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:29.084 08:10:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 2003506 ']' 00:23:29.084 08:10:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:29.084 08:10:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:29.084 08:10:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:29.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:29.084 08:10:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:29.084 08:10:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:29.084 [2024-07-13 08:10:20.732381] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:23:29.084 [2024-07-13 08:10:20.732481] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:29.084 EAL: No free 2048 kB hugepages reported on node 1 00:23:29.344 [2024-07-13 08:10:20.819622] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:29.344 [2024-07-13 08:10:20.918738] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:29.344 [2024-07-13 08:10:20.918806] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:29.344 [2024-07-13 08:10:20.918856] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:29.344 [2024-07-13 08:10:20.918905] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:29.344 [2024-07-13 08:10:20.918926] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:29.344 [2024-07-13 08:10:20.918975] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:29.344 08:10:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:29.344 08:10:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:23:29.344 08:10:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:29.344 08:10:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:29.344 08:10:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:29.344 08:10:21 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:29.344 08:10:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:23:29.344 08:10:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:23:29.344 08:10:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:29.344 08:10:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:23:29.344 08:10:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:29.344 08:10:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:29.344 08:10:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:29.344 08:10:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:29.910 [2024-07-13 08:10:21.342255] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:29.910 [2024-07-13 08:10:21.358244] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:29.910 [2024-07-13 08:10:21.358469] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:29.910 [2024-07-13 08:10:21.390107] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:29.910 malloc0 00:23:29.910 08:10:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:29.910 08:10:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=2003652 00:23:29.910 08:10:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:29.911 08:10:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 2003652 /var/tmp/bdevperf.sock 00:23:29.911 08:10:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 2003652 ']' 00:23:29.911 08:10:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:29.911 08:10:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- 
# local max_retries=100 00:23:29.911 08:10:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:29.911 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:29.911 08:10:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:29.911 08:10:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:29.911 [2024-07-13 08:10:21.482958] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:23:29.911 [2024-07-13 08:10:21.483040] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2003652 ] 00:23:29.911 EAL: No free 2048 kB hugepages reported on node 1 00:23:29.911 [2024-07-13 08:10:21.543273] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:29.911 [2024-07-13 08:10:21.636305] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:30.169 08:10:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:30.169 08:10:21 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:23:30.169 08:10:21 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:30.427 [2024-07-13 08:10:22.026377] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:30.427 [2024-07-13 08:10:22.026531] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:30.427 TLSTESTn1 00:23:30.427 08:10:22 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:30.685 Running I/O for 10 seconds... 
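The fips.sh trace above condenses to a short, reproducible sequence: write the TLS PSK interchange key to an owner-only file, register it with the target, then point a bdevperf initiator at the listener with the same key. A minimal sketch, with $SPDK standing in for the workspace checkout path; the target-side RPC batch issued at fips.sh@24 is not echoed in the trace, so the add_host call below is an assumption consistent with the nvmf_tcp_subsystem_add_host notice above:

key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:
key_path=$SPDK/test/nvmf/fips/key.txt
echo -n "$key" > "$key_path"
chmod 0600 "$key_path"      # PSK files must not be group/world readable
# Target side (assumed batch; the trace confirms a TCP transport, a listener on
# 10.0.0.2:4420, a malloc0 namespace and a PSK-keyed entry for ...host1):
$SPDK/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$key_path"
# Initiator side: -z makes bdevperf wait for RPC-driven configuration on its
# private socket, so the controller is attached and the run started by hand:
$SPDK/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
$SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk "$key_path"
$SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

The -m 0x4 mask is why bdevperf's reactor reports core 2, and the ~3136 IOPS / 12.25 MiB/s row in the table that follows is this 10-second verify workload running over the TLS-wrapped queue pair.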
00:23:40.650 00:23:40.650 Latency(us) 00:23:40.650 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:40.650 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:40.650 Verification LBA range: start 0x0 length 0x2000 00:23:40.650 TLSTESTn1 : 10.04 3136.45 12.25 0.00 0.00 40710.01 6796.33 71070.15 00:23:40.650 =================================================================================================================== 00:23:40.650 Total : 3136.45 12.25 0.00 0.00 40710.01 6796.33 71070.15 00:23:40.650 0 00:23:40.650 08:10:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:23:40.650 08:10:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:23:40.650 08:10:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id 00:23:40.650 08:10:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # id=0 00:23:40.650 08:10:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:23:40.650 08:10:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:23:40.650 08:10:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:23:40.650 08:10:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:23:40.650 08:10:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files 00:23:40.650 08:10:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:23:40.650 nvmf_trace.0 00:23:40.650 08:10:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@821 -- # return 0 00:23:40.650 08:10:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 2003652 00:23:40.650 08:10:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 2003652 ']' 00:23:40.650 08:10:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 2003652 00:23:40.650 08:10:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:23:40.908 08:10:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:40.908 08:10:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2003652 00:23:40.908 08:10:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:40.908 08:10:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:40.908 08:10:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2003652' 00:23:40.908 killing process with pid 2003652 00:23:40.908 08:10:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 2003652 00:23:40.908 Received shutdown signal, test time was about 10.000000 seconds 00:23:40.908 00:23:40.908 Latency(us) 00:23:40.908 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:40.908 =================================================================================================================== 00:23:40.908 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:40.908 [2024-07-13 08:10:32.411475] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:40.908 08:10:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 2003652 00:23:40.908 08:10:32 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:23:40.908 08:10:32 nvmf_tcp.nvmf_fips -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:23:40.908 08:10:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:23:40.908 08:10:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:40.908 08:10:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:23:40.908 08:10:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:40.908 08:10:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:41.165 rmmod nvme_tcp 00:23:41.165 rmmod nvme_fabrics 00:23:41.165 rmmod nvme_keyring 00:23:41.165 08:10:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:41.165 08:10:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:23:41.166 08:10:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:23:41.166 08:10:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 2003506 ']' 00:23:41.166 08:10:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 2003506 00:23:41.166 08:10:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 2003506 ']' 00:23:41.166 08:10:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 2003506 00:23:41.166 08:10:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:23:41.166 08:10:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:41.166 08:10:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2003506 00:23:41.166 08:10:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:41.166 08:10:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:41.166 08:10:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2003506' 00:23:41.166 killing process with pid 2003506 00:23:41.166 08:10:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 2003506 00:23:41.166 [2024-07-13 08:10:32.735584] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:41.166 08:10:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 2003506 00:23:41.423 08:10:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:41.423 08:10:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:41.423 08:10:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:41.423 08:10:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:41.423 08:10:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:41.423 08:10:32 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:41.423 08:10:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:41.423 08:10:32 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:43.321 08:10:34 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:43.321 08:10:35 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:43.321 00:23:43.321 real 0m16.669s 00:23:43.321 user 0m21.042s 00:23:43.321 sys 0m6.173s 00:23:43.321 08:10:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:43.321 08:10:35 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:43.321 ************************************ 00:23:43.321 END TEST nvmf_fips 
00:23:43.321 ************************************ 00:23:43.321 08:10:35 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:43.321 08:10:35 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 1 -eq 1 ']' 00:23:43.321 08:10:35 nvmf_tcp -- nvmf/nvmf.sh@66 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:23:43.321 08:10:35 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:43.321 08:10:35 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:43.321 08:10:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:43.579 ************************************ 00:23:43.579 START TEST nvmf_fuzz 00:23:43.579 ************************************ 00:23:43.579 08:10:35 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:23:43.579 * Looking for test storage... 00:23:43.579 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:43.579 08:10:35 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:43.579 08:10:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:23:43.579 08:10:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:43.579 08:10:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:43.579 08:10:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:43.579 08:10:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:43.579 08:10:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:43.579 08:10:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:43.579 08:10:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:43.579 08:10:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:43.579 08:10:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:43.579 08:10:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:43.579 08:10:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:43.579 08:10:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:43.579 08:10:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:43.579 08:10:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:43.579 08:10:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:43.579 08:10:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:43.579 08:10:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:43.579 08:10:35 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:43.579 08:10:35 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:43.579 08:10:35 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:43.579 08:10:35 nvmf_tcp.nvmf_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:43.579 08:10:35 nvmf_tcp.nvmf_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:43.579 08:10:35 nvmf_tcp.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:43.580 08:10:35 nvmf_tcp.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:23:43.580 08:10:35 nvmf_tcp.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:43.580 08:10:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@47 -- # : 0 00:23:43.580 08:10:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:43.580 08:10:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:43.580 08:10:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:43.580 08:10:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:43.580 08:10:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:43.580 08:10:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:43.580 08:10:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:43.580 08:10:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:43.580 08:10:35 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:23:43.580 08:10:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:43.580 08:10:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:43.580 08:10:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:43.580 08:10:35
nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:43.580 08:10:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:43.580 08:10:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:43.580 08:10:35 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:43.580 08:10:35 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:43.580 08:10:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:43.580 08:10:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:43.580 08:10:35 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@285 -- # xtrace_disable 00:23:43.580 08:10:35 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:45.480 08:10:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:45.480 08:10:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@291 -- # pci_devs=() 00:23:45.480 08:10:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:45.480 08:10:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:45.480 08:10:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:45.480 08:10:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:45.480 08:10:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:45.480 08:10:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@295 -- # net_devs=() 00:23:45.480 08:10:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:45.480 08:10:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@296 -- # e810=() 00:23:45.480 08:10:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@296 -- # local -ga e810 00:23:45.480 08:10:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@297 -- # x722=() 00:23:45.480 08:10:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@297 -- # local -ga x722 00:23:45.480 08:10:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@298 -- # mlx=() 00:23:45.480 08:10:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@298 -- # local -ga mlx 00:23:45.480 08:10:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:45.480 08:10:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:45.480 08:10:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:45.480 08:10:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:45.480 08:10:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:45.480 08:10:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:45.480 08:10:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:45.480 08:10:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:45.480 08:10:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:45.480 08:10:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:45.480 08:10:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:45.480 08:10:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:45.480 08:10:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:45.480 08:10:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@327 
-- # [[ e810 == mlx5 ]] 00:23:45.480 08:10:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:45.480 08:10:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:45.480 08:10:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:45.481 08:10:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:45.481 08:10:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:45.481 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:45.481 08:10:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:45.481 08:10:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:45.481 08:10:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:45.481 08:10:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:45.481 08:10:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:45.481 08:10:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:45.481 08:10:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:45.481 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:45.481 08:10:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:45.481 08:10:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:45.481 08:10:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:45.481 08:10:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:45.481 08:10:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:45.481 08:10:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:45.481 08:10:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:45.481 08:10:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:45.481 08:10:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:45.481 08:10:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:45.481 08:10:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:45.481 08:10:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:45.481 08:10:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:45.481 08:10:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:45.481 08:10:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:45.481 08:10:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:45.481 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:45.481 08:10:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:45.481 08:10:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:45.481 08:10:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:45.481 08:10:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:45.481 08:10:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:45.481 08:10:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:45.481 08:10:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:45.481 08:10:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@399 
-- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:45.481 08:10:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:45.481 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:45.481 08:10:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:45.481 08:10:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:45.481 08:10:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # is_hw=yes 00:23:45.481 08:10:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:45.481 08:10:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:45.481 08:10:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:45.481 08:10:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:45.481 08:10:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:45.481 08:10:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:45.481 08:10:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:45.481 08:10:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:45.481 08:10:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:45.481 08:10:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:45.481 08:10:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:45.481 08:10:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:45.481 08:10:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:45.481 08:10:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:45.481 08:10:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:45.481 08:10:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:45.481 08:10:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:45.481 08:10:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:45.481 08:10:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:45.481 08:10:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:45.739 08:10:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:45.739 08:10:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:45.739 08:10:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:45.739 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:45.739 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.262 ms 00:23:45.739 00:23:45.739 --- 10.0.0.2 ping statistics --- 00:23:45.739 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:45.739 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:23:45.739 08:10:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:45.739 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:45.739 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms 00:23:45.739 00:23:45.739 --- 10.0.0.1 ping statistics --- 00:23:45.739 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:45.739 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:23:45.739 08:10:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:45.739 08:10:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@422 -- # return 0 00:23:45.739 08:10:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:45.739 08:10:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:45.739 08:10:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:45.739 08:10:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:45.739 08:10:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:45.739 08:10:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:45.739 08:10:37 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:45.739 08:10:37 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=2006897 00:23:45.739 08:10:37 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:23:45.739 08:10:37 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:23:45.739 08:10:37 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 2006897 00:23:45.739 08:10:37 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@829 -- # '[' -z 2006897 ']' 00:23:45.739 08:10:37 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:45.739 08:10:37 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:45.740 08:10:37 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:45.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
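For orientation in the fabrics_fuzz.sh trace: the target being waited on here is a single-core nvmf_tgt pinned inside the cvl_0_0_ns_spdk namespace, and once /var/tmp/spdk.sock answers, the rpc_cmd calls echoed just below provision one malloc-backed subsystem for the fuzzer. Condensed into plain commands (again with $SPDK as the checkout path; rpc.py defaults to /var/tmp/spdk.sock, which is reachable across the namespace boundary because UNIX sockets live in the filesystem):

ip netns exec cvl_0_0_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &  # shm id 0, full tracepoint mask, core 0
rpc=$SPDK/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192                                    # -u: in-capsule data size
$rpc bdev_malloc_create -b Malloc0 64 512                                       # 64 MiB RAM-backed bdev, 512 B blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001  # -a: allow any host
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420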
00:23:45.740 08:10:37 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:45.740 08:10:37 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:45.998 08:10:37 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:45.998 08:10:37 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@862 -- # return 0 00:23:45.998 08:10:37 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:45.998 08:10:37 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.998 08:10:37 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:45.998 08:10:37 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.998 08:10:37 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:23:45.998 08:10:37 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.998 08:10:37 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:45.998 Malloc0 00:23:45.998 08:10:37 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.998 08:10:37 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:45.998 08:10:37 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.998 08:10:37 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:45.998 08:10:37 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.998 08:10:37 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:45.998 08:10:37 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.998 08:10:37 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:45.998 08:10:37 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.998 08:10:37 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:45.998 08:10:37 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:45.998 08:10:37 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:23:45.998 08:10:37 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:45.998 08:10:37 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:23:45.998 08:10:37 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:24:18.100 Fuzzing completed. 
Shutting down the fuzz application 00:24:18.100 00:24:18.100 Dumping successful admin opcodes: 00:24:18.100 8, 9, 10, 24, 00:24:18.100 Dumping successful io opcodes: 00:24:18.100 0, 9, 00:24:18.100 NS: 0x200003aeff00 I/O qp, Total commands completed: 463887, total successful commands: 2682, random_seed: 3876061184 00:24:18.100 NS: 0x200003aeff00 admin qp, Total commands completed: 56319, total successful commands: 447, random_seed: 3077910144 00:24:18.100 08:11:08 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:24:18.100 Fuzzing completed. Shutting down the fuzz application 00:24:18.100 00:24:18.100 Dumping successful admin opcodes: 00:24:18.100 24, 00:24:18.100 Dumping successful io opcodes: 00:24:18.100 00:24:18.100 NS: 0x200003aeff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 4006747716 00:24:18.100 NS: 0x200003aeff00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 4006868049 00:24:18.100 08:11:09 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:18.100 08:11:09 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:18.100 08:11:09 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:18.100 08:11:09 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:18.100 08:11:09 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:24:18.100 08:11:09 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:24:18.100 08:11:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:18.100 08:11:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@117 -- # sync 00:24:18.100 08:11:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:18.100 08:11:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@120 -- # set +e 00:24:18.100 08:11:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:18.100 08:11:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:18.100 rmmod nvme_tcp 00:24:18.100 rmmod nvme_fabrics 00:24:18.100 rmmod nvme_keyring 00:24:18.100 08:11:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:18.100 08:11:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@124 -- # set -e 00:24:18.100 08:11:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@125 -- # return 0 00:24:18.100 08:11:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@489 -- # '[' -n 2006897 ']' 00:24:18.100 08:11:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@490 -- # killprocess 2006897 00:24:18.100 08:11:09 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@948 -- # '[' -z 2006897 ']' 00:24:18.100 08:11:09 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@952 -- # kill -0 2006897 00:24:18.100 08:11:09 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@953 -- # uname 00:24:18.100 08:11:09 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:18.100 08:11:09 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2006897 00:24:18.100 08:11:09 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:18.100 08:11:09 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 
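The two result blocks above come from two distinct nvme_fuzz passes: a 30-second randomized run against the live target, seeded with -S 123456 so any failure can be replayed, followed by a deterministic pass over the canned commands in example.json (which completed almost nothing here). Stripped of the workspace prefix, the two invocations were:

TRID='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420'
fuzz=$SPDK/test/app/fuzz/nvme_fuzz/nvme_fuzz
$fuzz -m 0x2 -t 30 -S 123456 -F "$TRID" -N -a                              # timed, seeded random fuzzing
$fuzz -m 0x2 -F "$TRID" -j $SPDK/test/app/fuzz/nvme_fuzz/example.json -a   # replay of the canned command set

The pass criterion is survival rather than the opcode tallies: the target absorbed roughly 464k random I/O commands and 56k admin commands and was still healthy enough to be shut down cleanly, which is why the section ends in END TEST rather than a backtrace.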
00:24:18.100 08:11:09 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2006897' 00:24:18.100 killing process with pid 2006897 00:24:18.100 08:11:09 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@967 -- # kill 2006897 00:24:18.100 08:11:09 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@972 -- # wait 2006897 00:24:18.100 08:11:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:18.100 08:11:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:18.100 08:11:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:18.100 08:11:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:18.100 08:11:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:18.100 08:11:09 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:18.100 08:11:09 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:18.100 08:11:09 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:20.632 08:11:11 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:20.632 08:11:11 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:24:20.632 00:24:20.632 real 0m36.732s 00:24:20.632 user 0m50.470s 00:24:20.632 sys 0m15.291s 00:24:20.632 08:11:11 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:20.632 08:11:11 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:20.632 ************************************ 00:24:20.632 END TEST nvmf_fuzz 00:24:20.632 ************************************ 00:24:20.632 08:11:11 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:20.632 08:11:11 nvmf_tcp -- nvmf/nvmf.sh@67 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:24:20.632 08:11:11 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:20.632 08:11:11 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:20.632 08:11:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:20.632 ************************************ 00:24:20.632 START TEST nvmf_multiconnection 00:24:20.632 ************************************ 00:24:20.632 08:11:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:24:20.632 * Looking for test storage... 
00:24:20.632 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:20.632 08:11:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:20.632 08:11:11 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:24:20.632 08:11:11 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:20.632 08:11:11 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:20.633 08:11:11 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:20.633 08:11:11 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:20.633 08:11:11 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:20.633 08:11:11 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:20.633 08:11:11 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:20.633 08:11:11 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:20.633 08:11:11 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:20.633 08:11:11 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:20.633 08:11:11 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:20.633 08:11:11 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:20.633 08:11:11 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:20.633 08:11:11 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:20.633 08:11:11 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:20.633 08:11:11 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:20.633 08:11:11 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:20.633 08:11:11 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:20.633 08:11:11 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:20.633 08:11:11 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:20.633 08:11:11 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:20.633 08:11:11 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@3 -- #
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:20.633 08:11:11 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:20.633 08:11:11 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:24:20.633 08:11:11 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:20.633 08:11:11 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@47 -- # : 0 00:24:20.633 08:11:11 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:20.633 08:11:11 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:20.633 08:11:11 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:20.633 08:11:11 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:20.633 08:11:11 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:20.633 08:11:11 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:20.633 08:11:11 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:20.633 08:11:11 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:20.633 08:11:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:20.633 08:11:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:20.633 08:11:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:24:20.633 08:11:11 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:24:20.633 08:11:11 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:20.633 08:11:11 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:20.633 08:11:11 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:20.633 08:11:11 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@410 -- # local -g
is_hw=no 00:24:20.633 08:11:11 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:20.633 08:11:11 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:20.633 08:11:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:20.633 08:11:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:20.633 08:11:11 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:20.633 08:11:11 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:20.633 08:11:11 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@285 -- # xtrace_disable 00:24:20.633 08:11:11 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:22.534 08:11:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:22.534 08:11:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@291 -- # pci_devs=() 00:24:22.534 08:11:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:22.534 08:11:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:22.534 08:11:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:22.534 08:11:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:22.534 08:11:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:22.534 08:11:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@295 -- # net_devs=() 00:24:22.534 08:11:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:22.534 08:11:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@296 -- # e810=() 00:24:22.534 08:11:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@296 -- # local -ga e810 00:24:22.534 08:11:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@297 -- # x722=() 00:24:22.534 08:11:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@297 -- # local -ga x722 00:24:22.534 08:11:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@298 -- # mlx=() 00:24:22.534 08:11:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@298 -- # local -ga mlx 00:24:22.534 08:11:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:22.534 08:11:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:22.534 08:11:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:22.534 08:11:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:22.534 08:11:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:22.534 08:11:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:22.534 08:11:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:22.534 08:11:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:22.534 08:11:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:22.534 08:11:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:22.534 08:11:13 
nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:22.534 08:11:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:22.534 08:11:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:22.534 08:11:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:22.534 08:11:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:22.534 08:11:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:22.534 08:11:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:22.534 08:11:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:22.534 08:11:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:22.534 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:22.534 08:11:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:22.534 08:11:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:22.534 08:11:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:22.534 08:11:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:22.534 08:11:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:22.534 08:11:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:22.534 08:11:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:22.534 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:22.534 08:11:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:22.534 08:11:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:22.534 08:11:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:22.535 08:11:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:22.535 08:11:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:22.535 08:11:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:22.535 08:11:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:22.535 08:11:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:22.535 08:11:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:22.535 08:11:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:22.535 08:11:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:22.535 08:11:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:22.535 08:11:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:22.535 08:11:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:22.535 08:11:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:22.535 08:11:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:22.535 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:22.535 08:11:13 
nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:22.535 08:11:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:22.535 08:11:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:22.535 08:11:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:22.535 08:11:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:22.535 08:11:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:22.535 08:11:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:22.535 08:11:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:22.535 08:11:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:22.535 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:22.535 08:11:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:22.535 08:11:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:22.535 08:11:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # is_hw=yes 00:24:22.535 08:11:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:22.535 08:11:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:22.535 08:11:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:22.535 08:11:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:22.535 08:11:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:22.535 08:11:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:22.535 08:11:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:22.535 08:11:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:22.535 08:11:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:22.535 08:11:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:22.535 08:11:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:22.535 08:11:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:22.535 08:11:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:22.535 08:11:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:22.535 08:11:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:22.535 08:11:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:22.535 08:11:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:22.535 08:11:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:22.535 08:11:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:22.535 08:11:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 
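The nvmf_tcp_init sequence running around this point is the same for every phy test in this log: one e810 port (cvl_0_0) is moved into a private namespace and addressed as the target, the other (cvl_0_1) stays in the root namespace as the initiator, and a firewall rule plus a ping in each direction prove the path before any NVMe traffic flows. As a standalone script it would be roughly:

ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port disappears from the root namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP replies
ping -c 1 10.0.0.2                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator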
00:24:22.535 08:11:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:22.535 08:11:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:22.535 08:11:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:22.535 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:22.535 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.268 ms 00:24:22.535 00:24:22.535 --- 10.0.0.2 ping statistics --- 00:24:22.535 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:22.535 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:24:22.535 08:11:13 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:22.535 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:22.535 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:24:22.535 00:24:22.535 --- 10.0.0.1 ping statistics --- 00:24:22.535 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:22.535 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:24:22.535 08:11:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:22.535 08:11:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@422 -- # return 0 00:24:22.535 08:11:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:22.535 08:11:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:22.535 08:11:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:22.535 08:11:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:22.535 08:11:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:22.535 08:11:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:22.535 08:11:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:22.535 08:11:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:24:22.535 08:11:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:22.535 08:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:22.535 08:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:22.535 08:11:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@481 -- # nvmfpid=2012501 00:24:22.535 08:11:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:22.535 08:11:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@482 -- # waitforlisten 2012501 00:24:22.535 08:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@829 -- # '[' -z 2012501 ']' 00:24:22.535 08:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:22.535 08:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:22.535 08:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:22.535 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
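With connectivity proven, multiconnection.sh starts a four-core target (-m 0xF, hence the Tracepoint Group Mask notice and the four reactor lines below) and provisions NVMF_SUBSYS=11 subsystems, each backed by its own 64 MiB malloc bdev, all listening on 10.0.0.2:4420. A condensed sketch of the loop that the following trace unrolls for cnode1, cnode2, and so on (the per-iteration generalization from the first two echoed iterations is an assumption):

ip netns exec cvl_0_0_ns_spdk $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
rpc=$SPDK/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
for i in $(seq 1 11); do
  $rpc bdev_malloc_create 64 512 -b Malloc$i                              # 64 MiB, 512 B blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i     # allow any host, serial SPDKn
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
done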
00:24:22.535 08:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:22.535 08:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:22.535 [2024-07-13 08:11:14.076224] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:24:22.535 [2024-07-13 08:11:14.076307] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:22.535 EAL: No free 2048 kB hugepages reported on node 1 00:24:22.535 [2024-07-13 08:11:14.149811] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:22.535 [2024-07-13 08:11:14.247041] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:22.535 [2024-07-13 08:11:14.247103] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:22.535 [2024-07-13 08:11:14.247120] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:22.535 [2024-07-13 08:11:14.247134] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:22.535 [2024-07-13 08:11:14.247147] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:22.535 [2024-07-13 08:11:14.247217] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:22.535 [2024-07-13 08:11:14.247245] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:22.535 [2024-07-13 08:11:14.247367] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:22.535 [2024-07-13 08:11:14.247370] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:22.794 08:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:22.794 08:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@862 -- # return 0 00:24:22.794 08:11:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:22.794 08:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:22.794 08:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:22.794 08:11:14 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:22.794 08:11:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:22.794 08:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.794 08:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:22.794 [2024-07-13 08:11:14.400512] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:22.794 08:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.794 08:11:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:24:22.794 08:11:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:22.794 08:11:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:22.794 08:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.794 
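With the app up (nvmfpid 2012501, one reactor per core in the 0xF mask), the first RPC creates the TCP transport; the "*** TCP Transport Init ***" notice just above is its acknowledgment. rpc_cmd in the trace is a thin wrapper around scripts/rpc.py, so stand-alone the call would look roughly like this — my reading of the flags, both carried over from NVMF_TRANSPORT_OPTS, is that -u sets the IO unit size and -o is the TCP-specific C2H-success toggle:

    # Equivalent of: rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        nvmf_create_transport -t tcp -o -u 8192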
08:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:22.794 Malloc1 00:24:22.794 08:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.794 08:11:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:24:22.794 08:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.794 08:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:22.794 08:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.794 08:11:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:22.794 08:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.794 08:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:22.794 08:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.794 08:11:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:22.794 08:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.794 08:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:22.794 [2024-07-13 08:11:14.457570] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:22.794 08:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.794 08:11:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:22.794 08:11:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:24:22.794 08:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.794 08:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:22.794 Malloc2 00:24:22.794 08:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.794 08:11:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:24:22.795 08:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.795 08:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:22.795 08:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.795 08:11:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:24:22.795 08:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.795 08:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:22.795 08:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.795 08:11:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:24:22.795 08:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.795 08:11:14 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:22.795 08:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:22.795 08:11:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:22.795 08:11:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:24:22.795 08:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:22.795 08:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:23.054 Malloc3 00:24:23.054 08:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.054 08:11:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:24:23.054 08:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.054 08:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:23.054 08:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.054 08:11:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:24:23.054 08:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.054 08:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:23.054 08:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.054 08:11:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:24:23.054 08:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.054 08:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:23.054 08:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.054 08:11:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:23.054 08:11:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:24:23.054 08:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.054 08:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:23.054 Malloc4 00:24:23.054 08:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.054 08:11:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:24:23.054 08:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.054 08:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:23.054 08:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.054 08:11:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:24:23.054 08:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.054 08:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 
00:24:23.054 08:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.054 08:11:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:24:23.054 08:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.054 08:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:23.054 08:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.054 08:11:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:23.054 08:11:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:24:23.054 08:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.054 08:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:23.054 Malloc5 00:24:23.054 08:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.054 08:11:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:24:23.054 08:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.054 08:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:23.054 08:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.054 08:11:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:24:23.054 08:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.054 08:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:23.054 08:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.054 08:11:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:24:23.054 08:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.054 08:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:23.054 08:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.054 08:11:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:23.054 08:11:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:24:23.054 08:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.054 08:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:23.054 Malloc6 00:24:23.054 08:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.054 08:11:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:24:23.054 08:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.054 08:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:23.054 08:11:14 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.054 08:11:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:24:23.054 08:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.054 08:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:23.054 08:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.054 08:11:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:24:23.054 08:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.054 08:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:23.054 08:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.054 08:11:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:23.054 08:11:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:24:23.054 08:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.054 08:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:23.054 Malloc7 00:24:23.054 08:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.055 08:11:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:24:23.055 08:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.055 08:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:23.055 08:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.055 08:11:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:24:23.055 08:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.055 08:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:23.055 08:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.055 08:11:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:24:23.055 08:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.055 08:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:23.055 08:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.055 08:11:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:23.055 08:11:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:24:23.055 08:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.055 08:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:23.055 Malloc8 00:24:23.055 08:11:14 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.055 08:11:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:24:23.055 08:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.055 08:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:23.055 08:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.312 08:11:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:24:23.312 08:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.312 08:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:23.312 08:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.312 08:11:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:24:23.312 08:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.312 08:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:23.312 08:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.312 08:11:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:23.312 08:11:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:24:23.312 08:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.312 08:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:23.312 Malloc9 00:24:23.312 08:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.312 08:11:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:24:23.312 08:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.312 08:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:23.312 08:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.312 08:11:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:24:23.312 08:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.312 08:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:23.312 08:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.312 08:11:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:24:23.312 08:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.312 08:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:23.312 08:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.312 08:11:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 
00:24:23.312 08:11:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:24:23.312 08:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.312 08:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:23.312 Malloc10 00:24:23.312 08:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.312 08:11:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:24:23.312 08:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.312 08:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:23.312 08:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.312 08:11:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:24:23.312 08:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.312 08:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:23.312 08:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.312 08:11:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:24:23.312 08:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.312 08:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:23.312 08:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.312 08:11:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:23.312 08:11:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:24:23.312 08:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.312 08:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:23.312 Malloc11 00:24:23.312 08:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.312 08:11:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:24:23.312 08:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.312 08:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:23.312 08:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.312 08:11:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:24:23.312 08:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.312 08:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:23.312 08:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.312 08:11:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 
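That completes eleven passes of the same four RPCs. Folded back into the loop multiconnection.sh actually runs (NVMF_SUBSYS is 11 here, per the seq 1 11 above; rpc_cmd again stands in for scripts/rpc.py):

    NVMF_SUBSYS=11
    for i in $(seq 1 $NVMF_SUBSYS); do
        rpc_cmd bdev_malloc_create 64 512 -b "Malloc$i"      # 64 MiB RAM-backed bdev, 512 B blocks
        rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"   # -a allow any host, -s serial
        rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
        rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
    done

Only cnode1's add_listener prints the "Target Listening" notice: the TCP socket on 10.0.0.2:4420 is created once and then shared by the ten later subsystems.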
00:24:23.312 08:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:23.312 08:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:23.312 08:11:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:23.312 08:11:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:24:23.312 08:11:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:23.312 08:11:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:24:23.876 08:11:15 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:24:23.876 08:11:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:24:23.876 08:11:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:23.876 08:11:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:23.876 08:11:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:24:26.423 08:11:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:26.423 08:11:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:26.423 08:11:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK1 00:24:26.423 08:11:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:26.423 08:11:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:26.423 08:11:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:24:26.423 08:11:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:26.423 08:11:17 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:24:26.680 08:11:18 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:24:26.680 08:11:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:24:26.680 08:11:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:26.680 08:11:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:26.680 08:11:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:24:28.576 08:11:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:28.576 08:11:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:28.576 08:11:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK2 00:24:28.576 08:11:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:28.576 08:11:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:28.576 
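The connect/verify pattern now repeating for each cnode boils down to: nvme-cli connects with the host NQN/ID the harness derives from the machine UUID, then waitforserial polls lsblk until a block device carrying the matching SPDKn serial appears. A condensed sketch for cnode1/SPDK1, mirroring the trace (including its cap of 15 retries):

    HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
    nvme connect --hostnqn="nqn.2014-08.org.nvmexpress:uuid:$HOSTID" --hostid="$HOSTID" \
        -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420

    i=0
    while (( i++ <= 15 )); do
        sleep 2
        # SPDK1 is the serial set via nvmf_create_subsystem -s; one matching row means the namespace is up
        (( $(lsblk -l -o NAME,SERIAL | grep -c SPDK1) == 1 )) && break
    done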
08:11:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:24:28.576 08:11:20 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:28.576 08:11:20 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:24:29.510 08:11:20 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:24:29.511 08:11:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:24:29.511 08:11:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:29.511 08:11:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:29.511 08:11:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:24:31.407 08:11:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:31.407 08:11:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:31.407 08:11:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK3 00:24:31.407 08:11:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:31.408 08:11:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:31.408 08:11:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:24:31.408 08:11:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:31.408 08:11:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:24:31.980 08:11:23 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:24:31.980 08:11:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:24:31.980 08:11:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:31.980 08:11:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:31.980 08:11:23 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:24:34.559 08:11:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:34.559 08:11:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:34.559 08:11:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK4 00:24:34.559 08:11:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:34.559 08:11:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:34.559 08:11:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:24:34.559 08:11:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:34.559 08:11:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:24:34.816 08:11:26 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:24:34.816 08:11:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:24:34.816 08:11:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:34.816 08:11:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:34.816 08:11:26 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:24:36.715 08:11:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:36.715 08:11:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:36.715 08:11:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK5 00:24:36.715 08:11:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:36.715 08:11:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:36.715 08:11:28 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:24:36.715 08:11:28 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:36.715 08:11:28 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:24:37.648 08:11:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:24:37.648 08:11:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:24:37.648 08:11:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:37.648 08:11:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:37.648 08:11:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:24:39.540 08:11:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:39.540 08:11:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:39.540 08:11:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK6 00:24:39.540 08:11:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:39.540 08:11:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:39.540 08:11:31 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:24:39.540 08:11:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:39.540 08:11:31 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:24:40.471 08:11:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:24:40.471 08:11:32 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:24:40.471 08:11:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:40.472 08:11:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:40.472 08:11:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:24:42.366 08:11:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:42.366 08:11:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:42.366 08:11:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK7 00:24:42.366 08:11:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:42.366 08:11:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:42.366 08:11:34 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:24:42.366 08:11:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:42.366 08:11:34 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:24:43.299 08:11:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:24:43.299 08:11:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:24:43.299 08:11:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:43.299 08:11:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:43.299 08:11:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:24:45.825 08:11:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:45.825 08:11:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:45.825 08:11:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK8 00:24:45.825 08:11:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:45.825 08:11:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:45.825 08:11:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:24:45.825 08:11:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:45.825 08:11:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:24:46.390 08:11:37 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:24:46.390 08:11:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:24:46.390 08:11:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:46.390 08:11:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 
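Three more connects (cnode9 through cnode11) follow below. Once all eleven are up, every namespace is visible as a kernel block device, and the fio job file generated further down addresses them as /dev/nvme0n1 through /dev/nvme10n1. A hand check before the I/O phase could look like this — not something the harness runs, just an assumed sanity probe with stock nvme-cli and coreutils:

    nvme list | grep -c SPDK      # expect 11 rows, serials SPDK1..SPDK11
    ls /dev/nvme*n1               # the block devices fio is about to read from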
00:24:46.390 08:11:37 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:24:48.288 08:11:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:48.288 08:11:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:48.288 08:11:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK9 00:24:48.288 08:11:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:48.288 08:11:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:48.288 08:11:39 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:24:48.288 08:11:39 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:48.288 08:11:39 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:24:49.221 08:11:40 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:24:49.221 08:11:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:24:49.221 08:11:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:49.221 08:11:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:49.221 08:11:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:24:51.782 08:11:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:51.782 08:11:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:51.782 08:11:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK10 00:24:51.782 08:11:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:51.782 08:11:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:51.782 08:11:42 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:24:51.782 08:11:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:51.782 08:11:42 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:24:52.040 08:11:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:24:52.040 08:11:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:24:52.040 08:11:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:52.040 08:11:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:52.040 08:11:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:24:53.935 08:11:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:53.935 08:11:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o 
NAME,SERIAL 00:24:53.935 08:11:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK11 00:24:53.935 08:11:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:53.935 08:11:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:53.935 08:11:45 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:24:53.936 08:11:45 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:24:53.936 [global] 00:24:53.936 thread=1 00:24:53.936 invalidate=1 00:24:53.936 rw=read 00:24:53.936 time_based=1 00:24:53.936 runtime=10 00:24:53.936 ioengine=libaio 00:24:53.936 direct=1 00:24:53.936 bs=262144 00:24:53.936 iodepth=64 00:24:53.936 norandommap=1 00:24:53.936 numjobs=1 00:24:53.936 00:24:53.936 [job0] 00:24:53.936 filename=/dev/nvme0n1 00:24:53.936 [job1] 00:24:53.936 filename=/dev/nvme10n1 00:24:53.936 [job2] 00:24:53.936 filename=/dev/nvme1n1 00:24:53.936 [job3] 00:24:53.936 filename=/dev/nvme2n1 00:24:53.936 [job4] 00:24:53.936 filename=/dev/nvme3n1 00:24:53.936 [job5] 00:24:53.936 filename=/dev/nvme4n1 00:24:53.936 [job6] 00:24:53.936 filename=/dev/nvme5n1 00:24:53.936 [job7] 00:24:53.936 filename=/dev/nvme6n1 00:24:53.936 [job8] 00:24:53.936 filename=/dev/nvme7n1 00:24:53.936 [job9] 00:24:53.936 filename=/dev/nvme8n1 00:24:54.193 [job10] 00:24:54.193 filename=/dev/nvme9n1 00:24:54.193 Could not set queue depth (nvme0n1) 00:24:54.193 Could not set queue depth (nvme10n1) 00:24:54.193 Could not set queue depth (nvme1n1) 00:24:54.193 Could not set queue depth (nvme2n1) 00:24:54.193 Could not set queue depth (nvme3n1) 00:24:54.193 Could not set queue depth (nvme4n1) 00:24:54.193 Could not set queue depth (nvme5n1) 00:24:54.193 Could not set queue depth (nvme6n1) 00:24:54.193 Could not set queue depth (nvme7n1) 00:24:54.193 Could not set queue depth (nvme8n1) 00:24:54.193 Could not set queue depth (nvme9n1) 00:24:54.451 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:54.451 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:54.451 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:54.451 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:54.451 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:54.451 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:54.451 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:54.451 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:54.451 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:54.451 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:54.451 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:24:54.451 fio-3.35 00:24:54.451 Starting 11 threads 00:25:06.653 00:25:06.653 job0: 
(groupid=0, jobs=1): err= 0: pid=2016765: Sat Jul 13 08:11:56 2024 00:25:06.653 read: IOPS=817, BW=204MiB/s (214MB/s)(2075MiB/10149msec) 00:25:06.653 slat (usec): min=8, max=140712, avg=781.60, stdev=4334.49 00:25:06.653 clat (usec): min=1828, max=461603, avg=77397.86, stdev=57198.14 00:25:06.653 lat (usec): min=1845, max=602315, avg=78179.46, stdev=57919.24 00:25:06.653 clat percentiles (msec): 00:25:06.653 | 1.00th=[ 5], 5.00th=[ 16], 10.00th=[ 24], 20.00th=[ 37], 00:25:06.653 | 30.00th=[ 47], 40.00th=[ 56], 50.00th=[ 66], 60.00th=[ 75], 00:25:06.653 | 70.00th=[ 88], 80.00th=[ 103], 90.00th=[ 148], 95.00th=[ 199], 00:25:06.653 | 99.00th=[ 249], 99.50th=[ 401], 99.90th=[ 430], 99.95th=[ 464], 00:25:06.653 | 99.99th=[ 464] 00:25:06.653 bw ( KiB/s): min=67584, max=320894, per=11.82%, avg=210860.70, stdev=82870.06, samples=20 00:25:06.653 iops : min= 264, max= 1253, avg=823.65, stdev=323.68, samples=20 00:25:06.653 lat (msec) : 2=0.01%, 4=0.41%, 10=2.55%, 20=4.60%, 50=26.08% 00:25:06.653 lat (msec) : 100=45.50%, 250=19.84%, 500=1.00% 00:25:06.653 cpu : usr=0.36%, sys=2.01%, ctx=1738, majf=0, minf=3721 00:25:06.653 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:25:06.653 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:06.653 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:06.653 issued rwts: total=8301,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:06.653 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:06.653 job1: (groupid=0, jobs=1): err= 0: pid=2016766: Sat Jul 13 08:11:56 2024 00:25:06.653 read: IOPS=737, BW=184MiB/s (193MB/s)(1866MiB/10123msec) 00:25:06.653 slat (usec): min=8, max=157798, avg=767.09, stdev=4477.09 00:25:06.653 clat (usec): min=1112, max=362429, avg=85958.59, stdev=59104.99 00:25:06.653 lat (usec): min=1149, max=362478, avg=86725.68, stdev=59655.90 00:25:06.653 clat percentiles (msec): 00:25:06.653 | 1.00th=[ 3], 5.00th=[ 9], 10.00th=[ 21], 20.00th=[ 34], 00:25:06.653 | 30.00th=[ 47], 40.00th=[ 62], 50.00th=[ 78], 60.00th=[ 89], 00:25:06.653 | 70.00th=[ 105], 80.00th=[ 134], 90.00th=[ 167], 95.00th=[ 209], 00:25:06.653 | 99.00th=[ 259], 99.50th=[ 275], 99.90th=[ 284], 99.95th=[ 284], 00:25:06.653 | 99.99th=[ 363] 00:25:06.653 bw ( KiB/s): min=68096, max=433152, per=10.62%, avg=189444.00, stdev=83578.82, samples=20 00:25:06.653 iops : min= 266, max= 1692, avg=740.00, stdev=326.47, samples=20 00:25:06.653 lat (msec) : 2=0.17%, 4=2.72%, 10=2.55%, 20=4.26%, 50=22.01% 00:25:06.653 lat (msec) : 100=35.40%, 250=31.61%, 500=1.29% 00:25:06.653 cpu : usr=0.32%, sys=1.83%, ctx=1950, majf=0, minf=4097 00:25:06.653 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:25:06.653 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:06.653 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:06.653 issued rwts: total=7464,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:06.653 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:06.653 job2: (groupid=0, jobs=1): err= 0: pid=2016767: Sat Jul 13 08:11:56 2024 00:25:06.653 read: IOPS=696, BW=174MiB/s (183MB/s)(1768MiB/10149msec) 00:25:06.653 slat (usec): min=9, max=180005, avg=639.24, stdev=4967.11 00:25:06.653 clat (usec): min=1937, max=514581, avg=91144.12, stdev=73557.34 00:25:06.653 lat (usec): min=1963, max=514610, avg=91783.36, stdev=74169.25 00:25:06.653 clat percentiles (msec): 00:25:06.653 | 1.00th=[ 6], 5.00th=[ 11], 10.00th=[ 14], 20.00th=[ 26], 
00:25:06.653 | 30.00th=[ 42], 40.00th=[ 54], 50.00th=[ 72], 60.00th=[ 97], 00:25:06.653 | 70.00th=[ 115], 80.00th=[ 148], 90.00th=[ 203], 95.00th=[ 232], 00:25:06.653 | 99.00th=[ 296], 99.50th=[ 338], 99.90th=[ 493], 99.95th=[ 493], 00:25:06.653 | 99.99th=[ 514] 00:25:06.653 bw ( KiB/s): min=66180, max=422400, per=10.06%, avg=179402.35, stdev=81707.97, samples=20 00:25:06.653 iops : min= 258, max= 1650, avg=700.75, stdev=319.23, samples=20 00:25:06.653 lat (msec) : 2=0.03%, 4=0.54%, 10=3.29%, 20=13.38%, 50=19.03% 00:25:06.653 lat (msec) : 100=25.61%, 250=34.80%, 500=3.29%, 750=0.03% 00:25:06.653 cpu : usr=0.20%, sys=1.95%, ctx=1917, majf=0, minf=4097 00:25:06.653 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:25:06.653 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:06.653 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:06.653 issued rwts: total=7072,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:06.653 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:06.653 job3: (groupid=0, jobs=1): err= 0: pid=2016768: Sat Jul 13 08:11:56 2024 00:25:06.653 read: IOPS=577, BW=144MiB/s (151MB/s)(1465MiB/10152msec) 00:25:06.653 slat (usec): min=9, max=269228, avg=1078.96, stdev=7082.93 00:25:06.653 clat (msec): min=2, max=472, avg=109.75, stdev=73.78 00:25:06.653 lat (msec): min=2, max=704, avg=110.83, stdev=74.99 00:25:06.653 clat percentiles (msec): 00:25:06.653 | 1.00th=[ 14], 5.00th=[ 26], 10.00th=[ 36], 20.00th=[ 49], 00:25:06.653 | 30.00th=[ 64], 40.00th=[ 77], 50.00th=[ 89], 60.00th=[ 111], 00:25:06.653 | 70.00th=[ 134], 80.00th=[ 167], 90.00th=[ 209], 95.00th=[ 234], 00:25:06.653 | 99.00th=[ 393], 99.50th=[ 422], 99.90th=[ 443], 99.95th=[ 468], 00:25:06.653 | 99.99th=[ 472] 00:25:06.653 bw ( KiB/s): min=38912, max=329216, per=8.32%, avg=148318.80, stdev=75590.82, samples=20 00:25:06.653 iops : min= 152, max= 1286, avg=579.35, stdev=295.30, samples=20 00:25:06.653 lat (msec) : 4=0.02%, 10=0.29%, 20=3.04%, 50=18.59%, 100=33.10% 00:25:06.653 lat (msec) : 250=41.69%, 500=3.28% 00:25:06.653 cpu : usr=0.22%, sys=1.42%, ctx=1401, majf=0, minf=4097 00:25:06.653 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:25:06.653 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:06.653 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:06.653 issued rwts: total=5858,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:06.653 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:06.653 job4: (groupid=0, jobs=1): err= 0: pid=2016769: Sat Jul 13 08:11:56 2024 00:25:06.653 read: IOPS=495, BW=124MiB/s (130MB/s)(1242MiB/10017msec) 00:25:06.653 slat (usec): min=9, max=138397, avg=1724.06, stdev=6258.17 00:25:06.653 clat (msec): min=2, max=513, avg=127.23, stdev=72.50 00:25:06.653 lat (msec): min=2, max=513, avg=128.95, stdev=73.57 00:25:06.653 clat percentiles (msec): 00:25:06.653 | 1.00th=[ 14], 5.00th=[ 44], 10.00th=[ 56], 20.00th=[ 67], 00:25:06.653 | 30.00th=[ 80], 40.00th=[ 95], 50.00th=[ 110], 60.00th=[ 124], 00:25:06.653 | 70.00th=[ 161], 80.00th=[ 192], 90.00th=[ 220], 95.00th=[ 241], 00:25:06.653 | 99.00th=[ 426], 99.50th=[ 439], 99.90th=[ 472], 99.95th=[ 493], 00:25:06.653 | 99.99th=[ 514] 00:25:06.653 bw ( KiB/s): min=39936, max=238592, per=7.04%, avg=125560.55, stdev=58466.51, samples=20 00:25:06.653 iops : min= 156, max= 932, avg=490.45, stdev=228.40, samples=20 00:25:06.653 lat (msec) : 4=0.12%, 10=0.56%, 20=1.57%, 50=4.99%, 
100=35.99% 00:25:06.653 lat (msec) : 250=53.12%, 500=3.60%, 750=0.04% 00:25:06.653 cpu : usr=0.26%, sys=1.70%, ctx=1178, majf=0, minf=4097 00:25:06.653 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:25:06.653 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:06.653 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:06.653 issued rwts: total=4968,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:06.653 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:06.653 job5: (groupid=0, jobs=1): err= 0: pid=2016770: Sat Jul 13 08:11:56 2024 00:25:06.653 read: IOPS=465, BW=116MiB/s (122MB/s)(1183MiB/10153msec) 00:25:06.653 slat (usec): min=9, max=148141, avg=1486.94, stdev=6914.74 00:25:06.653 clat (usec): min=1509, max=467187, avg=135785.42, stdev=76169.91 00:25:06.653 lat (usec): min=1524, max=521878, avg=137272.36, stdev=77530.24 00:25:06.653 clat percentiles (msec): 00:25:06.653 | 1.00th=[ 8], 5.00th=[ 23], 10.00th=[ 37], 20.00th=[ 71], 00:25:06.653 | 30.00th=[ 95], 40.00th=[ 112], 50.00th=[ 128], 60.00th=[ 142], 00:25:06.653 | 70.00th=[ 180], 80.00th=[ 199], 90.00th=[ 226], 95.00th=[ 264], 00:25:06.653 | 99.00th=[ 384], 99.50th=[ 426], 99.90th=[ 460], 99.95th=[ 468], 00:25:06.653 | 99.99th=[ 468] 00:25:06.653 bw ( KiB/s): min=38912, max=240128, per=6.70%, avg=119456.60, stdev=58449.45, samples=20 00:25:06.653 iops : min= 152, max= 938, avg=466.60, stdev=228.34, samples=20 00:25:06.653 lat (msec) : 2=0.06%, 4=0.42%, 10=1.06%, 20=2.54%, 50=9.64% 00:25:06.653 lat (msec) : 100=19.89%, 250=59.98%, 500=6.41% 00:25:06.653 cpu : usr=0.29%, sys=1.39%, ctx=1373, majf=0, minf=4097 00:25:06.653 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:25:06.654 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:06.654 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:06.654 issued rwts: total=4730,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:06.654 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:06.654 job6: (groupid=0, jobs=1): err= 0: pid=2016771: Sat Jul 13 08:11:56 2024 00:25:06.654 read: IOPS=697, BW=174MiB/s (183MB/s)(1747MiB/10015msec) 00:25:06.654 slat (usec): min=8, max=311330, avg=792.28, stdev=6688.31 00:25:06.654 clat (usec): min=883, max=569417, avg=90852.06, stdev=75125.00 00:25:06.654 lat (usec): min=905, max=582733, avg=91644.34, stdev=76222.20 00:25:06.654 clat percentiles (msec): 00:25:06.654 | 1.00th=[ 4], 5.00th=[ 7], 10.00th=[ 14], 20.00th=[ 31], 00:25:06.654 | 30.00th=[ 46], 40.00th=[ 58], 50.00th=[ 71], 60.00th=[ 84], 00:25:06.654 | 70.00th=[ 110], 80.00th=[ 150], 90.00th=[ 197], 95.00th=[ 228], 00:25:06.654 | 99.00th=[ 292], 99.50th=[ 451], 99.90th=[ 481], 99.95th=[ 506], 00:25:06.654 | 99.99th=[ 567] 00:25:06.654 bw ( KiB/s): min=44544, max=339968, per=9.94%, avg=177276.40, stdev=83910.54, samples=20 00:25:06.654 iops : min= 174, max= 1328, avg=692.45, stdev=327.72, samples=20 00:25:06.654 lat (usec) : 1000=0.04% 00:25:06.654 lat (msec) : 2=0.14%, 4=1.23%, 10=6.32%, 20=6.75%, 50=18.84% 00:25:06.654 lat (msec) : 100=33.44%, 250=30.91%, 500=2.26%, 750=0.06% 00:25:06.654 cpu : usr=0.39%, sys=1.92%, ctx=1826, majf=0, minf=4097 00:25:06.654 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:25:06.654 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:06.654 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:06.654 issued rwts: 
total=6989,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:06.654 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:06.654 job7: (groupid=0, jobs=1): err= 0: pid=2016772: Sat Jul 13 08:11:56 2024 00:25:06.654 read: IOPS=589, BW=147MiB/s (155MB/s)(1498MiB/10153msec) 00:25:06.654 slat (usec): min=9, max=267229, avg=1000.64, stdev=7006.85 00:25:06.654 clat (usec): min=954, max=452892, avg=107380.34, stdev=76395.83 00:25:06.654 lat (usec): min=1010, max=452920, avg=108380.98, stdev=77264.23 00:25:06.654 clat percentiles (msec): 00:25:06.654 | 1.00th=[ 5], 5.00th=[ 12], 10.00th=[ 21], 20.00th=[ 35], 00:25:06.654 | 30.00th=[ 62], 40.00th=[ 80], 50.00th=[ 96], 60.00th=[ 111], 00:25:06.654 | 70.00th=[ 134], 80.00th=[ 178], 90.00th=[ 213], 95.00th=[ 234], 00:25:06.654 | 99.00th=[ 380], 99.50th=[ 435], 99.90th=[ 447], 99.95th=[ 451], 00:25:06.654 | 99.99th=[ 451] 00:25:06.654 bw ( KiB/s): min=68608, max=238080, per=8.50%, avg=151685.90, stdev=47740.61, samples=20 00:25:06.654 iops : min= 268, max= 930, avg=592.50, stdev=186.46, samples=20 00:25:06.654 lat (usec) : 1000=0.03% 00:25:06.654 lat (msec) : 2=0.68%, 4=0.23%, 10=3.56%, 20=5.19%, 50=16.58% 00:25:06.654 lat (msec) : 100=26.93%, 250=44.16%, 500=2.64% 00:25:06.654 cpu : usr=0.27%, sys=1.57%, ctx=1559, majf=0, minf=4097 00:25:06.654 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:25:06.654 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:06.654 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:06.654 issued rwts: total=5990,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:06.654 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:06.654 job8: (groupid=0, jobs=1): err= 0: pid=2016773: Sat Jul 13 08:11:56 2024 00:25:06.654 read: IOPS=582, BW=146MiB/s (153MB/s)(1479MiB/10153msec) 00:25:06.654 slat (usec): min=9, max=149377, avg=1318.28, stdev=6248.88 00:25:06.654 clat (msec): min=2, max=506, avg=108.43, stdev=72.73 00:25:06.654 lat (msec): min=2, max=529, avg=109.75, stdev=73.53 00:25:06.654 clat percentiles (msec): 00:25:06.654 | 1.00th=[ 6], 5.00th=[ 12], 10.00th=[ 26], 20.00th=[ 46], 00:25:06.654 | 30.00th=[ 70], 40.00th=[ 84], 50.00th=[ 95], 60.00th=[ 109], 00:25:06.654 | 70.00th=[ 133], 80.00th=[ 165], 90.00th=[ 211], 95.00th=[ 234], 00:25:06.654 | 99.00th=[ 334], 99.50th=[ 414], 99.90th=[ 493], 99.95th=[ 506], 00:25:06.654 | 99.99th=[ 506] 00:25:06.654 bw ( KiB/s): min=76800, max=238080, per=8.40%, avg=149834.85, stdev=41231.50, samples=20 00:25:06.654 iops : min= 300, max= 930, avg=585.25, stdev=161.06, samples=20 00:25:06.654 lat (msec) : 4=0.30%, 10=3.53%, 20=3.97%, 50=13.66%, 100=32.86% 00:25:06.654 lat (msec) : 250=42.56%, 500=3.06%, 750=0.05% 00:25:06.654 cpu : usr=0.38%, sys=1.77%, ctx=1354, majf=0, minf=4097 00:25:06.654 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:25:06.654 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:06.654 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:06.654 issued rwts: total=5916,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:06.654 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:06.654 job9: (groupid=0, jobs=1): err= 0: pid=2016774: Sat Jul 13 08:11:56 2024 00:25:06.654 read: IOPS=859, BW=215MiB/s (225MB/s)(2181MiB/10150msec) 00:25:06.654 slat (usec): min=9, max=138135, avg=1023.32, stdev=3993.77 00:25:06.654 clat (usec): min=1790, max=515562, avg=73378.19, stdev=50403.32 00:25:06.654 lat (usec): 
min=1811, max=557325, avg=74401.51, stdev=50997.03 00:25:06.654 clat percentiles (msec): 00:25:06.654 | 1.00th=[ 7], 5.00th=[ 25], 10.00th=[ 33], 20.00th=[ 38], 00:25:06.654 | 30.00th=[ 45], 40.00th=[ 60], 50.00th=[ 69], 60.00th=[ 75], 00:25:06.654 | 70.00th=[ 84], 80.00th=[ 95], 90.00th=[ 115], 95.00th=[ 140], 00:25:06.654 | 99.00th=[ 284], 99.50th=[ 414], 99.90th=[ 485], 99.95th=[ 485], 00:25:06.654 | 99.99th=[ 514] 00:25:06.654 bw ( KiB/s): min=73216, max=411648, per=12.43%, avg=221717.10, stdev=75291.43, samples=20 00:25:06.654 iops : min= 286, max= 1608, avg=866.05, stdev=294.07, samples=20 00:25:06.654 lat (msec) : 2=0.05%, 4=0.39%, 10=1.65%, 20=1.64%, 50=29.84% 00:25:06.654 lat (msec) : 100=49.43%, 250=15.64%, 500=1.35%, 750=0.02% 00:25:06.654 cpu : usr=0.46%, sys=2.59%, ctx=1670, majf=0, minf=4097 00:25:06.654 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:25:06.654 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:06.654 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:06.654 issued rwts: total=8724,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:06.654 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:06.654 job10: (groupid=0, jobs=1): err= 0: pid=2016775: Sat Jul 13 08:11:56 2024 00:25:06.654 read: IOPS=470, BW=118MiB/s (123MB/s)(1182MiB/10044msec) 00:25:06.654 slat (usec): min=9, max=254519, avg=2053.95, stdev=8430.43 00:25:06.654 clat (msec): min=2, max=596, avg=133.76, stdev=69.03 00:25:06.654 lat (msec): min=2, max=596, avg=135.82, stdev=70.36 00:25:06.654 clat percentiles (msec): 00:25:06.654 | 1.00th=[ 44], 5.00th=[ 57], 10.00th=[ 65], 20.00th=[ 80], 00:25:06.654 | 30.00th=[ 91], 40.00th=[ 104], 50.00th=[ 114], 60.00th=[ 127], 00:25:06.654 | 70.00th=[ 163], 80.00th=[ 190], 90.00th=[ 222], 95.00th=[ 241], 00:25:06.654 | 99.00th=[ 380], 99.50th=[ 430], 99.90th=[ 498], 99.95th=[ 600], 00:25:06.654 | 99.99th=[ 600] 00:25:06.654 bw ( KiB/s): min=34304, max=224256, per=6.70%, avg=119441.70, stdev=52976.70, samples=20 00:25:06.654 iops : min= 134, max= 876, avg=466.55, stdev=206.96, samples=20 00:25:06.654 lat (msec) : 4=0.04%, 10=0.04%, 20=0.13%, 50=1.73%, 100=35.82% 00:25:06.654 lat (msec) : 250=58.17%, 500=3.98%, 750=0.08% 00:25:06.654 cpu : usr=0.30%, sys=1.65%, ctx=1038, majf=0, minf=4097 00:25:06.654 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:25:06.654 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:06.654 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:06.654 issued rwts: total=4729,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:06.654 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:06.654 00:25:06.654 Run status group 0 (all jobs): 00:25:06.654 READ: bw=1742MiB/s (1826MB/s), 116MiB/s-215MiB/s (122MB/s-225MB/s), io=17.3GiB (18.5GB), run=10015-10153msec 00:25:06.654 00:25:06.654 Disk stats (read/write): 00:25:06.654 nvme0n1: ios=16432/0, merge=0/0, ticks=1239560/0, in_queue=1239560, util=97.24% 00:25:06.654 nvme10n1: ios=14757/0, merge=0/0, ticks=1242126/0, in_queue=1242126, util=97.44% 00:25:06.654 nvme1n1: ios=13991/0, merge=0/0, ticks=1246938/0, in_queue=1246938, util=97.70% 00:25:06.654 nvme2n1: ios=11558/0, merge=0/0, ticks=1240720/0, in_queue=1240720, util=97.84% 00:25:06.654 nvme3n1: ios=9571/0, merge=0/0, ticks=1232758/0, in_queue=1232758, util=97.90% 00:25:06.654 nvme4n1: ios=9269/0, merge=0/0, ticks=1234379/0, in_queue=1234379, util=98.22% 00:25:06.654 nvme5n1: 
ios=13708/0, merge=0/0, ticks=1244205/0, in_queue=1244205, util=98.39% 00:25:06.654 nvme6n1: ios=11805/0, merge=0/0, ticks=1238308/0, in_queue=1238308, util=98.50% 00:25:06.654 nvme7n1: ios=11661/0, merge=0/0, ticks=1232599/0, in_queue=1232599, util=98.90% 00:25:06.654 nvme8n1: ios=17178/0, merge=0/0, ticks=1239838/0, in_queue=1239838, util=99.06% 00:25:06.654 nvme9n1: ios=9254/0, merge=0/0, ticks=1226423/0, in_queue=1226423, util=99.21% 00:25:06.654 08:11:56 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:25:06.654 [global] 00:25:06.654 thread=1 00:25:06.654 invalidate=1 00:25:06.654 rw=randwrite 00:25:06.654 time_based=1 00:25:06.654 runtime=10 00:25:06.654 ioengine=libaio 00:25:06.654 direct=1 00:25:06.654 bs=262144 00:25:06.654 iodepth=64 00:25:06.654 norandommap=1 00:25:06.654 numjobs=1 00:25:06.654 00:25:06.654 [job0] 00:25:06.654 filename=/dev/nvme0n1 00:25:06.654 [job1] 00:25:06.654 filename=/dev/nvme10n1 00:25:06.654 [job2] 00:25:06.654 filename=/dev/nvme1n1 00:25:06.654 [job3] 00:25:06.654 filename=/dev/nvme2n1 00:25:06.654 [job4] 00:25:06.654 filename=/dev/nvme3n1 00:25:06.654 [job5] 00:25:06.654 filename=/dev/nvme4n1 00:25:06.654 [job6] 00:25:06.654 filename=/dev/nvme5n1 00:25:06.654 [job7] 00:25:06.654 filename=/dev/nvme6n1 00:25:06.654 [job8] 00:25:06.654 filename=/dev/nvme7n1 00:25:06.654 [job9] 00:25:06.654 filename=/dev/nvme8n1 00:25:06.654 [job10] 00:25:06.654 filename=/dev/nvme9n1 00:25:06.654 Could not set queue depth (nvme0n1) 00:25:06.654 Could not set queue depth (nvme10n1) 00:25:06.654 Could not set queue depth (nvme1n1) 00:25:06.654 Could not set queue depth (nvme2n1) 00:25:06.654 Could not set queue depth (nvme3n1) 00:25:06.654 Could not set queue depth (nvme4n1) 00:25:06.654 Could not set queue depth (nvme5n1) 00:25:06.654 Could not set queue depth (nvme6n1) 00:25:06.654 Could not set queue depth (nvme7n1) 00:25:06.654 Could not set queue depth (nvme8n1) 00:25:06.654 Could not set queue depth (nvme9n1) 00:25:06.654 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:06.654 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:06.654 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:06.654 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:06.655 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:06.655 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:06.655 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:06.655 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:06.655 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:06.655 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:06.655 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:06.655 fio-3.35 00:25:06.655 Starting 11 
threads 00:25:16.634 00:25:16.634 job0: (groupid=0, jobs=1): err= 0: pid=2017791: Sat Jul 13 08:12:07 2024 00:25:16.634 write: IOPS=526, BW=132MiB/s (138MB/s)(1333MiB/10122msec); 0 zone resets 00:25:16.634 slat (usec): min=25, max=80325, avg=1574.41, stdev=3957.94 00:25:16.634 clat (msec): min=2, max=636, avg=119.84, stdev=72.00 00:25:16.634 lat (msec): min=3, max=642, avg=121.41, stdev=72.78 00:25:16.634 clat percentiles (msec): 00:25:16.634 | 1.00th=[ 9], 5.00th=[ 24], 10.00th=[ 50], 20.00th=[ 74], 00:25:16.634 | 30.00th=[ 94], 40.00th=[ 105], 50.00th=[ 117], 60.00th=[ 124], 00:25:16.634 | 70.00th=[ 138], 80.00th=[ 148], 90.00th=[ 176], 95.00th=[ 224], 00:25:16.634 | 99.00th=[ 510], 99.50th=[ 584], 99.90th=[ 617], 99.95th=[ 625], 00:25:16.634 | 99.99th=[ 634] 00:25:16.634 bw ( KiB/s): min=63488, max=220672, per=9.19%, avg=134912.00, stdev=41184.56, samples=20 00:25:16.634 iops : min= 248, max= 862, avg=527.00, stdev=160.88, samples=20 00:25:16.635 lat (msec) : 4=0.06%, 10=1.20%, 20=2.89%, 50=6.04%, 100=25.95% 00:25:16.635 lat (msec) : 250=60.89%, 500=1.89%, 750=1.09% 00:25:16.635 cpu : usr=1.52%, sys=1.86%, ctx=2284, majf=0, minf=1 00:25:16.635 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:25:16.635 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:16.635 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:16.635 issued rwts: total=0,5333,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:16.635 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:16.635 job1: (groupid=0, jobs=1): err= 0: pid=2017803: Sat Jul 13 08:12:07 2024 00:25:16.635 write: IOPS=501, BW=125MiB/s (132MB/s)(1268MiB/10106msec); 0 zone resets 00:25:16.635 slat (usec): min=17, max=112312, avg=1543.52, stdev=4273.20 00:25:16.635 clat (usec): min=1970, max=397307, avg=125907.06, stdev=66065.98 00:25:16.635 lat (msec): min=2, max=397, avg=127.45, stdev=66.89 00:25:16.635 clat percentiles (msec): 00:25:16.635 | 1.00th=[ 9], 5.00th=[ 24], 10.00th=[ 37], 20.00th=[ 56], 00:25:16.635 | 30.00th=[ 92], 40.00th=[ 118], 50.00th=[ 134], 60.00th=[ 146], 00:25:16.635 | 70.00th=[ 159], 80.00th=[ 176], 90.00th=[ 194], 95.00th=[ 226], 00:25:16.635 | 99.00th=[ 326], 99.50th=[ 372], 99.90th=[ 397], 99.95th=[ 397], 00:25:16.635 | 99.99th=[ 397] 00:25:16.635 bw ( KiB/s): min=49152, max=319488, per=8.73%, avg=128230.40, stdev=54979.08, samples=20 00:25:16.635 iops : min= 192, max= 1248, avg=500.90, stdev=214.76, samples=20 00:25:16.635 lat (msec) : 2=0.02%, 4=0.14%, 10=1.28%, 20=2.68%, 50=13.94% 00:25:16.635 lat (msec) : 100=14.02%, 250=65.20%, 500=2.72% 00:25:16.635 cpu : usr=1.42%, sys=1.73%, ctx=2462, majf=0, minf=1 00:25:16.635 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:25:16.635 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:16.635 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:16.635 issued rwts: total=0,5072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:16.635 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:16.635 job2: (groupid=0, jobs=1): err= 0: pid=2017819: Sat Jul 13 08:12:07 2024 00:25:16.635 write: IOPS=521, BW=130MiB/s (137MB/s)(1315MiB/10076msec); 0 zone resets 00:25:16.635 slat (usec): min=16, max=215509, avg=1484.53, stdev=5403.47 00:25:16.635 clat (usec): min=1762, max=607531, avg=121027.65, stdev=87976.37 00:25:16.635 lat (usec): min=1801, max=607575, avg=122512.18, stdev=89101.76 00:25:16.635 clat percentiles (msec): 00:25:16.635 | 
1.00th=[ 4], 5.00th=[ 16], 10.00th=[ 29], 20.00th=[ 52], 00:25:16.635 | 30.00th=[ 77], 40.00th=[ 91], 50.00th=[ 107], 60.00th=[ 128], 00:25:16.635 | 70.00th=[ 146], 80.00th=[ 171], 90.00th=[ 207], 95.00th=[ 288], 00:25:16.635 | 99.00th=[ 542], 99.50th=[ 584], 99.90th=[ 609], 99.95th=[ 609], 00:25:16.635 | 99.99th=[ 609] 00:25:16.635 bw ( KiB/s): min=22528, max=248832, per=9.06%, avg=132987.65, stdev=57559.71, samples=20 00:25:16.635 iops : min= 88, max= 972, avg=519.40, stdev=224.89, samples=20 00:25:16.635 lat (msec) : 2=0.02%, 4=1.26%, 10=1.50%, 20=3.88%, 50=12.80% 00:25:16.635 lat (msec) : 100=26.13%, 250=47.83%, 500=5.53%, 750=1.05% 00:25:16.635 cpu : usr=1.69%, sys=1.67%, ctx=2744, majf=0, minf=1 00:25:16.635 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:25:16.635 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:16.635 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:16.635 issued rwts: total=0,5258,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:16.635 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:16.635 job3: (groupid=0, jobs=1): err= 0: pid=2017831: Sat Jul 13 08:12:07 2024 00:25:16.635 write: IOPS=557, BW=139MiB/s (146MB/s)(1417MiB/10161msec); 0 zone resets 00:25:16.635 slat (usec): min=18, max=88215, avg=1202.02, stdev=3904.02 00:25:16.635 clat (usec): min=1478, max=585116, avg=113493.61, stdev=76423.89 00:25:16.635 lat (usec): min=1567, max=585160, avg=114695.63, stdev=77337.66 00:25:16.635 clat percentiles (msec): 00:25:16.635 | 1.00th=[ 7], 5.00th=[ 18], 10.00th=[ 32], 20.00th=[ 59], 00:25:16.635 | 30.00th=[ 66], 40.00th=[ 81], 50.00th=[ 103], 60.00th=[ 126], 00:25:16.635 | 70.00th=[ 146], 80.00th=[ 163], 90.00th=[ 186], 95.00th=[ 232], 00:25:16.635 | 99.00th=[ 447], 99.50th=[ 550], 99.90th=[ 584], 99.95th=[ 584], 00:25:16.635 | 99.99th=[ 584] 00:25:16.635 bw ( KiB/s): min=26624, max=251392, per=9.76%, avg=143408.50, stdev=60466.61, samples=20 00:25:16.635 iops : min= 104, max= 982, avg=560.15, stdev=236.22, samples=20 00:25:16.635 lat (msec) : 2=0.07%, 4=0.26%, 10=1.57%, 20=3.99%, 50=10.15% 00:25:16.635 lat (msec) : 100=32.81%, 250=47.58%, 500=2.75%, 750=0.81% 00:25:16.635 cpu : usr=1.65%, sys=2.04%, ctx=3179, majf=0, minf=1 00:25:16.635 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:25:16.635 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:16.635 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:16.635 issued rwts: total=0,5666,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:16.635 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:16.635 job4: (groupid=0, jobs=1): err= 0: pid=2017838: Sat Jul 13 08:12:07 2024 00:25:16.635 write: IOPS=408, BW=102MiB/s (107MB/s)(1036MiB/10153msec); 0 zone resets 00:25:16.635 slat (usec): min=23, max=76906, avg=1793.49, stdev=5018.51 00:25:16.635 clat (msec): min=3, max=580, avg=154.83, stdev=81.47 00:25:16.635 lat (msec): min=3, max=584, avg=156.62, stdev=82.28 00:25:16.635 clat percentiles (msec): 00:25:16.635 | 1.00th=[ 11], 5.00th=[ 30], 10.00th=[ 59], 20.00th=[ 107], 00:25:16.635 | 30.00th=[ 120], 40.00th=[ 128], 50.00th=[ 142], 60.00th=[ 163], 00:25:16.635 | 70.00th=[ 186], 80.00th=[ 207], 90.00th=[ 232], 95.00th=[ 288], 00:25:16.635 | 99.00th=[ 518], 99.50th=[ 550], 99.90th=[ 567], 99.95th=[ 575], 00:25:16.635 | 99.99th=[ 584] 00:25:16.635 bw ( KiB/s): min=26624, max=159232, per=7.11%, avg=104466.60, stdev=31625.44, samples=20 00:25:16.635 iops 
: min= 104, max= 622, avg=408.00, stdev=123.55, samples=20 00:25:16.635 lat (msec) : 4=0.10%, 10=0.72%, 20=2.32%, 50=4.66%, 100=10.69% 00:25:16.635 lat (msec) : 250=73.17%, 500=7.29%, 750=1.06% 00:25:16.635 cpu : usr=1.28%, sys=1.37%, ctx=2011, majf=0, minf=1 00:25:16.635 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:25:16.635 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:16.635 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:16.635 issued rwts: total=0,4144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:16.635 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:16.635 job5: (groupid=0, jobs=1): err= 0: pid=2017878: Sat Jul 13 08:12:07 2024 00:25:16.635 write: IOPS=505, BW=126MiB/s (133MB/s)(1281MiB/10124msec); 0 zone resets 00:25:16.635 slat (usec): min=18, max=94116, avg=1393.61, stdev=4375.36 00:25:16.635 clat (usec): min=1873, max=421053, avg=124897.51, stdev=73968.24 00:25:16.635 lat (usec): min=1916, max=424138, avg=126291.11, stdev=74914.04 00:25:16.635 clat percentiles (msec): 00:25:16.635 | 1.00th=[ 7], 5.00th=[ 21], 10.00th=[ 42], 20.00th=[ 69], 00:25:16.635 | 30.00th=[ 91], 40.00th=[ 108], 50.00th=[ 115], 60.00th=[ 123], 00:25:16.635 | 70.00th=[ 142], 80.00th=[ 167], 90.00th=[ 232], 95.00th=[ 264], 00:25:16.635 | 99.00th=[ 372], 99.50th=[ 380], 99.90th=[ 409], 99.95th=[ 418], 00:25:16.635 | 99.99th=[ 422] 00:25:16.635 bw ( KiB/s): min=49152, max=245780, per=8.82%, avg=129484.10, stdev=47447.07, samples=20 00:25:16.635 iops : min= 192, max= 960, avg=505.75, stdev=185.35, samples=20 00:25:16.635 lat (msec) : 2=0.02%, 4=0.33%, 10=1.74%, 20=2.75%, 50=7.91% 00:25:16.635 lat (msec) : 100=22.73%, 250=57.58%, 500=6.95% 00:25:16.635 cpu : usr=1.60%, sys=1.62%, ctx=2785, majf=0, minf=1 00:25:16.635 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:25:16.635 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:16.635 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:16.635 issued rwts: total=0,5122,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:16.635 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:16.635 job6: (groupid=0, jobs=1): err= 0: pid=2017898: Sat Jul 13 08:12:07 2024 00:25:16.635 write: IOPS=650, BW=163MiB/s (171MB/s)(1652MiB/10151msec); 0 zone resets 00:25:16.635 slat (usec): min=22, max=140683, avg=1043.51, stdev=4561.69 00:25:16.635 clat (usec): min=1508, max=587535, avg=96886.88, stdev=78437.08 00:25:16.635 lat (usec): min=1547, max=602406, avg=97930.39, stdev=79178.76 00:25:16.635 clat percentiles (msec): 00:25:16.635 | 1.00th=[ 10], 5.00th=[ 23], 10.00th=[ 33], 20.00th=[ 44], 00:25:16.635 | 30.00th=[ 50], 40.00th=[ 55], 50.00th=[ 64], 60.00th=[ 87], 00:25:16.635 | 70.00th=[ 128], 80.00th=[ 148], 90.00th=[ 188], 95.00th=[ 234], 00:25:16.635 | 99.00th=[ 422], 99.50th=[ 535], 99.90th=[ 584], 99.95th=[ 584], 00:25:16.635 | 99.99th=[ 592] 00:25:16.635 bw ( KiB/s): min=33280, max=330752, per=11.40%, avg=167490.80, stdev=78848.87, samples=20 00:25:16.635 iops : min= 130, max= 1292, avg=654.25, stdev=308.01, samples=20 00:25:16.635 lat (msec) : 2=0.02%, 4=0.11%, 10=1.06%, 20=3.19%, 50=26.16% 00:25:16.636 lat (msec) : 100=32.73%, 250=32.53%, 500=3.44%, 750=0.77% 00:25:16.636 cpu : usr=1.82%, sys=2.25%, ctx=3467, majf=0, minf=1 00:25:16.636 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:25:16.636 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:25:16.636 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:16.636 issued rwts: total=0,6606,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:16.636 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:16.636 job7: (groupid=0, jobs=1): err= 0: pid=2017915: Sat Jul 13 08:12:07 2024 00:25:16.636 write: IOPS=539, BW=135MiB/s (141MB/s)(1354MiB/10043msec); 0 zone resets 00:25:16.636 slat (usec): min=26, max=92857, avg=1491.60, stdev=4299.44 00:25:16.636 clat (msec): min=4, max=577, avg=117.11, stdev=80.21 00:25:16.636 lat (msec): min=4, max=581, avg=118.60, stdev=81.30 00:25:16.636 clat percentiles (msec): 00:25:16.636 | 1.00th=[ 15], 5.00th=[ 37], 10.00th=[ 43], 20.00th=[ 50], 00:25:16.636 | 30.00th=[ 68], 40.00th=[ 94], 50.00th=[ 107], 60.00th=[ 116], 00:25:16.636 | 70.00th=[ 134], 80.00th=[ 155], 90.00th=[ 203], 95.00th=[ 271], 00:25:16.636 | 99.00th=[ 472], 99.50th=[ 550], 99.90th=[ 575], 99.95th=[ 575], 00:25:16.636 | 99.99th=[ 575] 00:25:16.636 bw ( KiB/s): min=26624, max=350720, per=9.33%, avg=136970.00, stdev=69858.67, samples=20 00:25:16.636 iops : min= 104, max= 1370, avg=535.00, stdev=272.87, samples=20 00:25:16.636 lat (msec) : 10=0.33%, 20=1.63%, 50=18.19%, 100=25.16%, 250=48.82% 00:25:16.636 lat (msec) : 500=4.95%, 750=0.92% 00:25:16.636 cpu : usr=1.62%, sys=2.08%, ctx=2216, majf=0, minf=1 00:25:16.636 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:25:16.636 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:16.636 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:16.636 issued rwts: total=0,5414,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:16.636 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:16.636 job8: (groupid=0, jobs=1): err= 0: pid=2017960: Sat Jul 13 08:12:07 2024 00:25:16.636 write: IOPS=518, BW=130MiB/s (136MB/s)(1307MiB/10072msec); 0 zone resets 00:25:16.636 slat (usec): min=18, max=129685, avg=1230.12, stdev=3885.13 00:25:16.636 clat (msec): min=3, max=437, avg=122.05, stdev=75.18 00:25:16.636 lat (msec): min=3, max=441, avg=123.28, stdev=75.87 00:25:16.636 clat percentiles (msec): 00:25:16.636 | 1.00th=[ 8], 5.00th=[ 18], 10.00th=[ 39], 20.00th=[ 48], 00:25:16.636 | 30.00th=[ 80], 40.00th=[ 95], 50.00th=[ 117], 60.00th=[ 142], 00:25:16.636 | 70.00th=[ 155], 80.00th=[ 174], 90.00th=[ 209], 95.00th=[ 251], 00:25:16.636 | 99.00th=[ 388], 99.50th=[ 414], 99.90th=[ 435], 99.95th=[ 439], 00:25:16.636 | 99.99th=[ 439] 00:25:16.636 bw ( KiB/s): min=76800, max=319488, per=9.00%, avg=132172.80, stdev=52342.56, samples=20 00:25:16.636 iops : min= 300, max= 1248, avg=516.30, stdev=204.46, samples=20 00:25:16.636 lat (msec) : 4=0.06%, 10=1.84%, 20=3.79%, 50=14.79%, 100=21.87% 00:25:16.636 lat (msec) : 250=52.58%, 500=5.07% 00:25:16.636 cpu : usr=1.53%, sys=1.84%, ctx=3086, majf=0, minf=1 00:25:16.636 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:25:16.636 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:16.636 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:16.636 issued rwts: total=0,5226,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:16.636 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:16.636 job9: (groupid=0, jobs=1): err= 0: pid=2017962: Sat Jul 13 08:12:07 2024 00:25:16.636 write: IOPS=449, BW=112MiB/s (118MB/s)(1142MiB/10158msec); 0 zone resets 00:25:16.636 slat (usec): min=16, max=107935, avg=1729.87, stdev=4703.70 00:25:16.636 clat (msec): 
min=2, max=584, avg=140.50, stdev=75.56 00:25:16.636 lat (msec): min=2, max=584, avg=142.23, stdev=76.39 00:25:16.636 clat percentiles (msec): 00:25:16.636 | 1.00th=[ 14], 5.00th=[ 39], 10.00th=[ 62], 20.00th=[ 89], 00:25:16.636 | 30.00th=[ 110], 40.00th=[ 124], 50.00th=[ 136], 60.00th=[ 144], 00:25:16.636 | 70.00th=[ 157], 80.00th=[ 174], 90.00th=[ 215], 95.00th=[ 275], 00:25:16.636 | 99.00th=[ 510], 99.50th=[ 550], 99.90th=[ 584], 99.95th=[ 584], 00:25:16.636 | 99.99th=[ 584] 00:25:16.636 bw ( KiB/s): min=24576, max=197632, per=7.85%, avg=115335.90, stdev=38307.37, samples=20 00:25:16.636 iops : min= 96, max= 772, avg=450.50, stdev=149.67, samples=20 00:25:16.636 lat (msec) : 4=0.07%, 10=0.53%, 20=1.42%, 50=4.84%, 100=17.60% 00:25:16.636 lat (msec) : 250=68.81%, 500=5.71%, 750=1.03% 00:25:16.636 cpu : usr=1.36%, sys=1.54%, ctx=2134, majf=0, minf=1 00:25:16.636 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:25:16.636 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:16.636 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:16.636 issued rwts: total=0,4569,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:16.636 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:16.636 job10: (groupid=0, jobs=1): err= 0: pid=2017963: Sat Jul 13 08:12:07 2024 00:25:16.636 write: IOPS=581, BW=145MiB/s (152MB/s)(1470MiB/10122msec); 0 zone resets 00:25:16.636 slat (usec): min=22, max=108379, avg=1025.49, stdev=3207.36 00:25:16.636 clat (usec): min=1438, max=357409, avg=109063.34, stdev=60550.56 00:25:16.636 lat (usec): min=1476, max=433169, avg=110088.83, stdev=61165.52 00:25:16.636 clat percentiles (msec): 00:25:16.636 | 1.00th=[ 8], 5.00th=[ 23], 10.00th=[ 37], 20.00th=[ 58], 00:25:16.636 | 30.00th=[ 74], 40.00th=[ 89], 50.00th=[ 104], 60.00th=[ 115], 00:25:16.636 | 70.00th=[ 136], 80.00th=[ 155], 90.00th=[ 180], 95.00th=[ 218], 00:25:16.636 | 99.00th=[ 292], 99.50th=[ 330], 99.90th=[ 355], 99.95th=[ 355], 00:25:16.636 | 99.99th=[ 359] 00:25:16.636 bw ( KiB/s): min=90112, max=222720, per=10.14%, avg=148940.80, stdev=40449.16, samples=20 00:25:16.636 iops : min= 352, max= 870, avg=581.80, stdev=158.00, samples=20 00:25:16.636 lat (msec) : 2=0.12%, 4=0.32%, 10=1.24%, 20=2.74%, 50=12.67% 00:25:16.636 lat (msec) : 100=30.27%, 250=49.33%, 500=3.32% 00:25:16.636 cpu : usr=1.97%, sys=2.08%, ctx=3751, majf=0, minf=1 00:25:16.636 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:25:16.636 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:16.636 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:16.636 issued rwts: total=0,5881,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:16.636 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:16.636 00:25:16.636 Run status group 0 (all jobs): 00:25:16.636 WRITE: bw=1434MiB/s (1504MB/s), 102MiB/s-163MiB/s (107MB/s-171MB/s), io=14.2GiB (15.3GB), run=10043-10161msec 00:25:16.636 00:25:16.636 Disk stats (read/write): 00:25:16.636 nvme0n1: ios=49/10452, merge=0/0, ticks=38/1208716, in_queue=1208754, util=97.00% 00:25:16.636 nvme10n1: ios=43/9915, merge=0/0, ticks=1421/1204718, in_queue=1206139, util=99.82% 00:25:16.636 nvme1n1: ios=44/10238, merge=0/0, ticks=1931/1215312, in_queue=1217243, util=100.00% 00:25:16.636 nvme2n1: ios=45/11314, merge=0/0, ticks=1147/1245586, in_queue=1246733, util=99.91% 00:25:16.636 nvme3n1: ios=43/8286, merge=0/0, ticks=1859/1239290, in_queue=1241149, util=100.00% 00:25:16.636 
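For orientation: the randwrite pass summarized above was driven by the fio-wrapper call shown before the job listing (-p nvmf -i 262144 -d 64 -t randwrite -r 10). A minimal standalone fio invocation with the same workload shape, reconstructed as a sketch from the [global] options dumped earlier (device name taken from this run; note this writes to, and is destructive to, that device), would be roughly:

  fio --name=job0 --filename=/dev/nvme0n1 \
      --rw=randwrite --bs=262144 --iodepth=64 \
      --ioengine=libaio --direct=1 --numjobs=1 \
      --norandommap=1 --invalidate=1 --thread \
      --time_based=1 --runtime=10

The harness runs eleven such jobs in one fio process, one per /dev/nvmeXn1 namespace, which is why the summary above aggregates job0 through job10.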
nvme4n1: ios=48/10039, merge=0/0, ticks=1343/1210489, in_queue=1211832, util=100.00% 00:25:16.636 nvme5n1: ios=45/13025, merge=0/0, ticks=3642/1186166, in_queue=1189808, util=100.00% 00:25:16.636 nvme6n1: ios=25/10417, merge=0/0, ticks=1000/1209127, in_queue=1210127, util=99.89% 00:25:16.636 nvme7n1: ios=0/10167, merge=0/0, ticks=0/1222220, in_queue=1222220, util=98.71% 00:25:16.636 nvme8n1: ios=41/9129, merge=0/0, ticks=885/1242370, in_queue=1243255, util=99.90% 00:25:16.636 nvme9n1: ios=0/11551, merge=0/0, ticks=0/1220666, in_queue=1220666, util=99.05% 00:25:16.636 08:12:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:25:16.636 08:12:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:25:16.636 08:12:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:16.636 08:12:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:25:16.636 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:25:16.636 08:12:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:25:16.636 08:12:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:16.637 08:12:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:16.637 08:12:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK1 00:25:16.637 08:12:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:16.637 08:12:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK1 00:25:16.637 08:12:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:16.637 08:12:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:16.637 08:12:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:16.637 08:12:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:16.637 08:12:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:16.637 08:12:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:16.637 08:12:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:25:16.637 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:25:16.637 08:12:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:25:16.637 08:12:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:16.637 08:12:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:16.637 08:12:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK2 00:25:16.637 08:12:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:16.637 08:12:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK2 00:25:16.637 08:12:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:16.637 08:12:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:25:16.637 08:12:07 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:16.637 08:12:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:16.637 08:12:07 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:16.637 08:12:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:16.637 08:12:07 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:25:16.637 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:25:16.637 08:12:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:25:16.637 08:12:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:16.637 08:12:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:16.637 08:12:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK3 00:25:16.637 08:12:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:16.637 08:12:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK3 00:25:16.637 08:12:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:16.637 08:12:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:25:16.637 08:12:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:16.637 08:12:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:16.637 08:12:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:16.637 08:12:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:16.637 08:12:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:25:16.895 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:25:16.895 08:12:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:25:16.895 08:12:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:16.895 08:12:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:16.895 08:12:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK4 00:25:16.895 08:12:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:16.895 08:12:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK4 00:25:16.895 08:12:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:16.895 08:12:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:25:16.895 08:12:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:16.895 08:12:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:16.895 08:12:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:16.895 08:12:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:16.895 08:12:08 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:25:17.206 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:25:17.206 08:12:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:25:17.206 08:12:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:17.206 08:12:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:17.206 08:12:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK5 00:25:17.206 08:12:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:17.206 08:12:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK5 00:25:17.206 08:12:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:17.206 08:12:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:25:17.206 08:12:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:17.206 08:12:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:17.206 08:12:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:17.206 08:12:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:17.206 08:12:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:25:17.206 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:25:17.463 08:12:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:25:17.463 08:12:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:17.463 08:12:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:17.463 08:12:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK6 00:25:17.463 08:12:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:17.463 08:12:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK6 00:25:17.463 08:12:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:17.463 08:12:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:25:17.463 08:12:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:17.463 08:12:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:17.463 08:12:08 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:17.463 08:12:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:17.463 08:12:08 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:25:17.463 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:25:17.463 08:12:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:25:17.463 08:12:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:17.463 08:12:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk 
-o NAME,SERIAL 00:25:17.463 08:12:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK7 00:25:17.721 08:12:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:17.721 08:12:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK7 00:25:17.721 08:12:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:17.721 08:12:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:25:17.721 08:12:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:17.721 08:12:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:17.721 08:12:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:17.721 08:12:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:17.721 08:12:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:25:17.721 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:25:17.721 08:12:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:25:17.721 08:12:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:17.721 08:12:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:17.721 08:12:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK8 00:25:17.721 08:12:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:17.721 08:12:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK8 00:25:17.721 08:12:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:17.721 08:12:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:25:17.721 08:12:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:17.721 08:12:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:17.978 08:12:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:17.978 08:12:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:17.978 08:12:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:25:17.978 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:25:17.978 08:12:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:25:17.978 08:12:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:17.978 08:12:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:17.978 08:12:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK9 00:25:17.978 08:12:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:17.978 08:12:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK9 00:25:17.978 08:12:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:17.978 08:12:09 
nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:25:17.978 08:12:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:17.978 08:12:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:17.978 08:12:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:17.978 08:12:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:17.978 08:12:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:25:17.978 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:25:17.978 08:12:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:25:17.978 08:12:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:17.978 08:12:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:17.979 08:12:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK10 00:25:17.979 08:12:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:17.979 08:12:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK10 00:25:17.979 08:12:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:17.979 08:12:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:25:17.979 08:12:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:17.979 08:12:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:17.979 08:12:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:17.979 08:12:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:17.979 08:12:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:25:18.236 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:25:18.236 08:12:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:25:18.236 08:12:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:18.236 08:12:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:18.236 08:12:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK11 00:25:18.236 08:12:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:18.236 08:12:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK11 00:25:18.236 08:12:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:18.236 08:12:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:25:18.236 08:12:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:18.236 08:12:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:18.236 08:12:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:18.236 08:12:09 
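Condensed, the teardown being traced here is one fixed pattern applied to each of the 11 subsystems; the loop in multiconnection.sh (waitforserial_disconnect and rpc_cmd are the harness helpers seen in the trace) is equivalent to:

  for i in $(seq 1 "$NVMF_SUBSYS"); do
      # drop the initiator-side controller for this subsystem
      nvme disconnect -n "nqn.2016-06.io.spdk:cnode$i"
      # poll lsblk until serial SPDK$i no longer appears
      waitforserial_disconnect "SPDK$i"
      # then remove the subsystem from the SPDK target via RPC
      rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"
  done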
nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:25:18.236 08:12:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:25:18.236 08:12:09 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:25:18.236 08:12:09 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:18.236 08:12:09 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@117 -- # sync 00:25:18.236 08:12:09 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:18.236 08:12:09 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@120 -- # set +e 00:25:18.236 08:12:09 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:18.236 08:12:09 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:18.236 rmmod nvme_tcp 00:25:18.236 rmmod nvme_fabrics 00:25:18.236 rmmod nvme_keyring 00:25:18.236 08:12:09 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:18.236 08:12:09 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@124 -- # set -e 00:25:18.236 08:12:09 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@125 -- # return 0 00:25:18.236 08:12:09 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@489 -- # '[' -n 2012501 ']' 00:25:18.236 08:12:09 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@490 -- # killprocess 2012501 00:25:18.236 08:12:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@948 -- # '[' -z 2012501 ']' 00:25:18.236 08:12:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@952 -- # kill -0 2012501 00:25:18.236 08:12:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@953 -- # uname 00:25:18.236 08:12:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:18.236 08:12:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2012501 00:25:18.236 08:12:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:18.236 08:12:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:18.236 08:12:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2012501' 00:25:18.236 killing process with pid 2012501 00:25:18.237 08:12:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@967 -- # kill 2012501 00:25:18.237 08:12:09 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@972 -- # wait 2012501 00:25:18.800 08:12:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:18.800 08:12:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:18.800 08:12:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:18.800 08:12:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:18.800 08:12:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:18.801 08:12:10 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:18.801 08:12:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:18.801 08:12:10 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:21.329 08:12:12 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@279 -- # ip -4 addr 
flush cvl_0_1 00:25:21.329 00:25:21.329 real 1m0.603s 00:25:21.329 user 3m21.538s 00:25:21.329 sys 0m24.722s 00:25:21.329 08:12:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:21.329 08:12:12 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:21.329 ************************************ 00:25:21.329 END TEST nvmf_multiconnection 00:25:21.329 ************************************ 00:25:21.329 08:12:12 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:21.329 08:12:12 nvmf_tcp -- nvmf/nvmf.sh@68 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:25:21.329 08:12:12 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:21.329 08:12:12 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:21.329 08:12:12 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:21.329 ************************************ 00:25:21.329 START TEST nvmf_initiator_timeout 00:25:21.329 ************************************ 00:25:21.329 08:12:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:25:21.329 * Looking for test storage... 00:25:21.329 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:21.329 08:12:12 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:21.329 08:12:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:25:21.329 08:12:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:21.329 08:12:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:21.329 08:12:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:21.329 08:12:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:21.329 08:12:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:21.329 08:12:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:21.329 08:12:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:21.329 08:12:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:21.329 08:12:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:21.329 08:12:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:21.329 08:12:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:21.329 08:12:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:21.329 08:12:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:21.329 08:12:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:21.329 08:12:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:21.330 08:12:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:21.330 08:12:12 nvmf_tcp.nvmf_initiator_timeout 
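The long stretch of sourced defaults above boils down to a handful of values that matter for the rest of this test; condensed from what the trace echoes:

  NVMF_PORT=4420 NVMF_SECOND_PORT=4421 NVMF_THIRD_PORT=4422
  NVMF_IP_PREFIX=192.168.100
  NVMF_SERIAL=SPDKISFASTANDAWESOME
  NVME_HOSTNQN=$(nvme gen-hostnqn)    # uuid-based host NQN, fixed per machine
  NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
  NET_TYPE=phy                        # real NICs on this rig, not veth pairs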
-- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:21.330 08:12:12 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:21.330 08:12:12 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:21.330 08:12:12 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:21.330 08:12:12 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:21.330 08:12:12 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:21.330 08:12:12 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:21.330 08:12:12 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:25:21.330 08:12:12 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:21.330 08:12:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@47 -- # : 0 00:25:21.330 08:12:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:21.330 08:12:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:21.330 08:12:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:21.330 08:12:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:21.330 08:12:12 nvmf_tcp.nvmf_initiator_timeout -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:21.330 08:12:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:21.330 08:12:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:21.330 08:12:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:21.330 08:12:12 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:21.330 08:12:12 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:21.330 08:12:12 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:25:21.330 08:12:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:21.330 08:12:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:21.330 08:12:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:21.330 08:12:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:21.330 08:12:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:21.330 08:12:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:21.330 08:12:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:21.330 08:12:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:21.330 08:12:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:21.330 08:12:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:21.330 08:12:12 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@285 -- # xtrace_disable 00:25:21.330 08:12:12 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:22.703 08:12:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:22.703 08:12:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # pci_devs=() 00:25:22.703 08:12:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:22.703 08:12:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:22.703 08:12:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:22.703 08:12:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:22.703 08:12:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:22.703 08:12:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # net_devs=() 00:25:22.703 08:12:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:22.703 08:12:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # e810=() 00:25:22.703 08:12:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # local -ga e810 00:25:22.703 08:12:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # x722=() 00:25:22.703 08:12:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # local -ga x722 00:25:22.703 08:12:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # mlx=() 00:25:22.703 08:12:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # local -ga mlx 00:25:22.703 08:12:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:22.703 08:12:14 
nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:22.703 08:12:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:22.703 08:12:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:22.703 08:12:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:22.703 08:12:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:22.703 08:12:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:22.703 08:12:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:22.703 08:12:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:22.703 08:12:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:22.703 08:12:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:22.703 08:12:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:22.703 08:12:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:22.703 08:12:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:22.703 08:12:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:22.703 08:12:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:22.703 08:12:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:22.703 08:12:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:22.703 08:12:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:22.703 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:22.703 08:12:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:22.703 08:12:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:22.703 08:12:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:22.703 08:12:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:22.703 08:12:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:22.703 08:12:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:22.703 08:12:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:22.703 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:22.703 08:12:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:22.703 08:12:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:22.703 08:12:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:22.703 08:12:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:22.703 08:12:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:22.703 08:12:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:22.703 
08:12:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:22.703 08:12:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:22.703 08:12:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:22.703 08:12:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:22.703 08:12:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:22.703 08:12:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:22.703 08:12:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:22.703 08:12:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:22.703 08:12:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:22.703 08:12:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:22.703 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:22.703 08:12:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:22.703 08:12:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:22.703 08:12:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:22.703 08:12:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:22.703 08:12:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:22.703 08:12:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:22.703 08:12:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:22.703 08:12:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:22.703 08:12:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:22.703 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:22.703 08:12:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:22.703 08:12:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:22.703 08:12:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # is_hw=yes 00:25:22.703 08:12:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:22.703 08:12:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:22.703 08:12:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:22.703 08:12:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:22.704 08:12:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:22.704 08:12:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:22.704 08:12:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:22.704 08:12:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:22.704 08:12:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:22.704 08:12:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@240 -- # 
NVMF_SECOND_TARGET_IP= 00:25:22.704 08:12:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:22.704 08:12:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:22.704 08:12:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:22.704 08:12:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:22.704 08:12:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:22.704 08:12:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:22.962 08:12:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:22.962 08:12:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:22.962 08:12:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:22.962 08:12:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:22.962 08:12:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:22.962 08:12:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:22.962 08:12:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:22.962 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:22.962 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.194 ms 00:25:22.962 00:25:22.962 --- 10.0.0.2 ping statistics --- 00:25:22.962 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:22.962 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:25:22.962 08:12:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:22.962 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:22.962 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.094 ms 00:25:22.962 00:25:22.962 --- 10.0.0.1 ping statistics --- 00:25:22.962 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:22.962 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:25:22.962 08:12:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:22.962 08:12:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # return 0 00:25:22.962 08:12:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:22.962 08:12:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:22.962 08:12:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:22.962 08:12:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:22.962 08:12:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:22.962 08:12:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:22.962 08:12:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:22.962 08:12:14 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:25:22.962 08:12:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:22.962 08:12:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:22.962 08:12:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:22.962 08:12:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@481 -- # nvmfpid=2021910 00:25:22.962 08:12:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:22.962 08:12:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # waitforlisten 2021910 00:25:22.962 08:12:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@829 -- # '[' -z 2021910 ']' 00:25:22.962 08:12:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:22.962 08:12:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:22.962 08:12:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:22.962 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:22.962 08:12:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:22.962 08:12:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:22.962 [2024-07-13 08:12:14.582515] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
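
[editor's note] For reference, the test topology that nvmftestinit assembles in the trace above (nvmf/common.sh@229-268) can be reproduced by hand. This is a minimal sketch using the interface names and 10.0.0.0/24 addressing shown in the log; any other NIC naming would be an assumption:

    # Target port lives in its own namespace; initiator port stays in the root namespace.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # admit NVMe/TCP
    ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

Launching the target inside the namespace (the "ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt" invocation that follows) is what lets a single machine act as both initiator and target over two real E810 ports.
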
00:25:22.962 [2024-07-13 08:12:14.582615] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:22.962 EAL: No free 2048 kB hugepages reported on node 1 00:25:22.962 [2024-07-13 08:12:14.652149] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:23.221 [2024-07-13 08:12:14.743661] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:23.221 [2024-07-13 08:12:14.743724] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:23.221 [2024-07-13 08:12:14.743750] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:23.221 [2024-07-13 08:12:14.743763] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:23.221 [2024-07-13 08:12:14.743783] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:23.221 [2024-07-13 08:12:14.743882] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:23.221 [2024-07-13 08:12:14.743917] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:23.221 [2024-07-13 08:12:14.744033] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:25:23.221 [2024-07-13 08:12:14.744036] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:23.221 08:12:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:23.221 08:12:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@862 -- # return 0 00:25:23.221 08:12:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:23.221 08:12:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:23.221 08:12:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:23.221 08:12:14 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:23.221 08:12:14 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:25:23.221 08:12:14 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:23.221 08:12:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:23.221 08:12:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:23.221 Malloc0 00:25:23.221 08:12:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:23.221 08:12:14 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:25:23.221 08:12:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:23.221 08:12:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:23.221 Delay0 00:25:23.221 08:12:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:23.221 08:12:14 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:23.221 08:12:14 
nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:23.221 08:12:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:23.221 [2024-07-13 08:12:14.921136] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:23.221 08:12:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:23.221 08:12:14 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:25:23.221 08:12:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:23.221 08:12:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:23.221 08:12:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:23.221 08:12:14 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:23.221 08:12:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:23.221 08:12:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:23.221 08:12:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:23.221 08:12:14 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:23.221 08:12:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:23.221 08:12:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:23.221 [2024-07-13 08:12:14.949445] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:23.221 08:12:14 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:23.221 08:12:14 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:25:24.153 08:12:15 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:25:24.153 08:12:15 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1198 -- # local i=0 00:25:24.153 08:12:15 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:24.153 08:12:15 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:24.153 08:12:15 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1205 -- # sleep 2 00:25:26.048 08:12:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:26.048 08:12:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:26.048 08:12:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:25:26.048 08:12:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:26.048 08:12:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:26.048 08:12:17 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # return 0 00:25:26.048 08:12:17 
nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=2022217 00:25:26.048 08:12:17 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:25:26.048 08:12:17 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:25:26.048 [global] 00:25:26.048 thread=1 00:25:26.048 invalidate=1 00:25:26.048 rw=write 00:25:26.048 time_based=1 00:25:26.048 runtime=60 00:25:26.048 ioengine=libaio 00:25:26.048 direct=1 00:25:26.048 bs=4096 00:25:26.048 iodepth=1 00:25:26.048 norandommap=0 00:25:26.048 numjobs=1 00:25:26.048 00:25:26.048 verify_dump=1 00:25:26.048 verify_backlog=512 00:25:26.048 verify_state_save=0 00:25:26.048 do_verify=1 00:25:26.048 verify=crc32c-intel 00:25:26.048 [job0] 00:25:26.048 filename=/dev/nvme0n1 00:25:26.048 Could not set queue depth (nvme0n1) 00:25:26.305 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:25:26.305 fio-3.35 00:25:26.305 Starting 1 thread 00:25:29.583 08:12:20 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:25:29.583 08:12:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.584 08:12:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:29.584 true 00:25:29.584 08:12:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.584 08:12:20 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:25:29.584 08:12:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.584 08:12:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:29.584 true 00:25:29.584 08:12:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.584 08:12:20 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:25:29.584 08:12:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.584 08:12:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:29.584 true 00:25:29.584 08:12:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.584 08:12:20 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:25:29.584 08:12:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:29.584 08:12:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:29.584 true 00:25:29.584 08:12:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:29.584 08:12:20 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:25:32.108 08:12:23 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:25:32.108 08:12:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.108 08:12:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:32.108 true 00:25:32.108 08:12:23 
nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.108 08:12:23 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:25:32.108 08:12:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.108 08:12:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:32.108 true 00:25:32.108 08:12:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.108 08:12:23 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:25:32.108 08:12:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.108 08:12:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:32.108 true 00:25:32.108 08:12:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.108 08:12:23 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:25:32.108 08:12:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.108 08:12:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:32.108 true 00:25:32.108 08:12:23 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.108 08:12:23 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:25:32.108 08:12:23 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 2022217 00:26:28.339 00:26:28.339 job0: (groupid=0, jobs=1): err= 0: pid=2022406: Sat Jul 13 08:13:17 2024 00:26:28.339 read: IOPS=161, BW=645KiB/s (660kB/s)(37.8MiB/60032msec) 00:26:28.339 slat (usec): min=5, max=13855, avg=13.91, stdev=140.90 00:26:28.339 clat (usec): min=279, max=42166, avg=1661.95, stdev=7283.72 00:26:28.339 lat (usec): min=285, max=56022, avg=1675.86, stdev=7294.77 00:26:28.339 clat percentiles (usec): 00:26:28.339 | 1.00th=[ 289], 5.00th=[ 297], 10.00th=[ 302], 20.00th=[ 306], 00:26:28.339 | 30.00th=[ 310], 40.00th=[ 318], 50.00th=[ 322], 60.00th=[ 330], 00:26:28.339 | 70.00th=[ 343], 80.00th=[ 367], 90.00th=[ 494], 95.00th=[ 537], 00:26:28.339 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:26:28.339 | 99.99th=[42206] 00:26:28.339 write: IOPS=162, BW=648KiB/s (664kB/s)(38.0MiB/60032msec); 0 zone resets 00:26:28.339 slat (nsec): min=6519, max=79255, avg=16738.15, stdev=9773.59 00:26:28.339 clat (usec): min=198, max=41031k, avg=4479.05, stdev=416005.29 00:26:28.339 lat (usec): min=205, max=41031k, avg=4495.79, stdev=416005.20 00:26:28.339 clat percentiles (usec): 00:26:28.339 | 1.00th=[ 206], 5.00th=[ 212], 10.00th=[ 217], 00:26:28.339 | 20.00th=[ 221], 30.00th=[ 225], 40.00th=[ 231], 00:26:28.339 | 50.00th=[ 239], 60.00th=[ 260], 70.00th=[ 285], 00:26:28.339 | 80.00th=[ 302], 90.00th=[ 326], 95.00th=[ 363], 00:26:28.339 | 99.00th=[ 424], 99.50th=[ 445], 99.90th=[ 474], 00:26:28.339 | 99.95th=[ 529], 99.99th=[17112761] 00:26:28.339 bw ( KiB/s): min= 1864, max= 8192, per=100.00%, avg=5188.27, stdev=2047.27, samples=15 00:26:28.339 iops : min= 466, max= 2048, avg=1297.07, stdev=511.82, samples=15 00:26:28.339 lat (usec) : 250=28.48%, 500=66.65%, 750=3.22%, 1000=0.04% 00:26:28.339 lat (msec) : 2=0.02%, 4=0.01%, 50=1.58%, >=2000=0.01% 00:26:28.339 cpu : usr=0.34%, 
sys=0.59%, ctx=19407, majf=0, minf=2 00:26:28.339 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:28.339 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:28.339 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:28.339 issued rwts: total=9677,9728,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:28.339 latency : target=0, window=0, percentile=100.00%, depth=1 00:26:28.339 00:26:28.339 Run status group 0 (all jobs): 00:26:28.339 READ: bw=645KiB/s (660kB/s), 645KiB/s-645KiB/s (660kB/s-660kB/s), io=37.8MiB (39.6MB), run=60032-60032msec 00:26:28.339 WRITE: bw=648KiB/s (664kB/s), 648KiB/s-648KiB/s (664kB/s-664kB/s), io=38.0MiB (39.8MB), run=60032-60032msec 00:26:28.339 00:26:28.339 Disk stats (read/write): 00:26:28.339 nvme0n1: ios=9772/9728, merge=0/0, ticks=17071/2404, in_queue=19475, util=99.78% 00:26:28.339 08:13:17 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:26:28.339 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:28.339 08:13:18 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:26:28.339 08:13:18 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1219 -- # local i=0 00:26:28.339 08:13:18 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:28.339 08:13:18 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:28.339 08:13:18 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:28.339 08:13:18 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:28.339 08:13:18 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # return 0 00:26:28.339 08:13:18 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:26:28.339 08:13:18 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:26:28.339 nvmf hotplug test: fio successful as expected 00:26:28.339 08:13:18 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:28.339 08:13:18 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.339 08:13:18 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:28.339 08:13:18 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.340 08:13:18 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:26:28.340 08:13:18 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:26:28.340 08:13:18 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:26:28.340 08:13:18 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:28.340 08:13:18 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # sync 00:26:28.340 08:13:18 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:28.340 08:13:18 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@120 -- # set +e 00:26:28.340 08:13:18 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:28.340 08:13:18 
nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:28.340 rmmod nvme_tcp 00:26:28.340 rmmod nvme_fabrics 00:26:28.340 rmmod nvme_keyring 00:26:28.340 08:13:18 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:28.340 08:13:18 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set -e 00:26:28.340 08:13:18 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # return 0 00:26:28.340 08:13:18 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@489 -- # '[' -n 2021910 ']' 00:26:28.340 08:13:18 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@490 -- # killprocess 2021910 00:26:28.340 08:13:18 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@948 -- # '[' -z 2021910 ']' 00:26:28.340 08:13:18 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@952 -- # kill -0 2021910 00:26:28.340 08:13:18 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@953 -- # uname 00:26:28.340 08:13:18 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:28.340 08:13:18 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2021910 00:26:28.340 08:13:18 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:28.340 08:13:18 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:28.340 08:13:18 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2021910' 00:26:28.340 killing process with pid 2021910 00:26:28.340 08:13:18 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@967 -- # kill 2021910 00:26:28.340 08:13:18 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@972 -- # wait 2021910 00:26:28.340 08:13:18 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:28.340 08:13:18 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:28.340 08:13:18 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:28.340 08:13:18 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:28.340 08:13:18 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:28.340 08:13:18 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:28.340 08:13:18 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:28.340 08:13:18 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:28.906 08:13:20 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:28.906 00:26:28.906 real 1m7.935s 00:26:28.906 user 4m10.570s 00:26:28.906 sys 0m6.787s 00:26:28.906 08:13:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:28.906 08:13:20 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:28.906 ************************************ 00:26:28.906 END TEST nvmf_initiator_timeout 00:26:28.906 ************************************ 00:26:28.906 08:13:20 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:26:28.906 08:13:20 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:26:28.906 08:13:20 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:26:28.906 08:13:20 nvmf_tcp -- nvmf/nvmf.sh@73 -- # 
gather_supported_nvmf_pci_devs 00:26:28.906 08:13:20 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:26:28.906 08:13:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:30.808 08:13:22 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:30.808 08:13:22 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:26:30.808 08:13:22 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:30.808 08:13:22 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:30.808 08:13:22 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:30.808 08:13:22 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:30.808 08:13:22 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:30.808 08:13:22 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:26:30.808 08:13:22 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:30.808 08:13:22 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:26:30.808 08:13:22 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:26:30.808 08:13:22 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:26:30.808 08:13:22 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:26:30.808 08:13:22 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:26:30.808 08:13:22 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:26:30.808 08:13:22 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:30.808 08:13:22 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:30.808 08:13:22 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:30.808 08:13:22 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:30.808 08:13:22 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:30.809 08:13:22 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:30.809 08:13:22 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:30.809 08:13:22 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:30.809 08:13:22 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:30.809 08:13:22 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:30.809 08:13:22 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:30.809 08:13:22 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:30.809 08:13:22 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:30.809 08:13:22 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:30.809 08:13:22 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:30.809 08:13:22 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:30.809 08:13:22 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:30.809 08:13:22 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:30.809 08:13:22 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:30.809 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:30.809 08:13:22 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:30.809 08:13:22 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:30.809 08:13:22 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:30.809 08:13:22 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:30.809 08:13:22 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:30.809 08:13:22 
nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:30.809 08:13:22 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:30.809 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:30.809 08:13:22 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:30.809 08:13:22 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:30.809 08:13:22 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:30.809 08:13:22 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:30.809 08:13:22 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:30.809 08:13:22 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:30.809 08:13:22 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:30.809 08:13:22 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:30.809 08:13:22 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:30.809 08:13:22 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:30.809 08:13:22 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:30.809 08:13:22 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:30.809 08:13:22 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:30.809 08:13:22 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:30.809 08:13:22 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:30.809 08:13:22 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:30.809 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:30.809 08:13:22 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:30.809 08:13:22 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:30.809 08:13:22 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:30.809 08:13:22 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:30.809 08:13:22 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:30.809 08:13:22 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:30.809 08:13:22 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:30.809 08:13:22 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:30.809 08:13:22 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:30.809 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:30.809 08:13:22 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:30.809 08:13:22 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:30.809 08:13:22 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:30.809 08:13:22 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:26:30.809 08:13:22 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:26:30.809 08:13:22 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:30.809 08:13:22 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:30.809 08:13:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:30.809 ************************************ 00:26:30.809 START TEST nvmf_perf_adq 00:26:30.809 ************************************ 00:26:30.809 08:13:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 
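
[editor's note] The PCI discovery block that recurs throughout this log (nvmf/common.sh@285-404) classifies functions by vendor/device ID and then resolves each supported function to its kernel netdev through sysfs. A standalone sketch of that resolution step, using one of the addresses found above (the pci_bus_cache lookup that feeds it is omitted here):

    pci=0000:0a:00.0
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/cvl_0_0
    pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the sysfs path, keep the ifname
    echo "Found net devices under $pci: ${pci_net_devs[*]}"

Both 0x159b functions match the e810 list, which is why the perf_adq test is allowed to proceed: ADQ is an E810 feature driven by the ice driver, hence the rmmod/modprobe ice reload seen a few entries below.
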
00:26:30.809 * Looking for test storage... 00:26:30.809 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:30.809 08:13:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:30.809 08:13:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:26:30.809 08:13:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:30.809 08:13:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:30.809 08:13:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:30.809 08:13:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:30.809 08:13:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:30.809 08:13:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:30.809 08:13:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:30.809 08:13:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:30.809 08:13:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:30.809 08:13:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:30.809 08:13:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:30.809 08:13:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:30.809 08:13:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:30.809 08:13:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:30.809 08:13:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:30.809 08:13:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:30.809 08:13:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:30.809 08:13:22 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:30.809 08:13:22 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:30.809 08:13:22 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:30.809 08:13:22 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:30.809 08:13:22 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:30.809 08:13:22 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:30.809 08:13:22 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:26:30.809 08:13:22 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:30.809 08:13:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:26:30.809 08:13:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:30.809 08:13:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:30.809 08:13:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:30.809 08:13:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:30.809 08:13:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:30.809 08:13:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:30.809 08:13:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:30.809 08:13:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:30.809 08:13:22 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:26:30.809 08:13:22 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:26:30.809 08:13:22 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:33.338 08:13:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:33.338 08:13:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:26:33.338 08:13:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:33.338 08:13:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:33.338 08:13:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:33.338 08:13:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:33.339 08:13:24 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:26:33.339 08:13:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:26:33.339 08:13:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:33.339 08:13:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:26:33.339 08:13:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:26:33.339 08:13:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:26:33.339 08:13:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:26:33.339 08:13:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:26:33.339 08:13:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:26:33.339 08:13:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:33.339 08:13:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:33.339 08:13:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:33.339 08:13:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:33.339 08:13:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:33.339 08:13:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:33.339 08:13:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:33.339 08:13:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:33.339 08:13:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:33.339 08:13:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:33.339 08:13:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:33.339 08:13:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:33.339 08:13:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:33.339 08:13:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:33.339 08:13:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:33.339 08:13:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:33.339 08:13:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:33.339 08:13:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:33.339 08:13:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:33.339 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:33.339 08:13:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:33.339 08:13:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:33.339 08:13:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:33.339 08:13:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:33.339 08:13:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:33.339 08:13:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:33.339 08:13:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:33.339 Found 0000:0a:00.1 (0x8086 - 0x159b) 
00:26:33.339 08:13:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:33.339 08:13:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:33.339 08:13:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:33.339 08:13:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:33.339 08:13:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:33.339 08:13:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:33.339 08:13:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:33.339 08:13:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:33.339 08:13:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:33.339 08:13:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:33.339 08:13:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:33.339 08:13:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:33.339 08:13:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:33.339 08:13:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:33.339 08:13:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:33.339 08:13:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:33.339 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:33.339 08:13:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:33.339 08:13:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:33.339 08:13:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:33.339 08:13:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:33.339 08:13:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:33.339 08:13:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:33.339 08:13:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:33.339 08:13:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:33.339 08:13:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:33.339 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:33.339 08:13:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:33.339 08:13:24 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:33.339 08:13:24 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:33.339 08:13:24 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:26:33.339 08:13:24 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:26:33.339 08:13:24 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:26:33.339 08:13:24 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:26:33.597 08:13:25 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:26:35.498 08:13:27 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:26:40.767 08:13:32 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:26:40.767 08:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:40.767 08:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:40.767 08:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:40.767 08:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:40.767 08:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:40.767 08:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:40.767 08:13:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:40.767 08:13:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:40.767 08:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:40.767 08:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:40.767 08:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:26:40.767 08:13:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:40.767 08:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:40.767 08:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:26:40.767 08:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:40.767 08:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:40.767 08:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:40.767 08:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:40.767 08:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:40.767 08:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:26:40.767 08:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:40.767 08:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:26:40.767 08:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:26:40.767 08:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:26:40.767 08:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:26:40.767 08:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:26:40.767 08:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:26:40.767 08:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:40.767 08:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:40.767 08:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:40.767 08:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:40.767 08:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:40.767 08:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:40.767 08:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:40.767 08:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:40.767 08:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:40.767 08:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:40.767 08:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:40.767 08:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:40.767 08:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:40.767 08:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:40.767 08:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:40.767 08:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:40.767 08:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:40.767 08:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:40.767 08:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:40.767 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:40.767 08:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:40.767 08:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:40.767 08:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:40.767 08:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:40.767 08:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:40.767 08:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:40.767 08:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:40.767 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:40.767 08:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:40.767 08:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:40.767 08:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:40.767 08:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:40.767 08:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:40.767 08:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:40.767 08:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:40.767 08:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:40.767 08:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:40.767 08:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:40.767 08:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:40.768 08:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:40.768 08:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:40.768 08:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:40.768 08:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:40.768 08:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:40.768 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:40.768 08:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:26:40.768 08:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:40.768 08:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:40.768 08:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:40.768 08:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:40.768 08:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:40.768 08:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:40.768 08:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:40.768 08:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:40.768 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:40.768 08:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:40.768 08:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:40.768 08:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:26:40.768 08:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:40.768 08:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:40.768 08:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:40.768 08:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:40.768 08:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:40.768 08:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:40.768 08:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:40.768 08:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:40.768 08:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:40.768 08:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:40.768 08:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:40.768 08:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:40.768 08:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:40.768 08:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:40.768 08:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:40.768 08:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:40.768 08:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:40.768 08:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:40.768 08:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:40.768 08:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:40.768 08:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:40.768 08:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:40.768 08:13:32 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:40.768 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:40.768 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.145 ms 00:26:40.768 00:26:40.768 --- 10.0.0.2 ping statistics --- 00:26:40.768 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:40.768 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:26:40.768 08:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:40.768 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:40.768 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.186 ms 00:26:40.768 00:26:40.768 --- 10.0.0.1 ping statistics --- 00:26:40.768 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:40.768 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:26:40.768 08:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:40.768 08:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:26:40.768 08:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:40.768 08:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:40.768 08:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:40.768 08:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:40.768 08:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:40.768 08:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:40.768 08:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:40.768 08:13:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:26:40.768 08:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:40.768 08:13:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:40.768 08:13:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:40.768 08:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=2033922 00:26:40.768 08:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:26:40.768 08:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 2033922 00:26:40.768 08:13:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 2033922 ']' 00:26:40.768 08:13:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:40.768 08:13:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:40.768 08:13:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:40.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:40.768 08:13:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:40.768 08:13:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:40.768 [2024-07-13 08:13:32.324859] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
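(Editor's note on the nvmf_tcp_init trace above: the harness moves one port of the E810 pair, cvl_0_0, into a private network namespace for the target while the peer port cvl_0_1 stays in the root namespace as the initiator, so NVMe/TCP traffic genuinely crosses the link. A minimal standalone sketch of that topology follows; the namespace name tgt_ns is hypothetical, the cvl_0_* names and 10.0.0.0/24 addressing are the values from this run, and the harness additionally flushes both interfaces' IPv4 addresses first.)

    # target port lives in its own namespace; initiator stays in the root ns
    ip netns add tgt_ns                                # hypothetical name; harness uses cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns tgt_ns
    ip netns exec tgt_ns ip addr add 10.0.0.2/24 dev cvl_0_0
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip link set cvl_0_1 up
    ip netns exec tgt_ns ip link set cvl_0_0 up
    ip netns exec tgt_ns ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP from the target side
    ping -c 1 10.0.0.2                                 # initiator -> target reachability check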
00:26:40.768 [2024-07-13 08:13:32.324943] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:40.768 EAL: No free 2048 kB hugepages reported on node 1 00:26:40.768 [2024-07-13 08:13:32.392945] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:40.768 [2024-07-13 08:13:32.484421] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:40.768 [2024-07-13 08:13:32.484483] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:40.768 [2024-07-13 08:13:32.484518] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:40.768 [2024-07-13 08:13:32.484533] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:40.768 [2024-07-13 08:13:32.484545] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:40.768 [2024-07-13 08:13:32.484628] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:40.768 [2024-07-13 08:13:32.484696] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:40.768 [2024-07-13 08:13:32.484790] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:40.768 [2024-07-13 08:13:32.484792] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:41.025 08:13:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:41.025 08:13:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:26:41.025 08:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:41.025 08:13:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:41.025 08:13:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:41.025 08:13:32 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:41.025 08:13:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:26:41.025 08:13:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:26:41.025 08:13:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:26:41.025 08:13:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:41.025 08:13:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:41.025 08:13:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:41.025 08:13:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:26:41.025 08:13:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:26:41.025 08:13:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:41.025 08:13:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:41.025 08:13:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:41.025 08:13:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:26:41.025 08:13:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:41.025 08:13:32 nvmf_tcp.nvmf_perf_adq -- 
common/autotest_common.sh@10 -- # set +x 00:26:41.025 08:13:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:41.025 08:13:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:26:41.025 08:13:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:41.025 08:13:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:41.025 [2024-07-13 08:13:32.722927] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:41.025 08:13:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:41.025 08:13:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:26:41.025 08:13:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:41.025 08:13:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:41.025 Malloc1 00:26:41.025 08:13:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:41.025 08:13:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:41.025 08:13:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:41.025 08:13:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:41.283 08:13:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:41.283 08:13:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:41.283 08:13:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:41.283 08:13:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:41.283 08:13:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:41.283 08:13:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:41.283 08:13:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:41.283 08:13:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:41.283 [2024-07-13 08:13:32.776466] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:41.283 08:13:32 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:41.283 08:13:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=2033953 00:26:41.283 08:13:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:26:41.283 08:13:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:26:41.283 EAL: No free 2048 kB hugepages reported on node 1 00:26:43.182 08:13:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:26:43.182 08:13:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:43.182 08:13:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:43.182 08:13:34 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:43.182 08:13:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:26:43.182 
"tick_rate": 2700000000, 00:26:43.182 "poll_groups": [ 00:26:43.182 { 00:26:43.182 "name": "nvmf_tgt_poll_group_000", 00:26:43.182 "admin_qpairs": 1, 00:26:43.182 "io_qpairs": 1, 00:26:43.182 "current_admin_qpairs": 1, 00:26:43.182 "current_io_qpairs": 1, 00:26:43.182 "pending_bdev_io": 0, 00:26:43.182 "completed_nvme_io": 18002, 00:26:43.182 "transports": [ 00:26:43.182 { 00:26:43.182 "trtype": "TCP" 00:26:43.182 } 00:26:43.182 ] 00:26:43.182 }, 00:26:43.182 { 00:26:43.182 "name": "nvmf_tgt_poll_group_001", 00:26:43.182 "admin_qpairs": 0, 00:26:43.182 "io_qpairs": 1, 00:26:43.182 "current_admin_qpairs": 0, 00:26:43.182 "current_io_qpairs": 1, 00:26:43.182 "pending_bdev_io": 0, 00:26:43.182 "completed_nvme_io": 19393, 00:26:43.182 "transports": [ 00:26:43.182 { 00:26:43.182 "trtype": "TCP" 00:26:43.182 } 00:26:43.182 ] 00:26:43.182 }, 00:26:43.182 { 00:26:43.182 "name": "nvmf_tgt_poll_group_002", 00:26:43.182 "admin_qpairs": 0, 00:26:43.182 "io_qpairs": 1, 00:26:43.182 "current_admin_qpairs": 0, 00:26:43.182 "current_io_qpairs": 1, 00:26:43.182 "pending_bdev_io": 0, 00:26:43.182 "completed_nvme_io": 19666, 00:26:43.182 "transports": [ 00:26:43.182 { 00:26:43.182 "trtype": "TCP" 00:26:43.182 } 00:26:43.182 ] 00:26:43.182 }, 00:26:43.182 { 00:26:43.182 "name": "nvmf_tgt_poll_group_003", 00:26:43.182 "admin_qpairs": 0, 00:26:43.182 "io_qpairs": 1, 00:26:43.182 "current_admin_qpairs": 0, 00:26:43.182 "current_io_qpairs": 1, 00:26:43.182 "pending_bdev_io": 0, 00:26:43.182 "completed_nvme_io": 18373, 00:26:43.182 "transports": [ 00:26:43.182 { 00:26:43.182 "trtype": "TCP" 00:26:43.182 } 00:26:43.182 ] 00:26:43.182 } 00:26:43.182 ] 00:26:43.182 }' 00:26:43.182 08:13:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:26:43.182 08:13:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:26:43.182 08:13:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:26:43.182 08:13:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:26:43.182 08:13:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 2033953 00:26:51.320 Initializing NVMe Controllers 00:26:51.320 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:51.320 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:26:51.320 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:26:51.320 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:26:51.320 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:26:51.320 Initialization complete. Launching workers. 
00:26:51.320 ======================================================== 00:26:51.320 Latency(us) 00:26:51.320 Device Information : IOPS MiB/s Average min max 00:26:51.320 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10302.00 40.24 6214.30 2644.74 8396.49 00:26:51.320 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10864.20 42.44 5890.77 1760.89 7854.47 00:26:51.320 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 11001.60 42.97 5818.08 4888.62 7369.72 00:26:51.320 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10107.70 39.48 6333.09 2900.40 8596.77 00:26:51.320 ======================================================== 00:26:51.320 Total : 42275.50 165.14 6056.45 1760.89 8596.77 00:26:51.320 00:26:51.582 08:13:43 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:26:51.582 08:13:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:51.582 08:13:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:26:51.582 08:13:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:51.582 08:13:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:26:51.582 08:13:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:51.582 08:13:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:51.582 rmmod nvme_tcp 00:26:51.582 rmmod nvme_fabrics 00:26:51.582 rmmod nvme_keyring 00:26:51.582 08:13:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:51.582 08:13:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:26:51.582 08:13:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:26:51.582 08:13:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 2033922 ']' 00:26:51.582 08:13:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 2033922 00:26:51.582 08:13:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 2033922 ']' 00:26:51.582 08:13:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 2033922 00:26:51.582 08:13:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:26:51.582 08:13:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:51.582 08:13:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2033922 00:26:51.582 08:13:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:51.582 08:13:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:51.582 08:13:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2033922' 00:26:51.582 killing process with pid 2033922 00:26:51.582 08:13:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 2033922 00:26:51.582 08:13:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 2033922 00:26:51.840 08:13:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:51.840 08:13:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:51.840 08:13:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:51.840 08:13:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:51.840 08:13:43 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:51.840 08:13:43 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:51.840 08:13:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:51.840 08:13:43 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:53.743 08:13:45 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:53.743 08:13:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:26:53.743 08:13:45 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:26:54.680 08:13:46 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:26:56.588 08:13:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:27:01.866 08:13:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:27:01.866 08:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:01.866 08:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:01.866 08:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:01.866 08:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:01.866 08:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:01.866 08:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:01.866 08:13:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:01.866 08:13:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:01.866 08:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:01.866 08:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:01.866 08:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:27:01.866 08:13:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:01.866 08:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:01.866 08:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:27:01.866 08:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:01.866 08:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:01.866 08:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:01.866 08:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:01.866 08:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:01.866 08:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:27:01.866 08:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:01.866 08:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:27:01.866 08:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:27:01.866 08:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:27:01.866 08:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:27:01.866 08:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:27:01.866 08:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:27:01.866 08:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:01.866 08:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:01.866 08:13:53 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:01.866 08:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:01.866 08:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:01.866 08:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:01.866 08:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:01.866 08:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:01.866 08:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:01.866 08:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:01.866 08:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:01.866 08:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:01.866 08:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:01.866 08:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:01.866 08:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:01.866 08:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:01.866 08:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:01.866 08:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:01.866 08:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:01.866 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:01.866 08:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:01.866 08:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:01.866 08:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:01.866 08:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:01.866 08:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:01.866 08:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:01.866 08:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:01.866 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:01.866 08:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:01.866 08:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:01.866 08:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:01.866 08:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:01.866 08:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:01.866 08:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:01.866 08:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:01.866 08:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:01.866 08:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:01.866 08:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:27:01.866 08:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:01.866 08:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:01.866 08:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:01.866 08:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:01.866 08:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:01.866 08:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:01.866 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:01.866 08:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:01.866 08:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:01.866 08:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:01.866 08:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:01.866 08:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:01.866 08:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:01.866 08:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:01.866 08:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:01.866 08:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:01.866 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:01.866 08:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:01.866 08:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:01.866 08:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:27:01.866 08:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:01.866 08:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:01.866 08:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:01.866 08:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:01.866 08:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:01.866 08:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:01.866 08:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:01.866 08:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:01.866 08:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:01.866 08:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:01.866 08:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:01.866 08:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:01.866 08:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:01.866 08:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:01.866 08:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:01.866 08:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:01.866 
08:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:01.866 08:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:01.866 08:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:01.866 08:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:01.867 08:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:01.867 08:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:01.867 08:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:01.867 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:01.867 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.223 ms 00:27:01.867 00:27:01.867 --- 10.0.0.2 ping statistics --- 00:27:01.867 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:01.867 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:27:01.867 08:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:01.867 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:01.867 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms 00:27:01.867 00:27:01.867 --- 10.0.0.1 ping statistics --- 00:27:01.867 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:01.867 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:27:01.867 08:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:01.867 08:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:27:01.867 08:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:01.867 08:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:01.867 08:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:01.867 08:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:01.867 08:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:01.867 08:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:01.867 08:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:01.867 08:13:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:27:01.867 08:13:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:27:01.867 08:13:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:27:01.867 08:13:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:27:01.867 net.core.busy_poll = 1 00:27:01.867 08:13:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:27:01.867 net.core.busy_read = 1 00:27:01.867 08:13:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:27:01.867 08:13:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:27:01.867 08:13:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc 
add dev cvl_0_0 ingress 00:27:01.867 08:13:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:27:01.867 08:13:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:27:01.867 08:13:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:27:01.867 08:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:01.867 08:13:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:01.867 08:13:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:01.867 08:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=2036573 00:27:01.867 08:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:27:01.867 08:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 2036573 00:27:01.867 08:13:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 2036573 ']' 00:27:01.867 08:13:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:01.867 08:13:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:01.867 08:13:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:01.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:01.867 08:13:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:01.867 08:13:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:01.867 [2024-07-13 08:13:53.411523] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:27:01.867 [2024-07-13 08:13:53.411615] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:01.867 EAL: No free 2048 kB hugepages reported on node 1 00:27:01.867 [2024-07-13 08:13:53.475528] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:01.867 [2024-07-13 08:13:53.560172] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:01.867 [2024-07-13 08:13:53.560226] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:01.867 [2024-07-13 08:13:53.560255] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:01.867 [2024-07-13 08:13:53.560266] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:01.867 [2024-07-13 08:13:53.560275] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
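(Editor's note on the adq_configure_driver trace above, the core of this test: enable hardware TC offload on the ice port, turn on socket busy polling, split the queues into two traffic classes with mqprio, and pin NVMe/TCP on port 4420 into the second class via a hardware-offloaded flower filter. Condensed into one annotated sequence; device name, queue layout, and destination IP are the values from this run, and in the harness the ethtool and tc commands run inside the target namespace:)

    dev=cvl_0_0
    ethtool --offload $dev hw-tc-offload on                 # let the NIC execute the tc filters
    ethtool --set-priv-flags $dev channel-pkt-inspect-optimize off
    sysctl -w net.core.busy_poll=1 net.core.busy_read=1     # poll sockets instead of waiting on IRQs
    # two traffic classes: queues 0-1 -> TC0 (default), queues 2-3 -> TC1 (ADQ)
    tc qdisc add dev $dev root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
    tc qdisc add dev $dev ingress
    # steer NVMe/TCP at the target IP into TC1, offloaded to hardware (skip_sw)
    tc filter add dev $dev protocol ip parent ffff: prio 1 flower \
        dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1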
00:27:01.867 [2024-07-13 08:13:53.560356] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:01.867 [2024-07-13 08:13:53.560423] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:01.867 [2024-07-13 08:13:53.560487] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:01.867 [2024-07-13 08:13:53.560490] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:02.126 08:13:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:02.126 08:13:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:27:02.126 08:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:02.126 08:13:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:02.126 08:13:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:02.126 08:13:53 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:02.126 08:13:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:27:02.126 08:13:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:27:02.126 08:13:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:02.126 08:13:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:27:02.126 08:13:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:02.126 08:13:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:02.126 08:13:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:27:02.126 08:13:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:27:02.126 08:13:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:02.126 08:13:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:02.126 08:13:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:02.126 08:13:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:27:02.126 08:13:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:02.126 08:13:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:02.126 08:13:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:02.126 08:13:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:27:02.126 08:13:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:02.126 08:13:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:02.126 [2024-07-13 08:13:53.798793] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:02.126 08:13:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:02.126 08:13:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:02.126 08:13:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:02.126 08:13:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:02.126 Malloc1 00:27:02.126 08:13:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:02.126 08:13:53 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:02.126 08:13:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:02.126 08:13:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:02.126 08:13:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:02.126 08:13:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:02.126 08:13:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:02.126 08:13:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:02.126 08:13:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:02.126 08:13:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:02.126 08:13:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:02.126 08:13:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:02.126 [2024-07-13 08:13:53.851132] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:02.126 08:13:53 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:02.126 08:13:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=2036616 00:27:02.126 08:13:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:27:02.126 08:13:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:02.385 EAL: No free 2048 kB hugepages reported on node 1 00:27:04.289 08:13:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:27:04.289 08:13:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:04.289 08:13:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:04.289 08:13:55 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:04.289 08:13:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:27:04.289 "tick_rate": 2700000000, 00:27:04.289 "poll_groups": [ 00:27:04.289 { 00:27:04.289 "name": "nvmf_tgt_poll_group_000", 00:27:04.289 "admin_qpairs": 1, 00:27:04.289 "io_qpairs": 1, 00:27:04.289 "current_admin_qpairs": 1, 00:27:04.289 "current_io_qpairs": 1, 00:27:04.289 "pending_bdev_io": 0, 00:27:04.289 "completed_nvme_io": 24455, 00:27:04.289 "transports": [ 00:27:04.289 { 00:27:04.289 "trtype": "TCP" 00:27:04.289 } 00:27:04.289 ] 00:27:04.289 }, 00:27:04.289 { 00:27:04.289 "name": "nvmf_tgt_poll_group_001", 00:27:04.289 "admin_qpairs": 0, 00:27:04.289 "io_qpairs": 3, 00:27:04.289 "current_admin_qpairs": 0, 00:27:04.289 "current_io_qpairs": 3, 00:27:04.289 "pending_bdev_io": 0, 00:27:04.289 "completed_nvme_io": 25675, 00:27:04.289 "transports": [ 00:27:04.289 { 00:27:04.289 "trtype": "TCP" 00:27:04.289 } 00:27:04.289 ] 00:27:04.289 }, 00:27:04.289 { 00:27:04.289 "name": "nvmf_tgt_poll_group_002", 00:27:04.289 "admin_qpairs": 0, 00:27:04.289 "io_qpairs": 0, 00:27:04.289 "current_admin_qpairs": 0, 00:27:04.289 "current_io_qpairs": 0, 00:27:04.289 "pending_bdev_io": 0, 00:27:04.289 "completed_nvme_io": 0, 
00:27:04.289 "transports": [ 00:27:04.289 { 00:27:04.289 "trtype": "TCP" 00:27:04.289 } 00:27:04.289 ] 00:27:04.289 }, 00:27:04.289 { 00:27:04.289 "name": "nvmf_tgt_poll_group_003", 00:27:04.289 "admin_qpairs": 0, 00:27:04.289 "io_qpairs": 0, 00:27:04.289 "current_admin_qpairs": 0, 00:27:04.289 "current_io_qpairs": 0, 00:27:04.289 "pending_bdev_io": 0, 00:27:04.289 "completed_nvme_io": 0, 00:27:04.289 "transports": [ 00:27:04.289 { 00:27:04.289 "trtype": "TCP" 00:27:04.289 } 00:27:04.289 ] 00:27:04.289 } 00:27:04.289 ] 00:27:04.289 }' 00:27:04.289 08:13:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:27:04.289 08:13:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:27:04.289 08:13:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:27:04.289 08:13:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:27:04.289 08:13:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 2036616 00:27:12.416 Initializing NVMe Controllers 00:27:12.416 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:12.416 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:27:12.416 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:27:12.416 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:27:12.416 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:27:12.416 Initialization complete. Launching workers. 00:27:12.416 ======================================================== 00:27:12.416 Latency(us) 00:27:12.416 Device Information : IOPS MiB/s Average min max 00:27:12.416 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 4687.80 18.31 13695.32 2181.68 60313.40 00:27:12.416 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 13449.90 52.54 4758.39 1543.38 8252.08 00:27:12.416 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 4830.30 18.87 13249.73 2178.16 62035.70 00:27:12.416 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 4654.20 18.18 13752.56 2374.18 61157.94 00:27:12.416 ======================================================== 00:27:12.416 Total : 27622.19 107.90 9275.44 1543.38 62035.70 00:27:12.416 00:27:12.416 08:14:04 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:27:12.416 08:14:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:12.416 08:14:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:27:12.416 08:14:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:12.416 08:14:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:27:12.416 08:14:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:12.416 08:14:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:12.416 rmmod nvme_tcp 00:27:12.416 rmmod nvme_fabrics 00:27:12.416 rmmod nvme_keyring 00:27:12.676 08:14:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:12.676 08:14:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:27:12.676 08:14:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:27:12.676 08:14:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 2036573 ']' 00:27:12.676 08:14:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # 
killprocess 2036573 00:27:12.676 08:14:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 2036573 ']' 00:27:12.676 08:14:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 2036573 00:27:12.676 08:14:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:27:12.676 08:14:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:12.676 08:14:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2036573 00:27:12.676 08:14:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:12.676 08:14:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:12.676 08:14:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2036573' 00:27:12.676 killing process with pid 2036573 00:27:12.676 08:14:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 2036573 00:27:12.676 08:14:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 2036573 00:27:12.935 08:14:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:12.935 08:14:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:12.935 08:14:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:12.935 08:14:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:12.935 08:14:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:12.935 08:14:04 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:12.935 08:14:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:12.935 08:14:04 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:16.269 08:14:07 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:16.269 08:14:07 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:27:16.269 00:27:16.269 real 0m45.047s 00:27:16.269 user 2m40.310s 00:27:16.269 sys 0m9.640s 00:27:16.269 08:14:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:16.269 08:14:07 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:16.269 ************************************ 00:27:16.269 END TEST nvmf_perf_adq 00:27:16.269 ************************************ 00:27:16.269 08:14:07 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:27:16.269 08:14:07 nvmf_tcp -- nvmf/nvmf.sh@83 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:27:16.269 08:14:07 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:16.269 08:14:07 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:16.269 08:14:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:16.269 ************************************ 00:27:16.269 START TEST nvmf_shutdown 00:27:16.269 ************************************ 00:27:16.269 08:14:07 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:27:16.269 * Looking for test storage... 
00:27:16.269 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:16.269 08:14:07 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:16.269 08:14:07 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:27:16.269 08:14:07 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:16.269 08:14:07 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:16.269 08:14:07 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:16.269 08:14:07 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:16.269 08:14:07 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:16.269 08:14:07 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:16.269 08:14:07 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:16.269 08:14:07 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:16.269 08:14:07 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:16.269 08:14:07 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:16.269 08:14:07 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:16.269 08:14:07 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:16.269 08:14:07 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:16.269 08:14:07 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:16.269 08:14:07 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:16.269 08:14:07 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:16.269 08:14:07 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:16.269 08:14:07 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:16.269 08:14:07 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:16.269 08:14:07 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:16.269 08:14:07 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:16.269 08:14:07 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:16.269 08:14:07 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:16.269 08:14:07 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:27:16.269 08:14:07 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:16.269 08:14:07 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:27:16.269 08:14:07 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:16.269 08:14:07 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:16.269 08:14:07 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:16.269 08:14:07 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:16.269 08:14:07 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:16.270 08:14:07 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:16.270 08:14:07 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:16.270 08:14:07 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:16.270 08:14:07 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:16.270 08:14:07 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:16.270 08:14:07 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:27:16.270 08:14:07 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:16.270 08:14:07 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:16.270 08:14:07 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:16.270 ************************************ 00:27:16.270 START TEST nvmf_shutdown_tc1 00:27:16.270 ************************************ 00:27:16.270 08:14:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc1 00:27:16.270 08:14:07 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:27:16.270 08:14:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:27:16.270 08:14:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:16.270 08:14:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:16.270 08:14:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:16.270 08:14:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:16.270 08:14:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:16.270 08:14:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:16.270 08:14:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:16.270 08:14:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:16.270 08:14:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:16.270 08:14:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:16.270 08:14:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:27:16.270 08:14:07 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:18.183 08:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:18.183 08:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:27:18.183 08:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:18.183 08:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:18.183 08:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:18.183 08:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:18.183 08:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:18.183 08:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:27:18.183 08:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:18.183 08:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:27:18.183 08:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:27:18.183 08:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:27:18.183 08:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:27:18.183 08:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:27:18.183 08:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:27:18.183 08:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:18.183 08:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:18.183 08:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:18.183 08:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:18.183 08:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:18.183 08:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:18.183 08:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:18.183 08:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:18.183 08:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:18.183 08:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:18.183 08:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:18.183 08:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:18.183 08:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:18.183 08:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:18.183 08:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:18.183 08:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:18.183 08:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:18.183 08:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:18.183 08:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:18.183 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:18.183 08:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:18.183 08:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:18.183 08:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:18.183 08:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:18.183 08:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:18.183 08:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:18.183 08:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:18.183 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:18.183 08:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:18.183 08:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:18.183 08:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:18.183 08:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:18.183 08:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:18.183 08:14:09 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:18.183 08:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:18.183 08:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:18.184 08:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:18.184 08:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:18.184 08:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:18.184 08:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:18.184 08:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:18.184 08:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:18.184 08:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:18.184 08:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:18.184 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:18.184 08:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:18.184 08:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:18.184 08:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:18.184 08:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:18.184 08:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:18.184 08:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:18.184 08:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:18.184 08:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:18.184 08:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:18.184 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:18.184 08:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:18.184 08:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:18.184 08:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:27:18.184 08:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:18.184 08:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:18.184 08:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:18.184 08:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:18.184 08:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:18.184 08:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:18.184 08:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:18.184 08:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:18.184 08:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:18.184 08:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:18.184 08:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:18.184 08:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:18.184 08:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:18.184 08:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:18.184 08:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:18.184 08:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:18.184 08:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:18.184 08:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:18.184 08:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:18.184 08:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:18.184 08:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:18.184 08:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:18.184 08:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:18.184 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:18.184 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.179 ms 00:27:18.184 00:27:18.184 --- 10.0.0.2 ping statistics --- 00:27:18.184 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:18.184 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:27:18.184 08:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:18.184 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:18.184 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.127 ms 00:27:18.184 00:27:18.184 --- 10.0.0.1 ping statistics --- 00:27:18.184 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:18.184 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:27:18.184 08:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:18.184 08:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:27:18.184 08:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:18.184 08:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:18.184 08:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:18.184 08:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:18.184 08:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:18.184 08:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:18.184 08:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:18.184 08:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:27:18.184 08:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:18.184 08:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:18.184 08:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:18.184 08:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=2039885 00:27:18.184 08:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 2039885 00:27:18.184 08:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:27:18.184 08:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 2039885 ']' 00:27:18.184 08:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:18.184 08:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:18.184 08:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:18.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:18.184 08:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:18.184 08:14:09 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:18.184 [2024-07-13 08:14:09.728279] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
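The trace above is nvmf_tcp_init at work: because NET_TYPE=phy, the test loops one physical NIC back on itself by moving the target-side port into a private network namespace, leaving the initiator-side port in the root namespace, and ping-checking both directions before the target starts. Condensed from the trace into a standalone sketch (run as root; cvl_0_0/cvl_0_1 are the port names discovered earlier in this log):

# Sketch only: the loopback-over-real-NIC topology built by nvmf_tcp_init.
NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"        # target port now lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1    # initiator side, root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # NVMe/TCP port
ping -c 1 10.0.0.2                     # root namespace reaches the target port
ip netns exec "$NS" ping -c 1 10.0.0.1

Every nvmf_tgt invocation later in the log is therefore wrapped in "ip netns exec cvl_0_0_ns_spdk", so target and initiator traffic really crosses the wire between the two ports.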
00:27:18.184 [2024-07-13 08:14:09.728365] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:18.184 EAL: No free 2048 kB hugepages reported on node 1 00:27:18.184 [2024-07-13 08:14:09.798292] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:18.184 [2024-07-13 08:14:09.892879] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:18.184 [2024-07-13 08:14:09.892948] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:18.184 [2024-07-13 08:14:09.892963] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:18.184 [2024-07-13 08:14:09.892975] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:18.184 [2024-07-13 08:14:09.892985] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:18.184 [2024-07-13 08:14:09.893051] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:18.184 [2024-07-13 08:14:09.893085] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:18.184 [2024-07-13 08:14:09.893114] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:27:18.184 [2024-07-13 08:14:09.893116] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:18.445 08:14:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:18.445 08:14:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:27:18.445 08:14:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:18.445 08:14:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:18.445 08:14:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:18.445 08:14:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:18.445 08:14:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:18.445 08:14:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:18.445 08:14:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:18.445 [2024-07-13 08:14:10.048841] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:18.445 08:14:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:18.445 08:14:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:27:18.445 08:14:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:27:18.445 08:14:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:18.445 08:14:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:18.445 08:14:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:18.445 08:14:10 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:18.445 08:14:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:18.445 08:14:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:18.445 08:14:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:18.445 08:14:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:18.445 08:14:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:18.445 08:14:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:18.445 08:14:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:18.445 08:14:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:18.445 08:14:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:18.445 08:14:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:18.445 08:14:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:18.445 08:14:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:18.445 08:14:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:18.445 08:14:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:18.445 08:14:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:18.445 08:14:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:18.445 08:14:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:18.445 08:14:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:18.445 08:14:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:18.445 08:14:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:27:18.445 08:14:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:18.445 08:14:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:18.445 Malloc1 00:27:18.445 [2024-07-13 08:14:10.138428] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:18.445 Malloc2 00:27:18.705 Malloc3 00:27:18.705 Malloc4 00:27:18.705 Malloc5 00:27:18.705 Malloc6 00:27:18.705 Malloc7 00:27:18.965 Malloc8 00:27:18.965 Malloc9 00:27:18.965 Malloc10 00:27:18.965 08:14:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:18.965 08:14:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:27:18.965 08:14:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:18.965 08:14:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:18.965 08:14:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=2040068 00:27:18.965 08:14:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 2040068 
/var/tmp/bdevperf.sock 00:27:18.965 08:14:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 2040068 ']' 00:27:18.965 08:14:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:18.965 08:14:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:27:18.965 08:14:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:18.965 08:14:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:18.965 08:14:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:18.965 08:14:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:27:18.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:18.965 08:14:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:18.965 08:14:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:27:18.965 08:14:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:18.965 08:14:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:18.965 08:14:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:18.965 { 00:27:18.965 "params": { 00:27:18.965 "name": "Nvme$subsystem", 00:27:18.965 "trtype": "$TEST_TRANSPORT", 00:27:18.965 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:18.965 "adrfam": "ipv4", 00:27:18.965 "trsvcid": "$NVMF_PORT", 00:27:18.965 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:18.965 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:18.965 "hdgst": ${hdgst:-false}, 00:27:18.965 "ddgst": ${ddgst:-false} 00:27:18.965 }, 00:27:18.965 "method": "bdev_nvme_attach_controller" 00:27:18.965 } 00:27:18.965 EOF 00:27:18.965 )") 00:27:18.965 08:14:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:18.965 08:14:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:18.965 08:14:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:18.965 { 00:27:18.965 "params": { 00:27:18.965 "name": "Nvme$subsystem", 00:27:18.965 "trtype": "$TEST_TRANSPORT", 00:27:18.965 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:18.965 "adrfam": "ipv4", 00:27:18.965 "trsvcid": "$NVMF_PORT", 00:27:18.965 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:18.965 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:18.965 "hdgst": ${hdgst:-false}, 00:27:18.965 "ddgst": ${ddgst:-false} 00:27:18.965 }, 00:27:18.965 "method": "bdev_nvme_attach_controller" 00:27:18.965 } 00:27:18.965 EOF 00:27:18.965 )") 00:27:18.965 08:14:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:18.965 08:14:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:18.965 08:14:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:18.965 { 00:27:18.965 "params": { 00:27:18.965 
"name": "Nvme$subsystem", 00:27:18.965 "trtype": "$TEST_TRANSPORT", 00:27:18.965 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:18.965 "adrfam": "ipv4", 00:27:18.965 "trsvcid": "$NVMF_PORT", 00:27:18.965 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:18.965 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:18.965 "hdgst": ${hdgst:-false}, 00:27:18.965 "ddgst": ${ddgst:-false} 00:27:18.965 }, 00:27:18.965 "method": "bdev_nvme_attach_controller" 00:27:18.965 } 00:27:18.965 EOF 00:27:18.965 )") 00:27:18.965 08:14:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:18.965 08:14:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:18.965 08:14:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:18.965 { 00:27:18.965 "params": { 00:27:18.965 "name": "Nvme$subsystem", 00:27:18.965 "trtype": "$TEST_TRANSPORT", 00:27:18.965 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:18.965 "adrfam": "ipv4", 00:27:18.965 "trsvcid": "$NVMF_PORT", 00:27:18.965 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:18.965 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:18.965 "hdgst": ${hdgst:-false}, 00:27:18.965 "ddgst": ${ddgst:-false} 00:27:18.965 }, 00:27:18.965 "method": "bdev_nvme_attach_controller" 00:27:18.965 } 00:27:18.965 EOF 00:27:18.965 )") 00:27:18.965 08:14:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:18.965 08:14:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:18.965 08:14:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:18.965 { 00:27:18.965 "params": { 00:27:18.965 "name": "Nvme$subsystem", 00:27:18.965 "trtype": "$TEST_TRANSPORT", 00:27:18.965 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:18.965 "adrfam": "ipv4", 00:27:18.965 "trsvcid": "$NVMF_PORT", 00:27:18.965 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:18.965 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:18.965 "hdgst": ${hdgst:-false}, 00:27:18.965 "ddgst": ${ddgst:-false} 00:27:18.965 }, 00:27:18.965 "method": "bdev_nvme_attach_controller" 00:27:18.965 } 00:27:18.965 EOF 00:27:18.965 )") 00:27:18.965 08:14:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:18.965 08:14:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:18.965 08:14:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:18.965 { 00:27:18.965 "params": { 00:27:18.965 "name": "Nvme$subsystem", 00:27:18.965 "trtype": "$TEST_TRANSPORT", 00:27:18.965 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:18.965 "adrfam": "ipv4", 00:27:18.965 "trsvcid": "$NVMF_PORT", 00:27:18.965 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:18.965 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:18.965 "hdgst": ${hdgst:-false}, 00:27:18.965 "ddgst": ${ddgst:-false} 00:27:18.965 }, 00:27:18.965 "method": "bdev_nvme_attach_controller" 00:27:18.965 } 00:27:18.965 EOF 00:27:18.965 )") 00:27:18.965 08:14:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:18.965 08:14:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:18.965 08:14:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:18.965 { 00:27:18.965 "params": { 00:27:18.965 "name": "Nvme$subsystem", 
00:27:18.965 "trtype": "$TEST_TRANSPORT", 00:27:18.965 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:18.965 "adrfam": "ipv4", 00:27:18.965 "trsvcid": "$NVMF_PORT", 00:27:18.965 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:18.965 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:18.965 "hdgst": ${hdgst:-false}, 00:27:18.965 "ddgst": ${ddgst:-false} 00:27:18.965 }, 00:27:18.965 "method": "bdev_nvme_attach_controller" 00:27:18.965 } 00:27:18.965 EOF 00:27:18.965 )") 00:27:18.965 08:14:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:18.965 08:14:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:18.965 08:14:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:18.965 { 00:27:18.965 "params": { 00:27:18.965 "name": "Nvme$subsystem", 00:27:18.965 "trtype": "$TEST_TRANSPORT", 00:27:18.965 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:18.965 "adrfam": "ipv4", 00:27:18.965 "trsvcid": "$NVMF_PORT", 00:27:18.965 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:18.965 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:18.965 "hdgst": ${hdgst:-false}, 00:27:18.965 "ddgst": ${ddgst:-false} 00:27:18.965 }, 00:27:18.965 "method": "bdev_nvme_attach_controller" 00:27:18.965 } 00:27:18.965 EOF 00:27:18.965 )") 00:27:18.965 08:14:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:18.965 08:14:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:18.965 08:14:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:18.965 { 00:27:18.965 "params": { 00:27:18.965 "name": "Nvme$subsystem", 00:27:18.966 "trtype": "$TEST_TRANSPORT", 00:27:18.966 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:18.966 "adrfam": "ipv4", 00:27:18.966 "trsvcid": "$NVMF_PORT", 00:27:18.966 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:18.966 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:18.966 "hdgst": ${hdgst:-false}, 00:27:18.966 "ddgst": ${ddgst:-false} 00:27:18.966 }, 00:27:18.966 "method": "bdev_nvme_attach_controller" 00:27:18.966 } 00:27:18.966 EOF 00:27:18.966 )") 00:27:18.966 08:14:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:18.966 08:14:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:18.966 08:14:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:18.966 { 00:27:18.966 "params": { 00:27:18.966 "name": "Nvme$subsystem", 00:27:18.966 "trtype": "$TEST_TRANSPORT", 00:27:18.966 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:18.966 "adrfam": "ipv4", 00:27:18.966 "trsvcid": "$NVMF_PORT", 00:27:18.966 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:18.966 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:18.966 "hdgst": ${hdgst:-false}, 00:27:18.966 "ddgst": ${ddgst:-false} 00:27:18.966 }, 00:27:18.966 "method": "bdev_nvme_attach_controller" 00:27:18.966 } 00:27:18.966 EOF 00:27:18.966 )") 00:27:18.966 08:14:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:18.966 08:14:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
00:27:18.966 08:14:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:27:18.966 08:14:10 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:18.966 "params": { 00:27:18.966 "name": "Nvme1", 00:27:18.966 "trtype": "tcp", 00:27:18.966 "traddr": "10.0.0.2", 00:27:18.966 "adrfam": "ipv4", 00:27:18.966 "trsvcid": "4420", 00:27:18.966 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:18.966 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:18.966 "hdgst": false, 00:27:18.966 "ddgst": false 00:27:18.966 }, 00:27:18.966 "method": "bdev_nvme_attach_controller" 00:27:18.966 },{ 00:27:18.966 "params": { 00:27:18.966 "name": "Nvme2", 00:27:18.966 "trtype": "tcp", 00:27:18.966 "traddr": "10.0.0.2", 00:27:18.966 "adrfam": "ipv4", 00:27:18.966 "trsvcid": "4420", 00:27:18.966 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:18.966 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:18.966 "hdgst": false, 00:27:18.966 "ddgst": false 00:27:18.966 }, 00:27:18.966 "method": "bdev_nvme_attach_controller" 00:27:18.966 },{ 00:27:18.966 "params": { 00:27:18.966 "name": "Nvme3", 00:27:18.966 "trtype": "tcp", 00:27:18.966 "traddr": "10.0.0.2", 00:27:18.966 "adrfam": "ipv4", 00:27:18.966 "trsvcid": "4420", 00:27:18.966 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:18.966 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:18.966 "hdgst": false, 00:27:18.966 "ddgst": false 00:27:18.966 }, 00:27:18.966 "method": "bdev_nvme_attach_controller" 00:27:18.966 },{ 00:27:18.966 "params": { 00:27:18.966 "name": "Nvme4", 00:27:18.966 "trtype": "tcp", 00:27:18.966 "traddr": "10.0.0.2", 00:27:18.966 "adrfam": "ipv4", 00:27:18.966 "trsvcid": "4420", 00:27:18.966 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:18.966 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:18.966 "hdgst": false, 00:27:18.966 "ddgst": false 00:27:18.966 }, 00:27:18.966 "method": "bdev_nvme_attach_controller" 00:27:18.966 },{ 00:27:18.966 "params": { 00:27:18.966 "name": "Nvme5", 00:27:18.966 "trtype": "tcp", 00:27:18.966 "traddr": "10.0.0.2", 00:27:18.966 "adrfam": "ipv4", 00:27:18.966 "trsvcid": "4420", 00:27:18.966 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:18.966 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:18.966 "hdgst": false, 00:27:18.966 "ddgst": false 00:27:18.966 }, 00:27:18.966 "method": "bdev_nvme_attach_controller" 00:27:18.966 },{ 00:27:18.966 "params": { 00:27:18.966 "name": "Nvme6", 00:27:18.966 "trtype": "tcp", 00:27:18.966 "traddr": "10.0.0.2", 00:27:18.966 "adrfam": "ipv4", 00:27:18.966 "trsvcid": "4420", 00:27:18.966 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:18.966 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:18.966 "hdgst": false, 00:27:18.966 "ddgst": false 00:27:18.966 }, 00:27:18.966 "method": "bdev_nvme_attach_controller" 00:27:18.966 },{ 00:27:18.966 "params": { 00:27:18.966 "name": "Nvme7", 00:27:18.966 "trtype": "tcp", 00:27:18.966 "traddr": "10.0.0.2", 00:27:18.966 "adrfam": "ipv4", 00:27:18.966 "trsvcid": "4420", 00:27:18.966 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:18.966 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:18.966 "hdgst": false, 00:27:18.966 "ddgst": false 00:27:18.966 }, 00:27:18.966 "method": "bdev_nvme_attach_controller" 00:27:18.966 },{ 00:27:18.966 "params": { 00:27:18.966 "name": "Nvme8", 00:27:18.966 "trtype": "tcp", 00:27:18.966 "traddr": "10.0.0.2", 00:27:18.966 "adrfam": "ipv4", 00:27:18.966 "trsvcid": "4420", 00:27:18.966 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:18.966 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:18.966 "hdgst": false, 
00:27:18.966 "ddgst": false 00:27:18.966 }, 00:27:18.966 "method": "bdev_nvme_attach_controller" 00:27:18.966 },{ 00:27:18.966 "params": { 00:27:18.966 "name": "Nvme9", 00:27:18.966 "trtype": "tcp", 00:27:18.966 "traddr": "10.0.0.2", 00:27:18.966 "adrfam": "ipv4", 00:27:18.966 "trsvcid": "4420", 00:27:18.966 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:18.966 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:18.966 "hdgst": false, 00:27:18.966 "ddgst": false 00:27:18.966 }, 00:27:18.966 "method": "bdev_nvme_attach_controller" 00:27:18.966 },{ 00:27:18.966 "params": { 00:27:18.966 "name": "Nvme10", 00:27:18.966 "trtype": "tcp", 00:27:18.966 "traddr": "10.0.0.2", 00:27:18.966 "adrfam": "ipv4", 00:27:18.966 "trsvcid": "4420", 00:27:18.966 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:18.966 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:18.966 "hdgst": false, 00:27:18.966 "ddgst": false 00:27:18.966 }, 00:27:18.966 "method": "bdev_nvme_attach_controller" 00:27:18.966 }' 00:27:18.966 [2024-07-13 08:14:10.642684] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:27:18.966 [2024-07-13 08:14:10.642758] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:27:18.966 EAL: No free 2048 kB hugepages reported on node 1 00:27:19.225 [2024-07-13 08:14:10.707403] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:19.225 [2024-07-13 08:14:10.794614] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:21.132 08:14:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:21.132 08:14:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:27:21.132 08:14:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:21.132 08:14:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:21.132 08:14:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:21.132 08:14:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:21.132 08:14:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 2040068 00:27:21.132 08:14:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:27:21.132 08:14:12 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:27:22.067 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 2040068 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:27:22.067 08:14:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 2039885 00:27:22.067 08:14:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:27:22.067 08:14:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:22.067 08:14:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:27:22.067 08:14:13 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:27:22.067 08:14:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:22.067 08:14:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:22.067 { 00:27:22.067 "params": { 00:27:22.067 "name": "Nvme$subsystem", 00:27:22.067 "trtype": "$TEST_TRANSPORT", 00:27:22.067 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:22.067 "adrfam": "ipv4", 00:27:22.067 "trsvcid": "$NVMF_PORT", 00:27:22.067 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:22.067 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:22.067 "hdgst": ${hdgst:-false}, 00:27:22.067 "ddgst": ${ddgst:-false} 00:27:22.067 }, 00:27:22.067 "method": "bdev_nvme_attach_controller" 00:27:22.067 } 00:27:22.067 EOF 00:27:22.067 )") 00:27:22.067 08:14:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:22.067 08:14:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:22.067 08:14:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:22.067 { 00:27:22.068 "params": { 00:27:22.068 "name": "Nvme$subsystem", 00:27:22.068 "trtype": "$TEST_TRANSPORT", 00:27:22.068 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:22.068 "adrfam": "ipv4", 00:27:22.068 "trsvcid": "$NVMF_PORT", 00:27:22.068 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:22.068 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:22.068 "hdgst": ${hdgst:-false}, 00:27:22.068 "ddgst": ${ddgst:-false} 00:27:22.068 }, 00:27:22.068 "method": "bdev_nvme_attach_controller" 00:27:22.068 } 00:27:22.068 EOF 00:27:22.068 )") 00:27:22.068 08:14:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:22.068 08:14:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:22.068 08:14:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:22.068 { 00:27:22.068 "params": { 00:27:22.068 "name": "Nvme$subsystem", 00:27:22.068 "trtype": "$TEST_TRANSPORT", 00:27:22.068 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:22.068 "adrfam": "ipv4", 00:27:22.068 "trsvcid": "$NVMF_PORT", 00:27:22.068 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:22.068 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:22.068 "hdgst": ${hdgst:-false}, 00:27:22.068 "ddgst": ${ddgst:-false} 00:27:22.068 }, 00:27:22.068 "method": "bdev_nvme_attach_controller" 00:27:22.068 } 00:27:22.068 EOF 00:27:22.068 )") 00:27:22.068 08:14:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:22.068 08:14:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:22.068 08:14:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:22.068 { 00:27:22.068 "params": { 00:27:22.068 "name": "Nvme$subsystem", 00:27:22.068 "trtype": "$TEST_TRANSPORT", 00:27:22.068 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:22.068 "adrfam": "ipv4", 00:27:22.068 "trsvcid": "$NVMF_PORT", 00:27:22.068 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:22.068 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:22.068 "hdgst": ${hdgst:-false}, 00:27:22.068 "ddgst": ${ddgst:-false} 00:27:22.068 }, 00:27:22.068 "method": "bdev_nvme_attach_controller" 00:27:22.068 } 00:27:22.068 EOF 00:27:22.068 )") 00:27:22.068 08:14:13 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:22.068 08:14:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:22.068 08:14:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:22.068 { 00:27:22.068 "params": { 00:27:22.068 "name": "Nvme$subsystem", 00:27:22.068 "trtype": "$TEST_TRANSPORT", 00:27:22.068 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:22.068 "adrfam": "ipv4", 00:27:22.068 "trsvcid": "$NVMF_PORT", 00:27:22.068 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:22.068 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:22.068 "hdgst": ${hdgst:-false}, 00:27:22.068 "ddgst": ${ddgst:-false} 00:27:22.068 }, 00:27:22.068 "method": "bdev_nvme_attach_controller" 00:27:22.068 } 00:27:22.068 EOF 00:27:22.068 )") 00:27:22.068 08:14:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:22.068 08:14:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:22.068 08:14:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:22.068 { 00:27:22.068 "params": { 00:27:22.068 "name": "Nvme$subsystem", 00:27:22.068 "trtype": "$TEST_TRANSPORT", 00:27:22.068 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:22.068 "adrfam": "ipv4", 00:27:22.068 "trsvcid": "$NVMF_PORT", 00:27:22.068 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:22.068 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:22.068 "hdgst": ${hdgst:-false}, 00:27:22.068 "ddgst": ${ddgst:-false} 00:27:22.068 }, 00:27:22.068 "method": "bdev_nvme_attach_controller" 00:27:22.068 } 00:27:22.068 EOF 00:27:22.068 )") 00:27:22.068 08:14:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:22.068 08:14:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:22.068 08:14:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:22.068 { 00:27:22.068 "params": { 00:27:22.068 "name": "Nvme$subsystem", 00:27:22.068 "trtype": "$TEST_TRANSPORT", 00:27:22.068 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:22.068 "adrfam": "ipv4", 00:27:22.068 "trsvcid": "$NVMF_PORT", 00:27:22.068 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:22.068 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:22.068 "hdgst": ${hdgst:-false}, 00:27:22.068 "ddgst": ${ddgst:-false} 00:27:22.068 }, 00:27:22.068 "method": "bdev_nvme_attach_controller" 00:27:22.068 } 00:27:22.068 EOF 00:27:22.068 )") 00:27:22.068 08:14:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:22.068 08:14:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:22.068 08:14:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:22.068 { 00:27:22.068 "params": { 00:27:22.068 "name": "Nvme$subsystem", 00:27:22.068 "trtype": "$TEST_TRANSPORT", 00:27:22.068 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:22.068 "adrfam": "ipv4", 00:27:22.068 "trsvcid": "$NVMF_PORT", 00:27:22.068 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:22.068 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:22.068 "hdgst": ${hdgst:-false}, 00:27:22.068 "ddgst": ${ddgst:-false} 00:27:22.068 }, 00:27:22.068 "method": "bdev_nvme_attach_controller" 00:27:22.068 } 00:27:22.068 EOF 00:27:22.068 )") 00:27:22.068 08:14:13 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:22.068 08:14:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:22.068 08:14:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:22.068 { 00:27:22.068 "params": { 00:27:22.068 "name": "Nvme$subsystem", 00:27:22.068 "trtype": "$TEST_TRANSPORT", 00:27:22.068 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:22.068 "adrfam": "ipv4", 00:27:22.068 "trsvcid": "$NVMF_PORT", 00:27:22.068 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:22.068 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:22.068 "hdgst": ${hdgst:-false}, 00:27:22.068 "ddgst": ${ddgst:-false} 00:27:22.068 }, 00:27:22.068 "method": "bdev_nvme_attach_controller" 00:27:22.068 } 00:27:22.068 EOF 00:27:22.068 )") 00:27:22.068 08:14:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:22.068 08:14:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:22.068 08:14:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:22.068 { 00:27:22.068 "params": { 00:27:22.068 "name": "Nvme$subsystem", 00:27:22.068 "trtype": "$TEST_TRANSPORT", 00:27:22.068 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:22.068 "adrfam": "ipv4", 00:27:22.068 "trsvcid": "$NVMF_PORT", 00:27:22.068 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:22.068 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:22.068 "hdgst": ${hdgst:-false}, 00:27:22.068 "ddgst": ${ddgst:-false} 00:27:22.068 }, 00:27:22.068 "method": "bdev_nvme_attach_controller" 00:27:22.068 } 00:27:22.068 EOF 00:27:22.068 )") 00:27:22.068 08:14:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:22.068 08:14:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
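This second pass through gen_nvmf_target_json feeds build/examples/bdevperf instead of bdev_svc; the resolved per-controller parameters follow below. The flags visible in the trace select queue depth 64, 64 KiB I/Os, the verify workload (write, read back, compare) and a one-second run. Reconstructed as a standalone sketch, where SPDK_DIR is a placeholder for a built SPDK checkout and gen_nvmf_target_json comes from the common.sh sourced at the top of this test:

# Sketch: the bdevperf invocation from the trace, paths generalized.
SPDK_DIR=/path/to/spdk                  # assumption: a built SPDK tree
source "$SPDK_DIR/test/nvmf/common.sh"  # provides gen_nvmf_target_json
# -q 64: queue depth, -o 65536: I/O size in bytes,
# -w verify: write/read/compare workload, -t 1: seconds to run
"$SPDK_DIR/build/examples/bdevperf" \
    --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
    -q 64 -o 65536 -w verify -t 1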
00:27:22.068 08:14:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:27:22.068 08:14:13 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:22.068 "params": { 00:27:22.068 "name": "Nvme1", 00:27:22.068 "trtype": "tcp", 00:27:22.068 "traddr": "10.0.0.2", 00:27:22.068 "adrfam": "ipv4", 00:27:22.068 "trsvcid": "4420", 00:27:22.068 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:22.068 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:22.068 "hdgst": false, 00:27:22.068 "ddgst": false 00:27:22.068 }, 00:27:22.068 "method": "bdev_nvme_attach_controller" 00:27:22.068 },{ 00:27:22.068 "params": { 00:27:22.068 "name": "Nvme2", 00:27:22.068 "trtype": "tcp", 00:27:22.068 "traddr": "10.0.0.2", 00:27:22.068 "adrfam": "ipv4", 00:27:22.068 "trsvcid": "4420", 00:27:22.068 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:22.068 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:22.068 "hdgst": false, 00:27:22.068 "ddgst": false 00:27:22.068 }, 00:27:22.068 "method": "bdev_nvme_attach_controller" 00:27:22.068 },{ 00:27:22.068 "params": { 00:27:22.068 "name": "Nvme3", 00:27:22.068 "trtype": "tcp", 00:27:22.068 "traddr": "10.0.0.2", 00:27:22.068 "adrfam": "ipv4", 00:27:22.068 "trsvcid": "4420", 00:27:22.068 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:22.068 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:22.068 "hdgst": false, 00:27:22.068 "ddgst": false 00:27:22.068 }, 00:27:22.068 "method": "bdev_nvme_attach_controller" 00:27:22.068 },{ 00:27:22.068 "params": { 00:27:22.068 "name": "Nvme4", 00:27:22.068 "trtype": "tcp", 00:27:22.068 "traddr": "10.0.0.2", 00:27:22.068 "adrfam": "ipv4", 00:27:22.068 "trsvcid": "4420", 00:27:22.068 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:22.068 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:22.068 "hdgst": false, 00:27:22.068 "ddgst": false 00:27:22.068 }, 00:27:22.068 "method": "bdev_nvme_attach_controller" 00:27:22.068 },{ 00:27:22.068 "params": { 00:27:22.068 "name": "Nvme5", 00:27:22.068 "trtype": "tcp", 00:27:22.068 "traddr": "10.0.0.2", 00:27:22.068 "adrfam": "ipv4", 00:27:22.068 "trsvcid": "4420", 00:27:22.068 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:22.069 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:22.069 "hdgst": false, 00:27:22.069 "ddgst": false 00:27:22.069 }, 00:27:22.069 "method": "bdev_nvme_attach_controller" 00:27:22.069 },{ 00:27:22.069 "params": { 00:27:22.069 "name": "Nvme6", 00:27:22.069 "trtype": "tcp", 00:27:22.069 "traddr": "10.0.0.2", 00:27:22.069 "adrfam": "ipv4", 00:27:22.069 "trsvcid": "4420", 00:27:22.069 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:22.069 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:22.069 "hdgst": false, 00:27:22.069 "ddgst": false 00:27:22.069 }, 00:27:22.069 "method": "bdev_nvme_attach_controller" 00:27:22.069 },{ 00:27:22.069 "params": { 00:27:22.069 "name": "Nvme7", 00:27:22.069 "trtype": "tcp", 00:27:22.069 "traddr": "10.0.0.2", 00:27:22.069 "adrfam": "ipv4", 00:27:22.069 "trsvcid": "4420", 00:27:22.069 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:22.069 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:22.069 "hdgst": false, 00:27:22.069 "ddgst": false 00:27:22.069 }, 00:27:22.069 "method": "bdev_nvme_attach_controller" 00:27:22.069 },{ 00:27:22.069 "params": { 00:27:22.069 "name": "Nvme8", 00:27:22.069 "trtype": "tcp", 00:27:22.069 "traddr": "10.0.0.2", 00:27:22.069 "adrfam": "ipv4", 00:27:22.069 "trsvcid": "4420", 00:27:22.069 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:22.069 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:22.069 "hdgst": false, 
00:27:22.069 "ddgst": false 00:27:22.069 }, 00:27:22.069 "method": "bdev_nvme_attach_controller" 00:27:22.069 },{ 00:27:22.069 "params": { 00:27:22.069 "name": "Nvme9", 00:27:22.069 "trtype": "tcp", 00:27:22.069 "traddr": "10.0.0.2", 00:27:22.069 "adrfam": "ipv4", 00:27:22.069 "trsvcid": "4420", 00:27:22.069 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:22.069 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:22.069 "hdgst": false, 00:27:22.069 "ddgst": false 00:27:22.069 }, 00:27:22.069 "method": "bdev_nvme_attach_controller" 00:27:22.069 },{ 00:27:22.069 "params": { 00:27:22.069 "name": "Nvme10", 00:27:22.069 "trtype": "tcp", 00:27:22.069 "traddr": "10.0.0.2", 00:27:22.069 "adrfam": "ipv4", 00:27:22.069 "trsvcid": "4420", 00:27:22.069 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:22.069 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:22.069 "hdgst": false, 00:27:22.069 "ddgst": false 00:27:22.069 }, 00:27:22.069 "method": "bdev_nvme_attach_controller" 00:27:22.069 }' 00:27:22.069 [2024-07-13 08:14:13.671539] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:27:22.069 [2024-07-13 08:14:13.671626] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2040481 ] 00:27:22.069 EAL: No free 2048 kB hugepages reported on node 1 00:27:22.069 [2024-07-13 08:14:13.737699] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:22.328 [2024-07-13 08:14:13.827658] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:24.228 Running I/O for 1 seconds... 00:27:25.161 00:27:25.161 Latency(us) 00:27:25.161 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:25.161 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:25.161 Verification LBA range: start 0x0 length 0x400 00:27:25.161 Nvme1n1 : 1.15 223.02 13.94 0.00 0.00 284209.11 20777.34 264085.81 00:27:25.161 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:25.161 Verification LBA range: start 0x0 length 0x400 00:27:25.161 Nvme2n1 : 1.13 169.17 10.57 0.00 0.00 367594.26 41360.50 307582.29 00:27:25.161 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:25.161 Verification LBA range: start 0x0 length 0x400 00:27:25.161 Nvme3n1 : 1.15 277.80 17.36 0.00 0.00 220594.78 15728.64 256318.58 00:27:25.161 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:25.161 Verification LBA range: start 0x0 length 0x400 00:27:25.161 Nvme4n1 : 1.13 225.88 14.12 0.00 0.00 266726.40 16990.81 259425.47 00:27:25.161 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:25.161 Verification LBA range: start 0x0 length 0x400 00:27:25.161 Nvme5n1 : 1.16 220.14 13.76 0.00 0.00 269045.19 22622.06 278066.82 00:27:25.161 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:25.161 Verification LBA range: start 0x0 length 0x400 00:27:25.161 Nvme6n1 : 1.18 217.66 13.60 0.00 0.00 268187.88 19126.80 284280.60 00:27:25.161 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:25.161 Verification LBA range: start 0x0 length 0x400 00:27:25.161 Nvme7n1 : 1.12 232.14 14.51 0.00 0.00 244315.97 3956.43 256318.58 00:27:25.161 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:25.161 Verification LBA range: start 
0x0 length 0x400 00:27:25.161 Nvme8n1 : 1.16 274.95 17.18 0.00 0.00 204658.01 17282.09 233016.89 00:27:25.161 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:25.161 Verification LBA range: start 0x0 length 0x400 00:27:25.161 Nvme9n1 : 1.16 221.24 13.83 0.00 0.00 250004.29 21748.24 260978.92 00:27:25.161 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:25.161 Verification LBA range: start 0x0 length 0x400 00:27:25.161 Nvme10n1 : 1.17 218.78 13.67 0.00 0.00 248836.93 25243.50 281173.71 00:27:25.161 =================================================================================================================== 00:27:25.161 Total : 2280.78 142.55 0.00 0.00 257398.24 3956.43 307582.29 00:27:25.418 08:14:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:27:25.418 08:14:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:27:25.418 08:14:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:27:25.418 08:14:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:25.418 08:14:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:27:25.418 08:14:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:25.418 08:14:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:27:25.418 08:14:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:25.418 08:14:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:27:25.418 08:14:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:25.418 08:14:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:25.418 rmmod nvme_tcp 00:27:25.418 rmmod nvme_fabrics 00:27:25.418 rmmod nvme_keyring 00:27:25.418 08:14:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:25.418 08:14:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:27:25.418 08:14:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:27:25.418 08:14:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 2039885 ']' 00:27:25.418 08:14:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 2039885 00:27:25.418 08:14:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@948 -- # '[' -z 2039885 ']' 00:27:25.418 08:14:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # kill -0 2039885 00:27:25.418 08:14:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # uname 00:27:25.418 08:14:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:25.418 08:14:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2039885 00:27:25.418 08:14:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:27:25.418 08:14:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 
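The killprocess sequence traced above follows a recurring autotest pattern: verify the pid is non-empty, probe it with kill -0, resolve the process name so reactors launched under sudo get signalled with sudo, then echo, kill, and wait. A minimal bash sketch of that pattern, reconstructed from the visible xtrace rather than copied from autotest_common.sh (exact internals of the real helper may differ):

killprocess() {
    local pid=$1 process_name=
    [ -n "$pid" ] || return 1               # the '[' -z $pid ']' guard in the trace
    kill -0 "$pid" 2>/dev/null || return 0  # nothing to do if the process is already gone
    if [ "$(uname)" = Linux ]; then
        # resolve the comm name, as in 'ps --no-headers -o comm=' above
        process_name=$(ps --no-headers -o comm= "$pid")
    fi
    echo "killing process with pid $pid"
    if [ "$process_name" = sudo ]; then
        sudo kill "$pid"                    # reactor owned by sudo needs sudo to signal
    else
        kill "$pid"
    fi
    wait "$pid" 2>/dev/null || true         # reap the reactor before module/netns teardown
}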
00:27:25.418 08:14:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2039885' 00:27:25.418 killing process with pid 2039885 00:27:25.418 08:14:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@967 -- # kill 2039885 00:27:25.418 08:14:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # wait 2039885 00:27:25.986 08:14:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:25.986 08:14:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:25.986 08:14:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:25.986 08:14:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:25.986 08:14:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:25.986 08:14:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:25.986 08:14:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:25.986 08:14:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:27.890 08:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:27.890 00:27:27.890 real 0m11.865s 00:27:27.890 user 0m34.797s 00:27:27.890 sys 0m3.232s 00:27:27.890 08:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:27.890 08:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:27.890 ************************************ 00:27:27.890 END TEST nvmf_shutdown_tc1 00:27:27.890 ************************************ 00:27:27.890 08:14:19 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:27:27.890 08:14:19 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:27:27.890 08:14:19 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:27.890 08:14:19 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:27.890 08:14:19 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:27.890 ************************************ 00:27:27.890 START TEST nvmf_shutdown_tc2 00:27:27.890 ************************************ 00:27:27.891 08:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc2 00:27:27.891 08:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:27:27.891 08:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:27:27.891 08:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:27.891 08:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:27.891 08:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:27.891 08:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:27.891 08:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:27.891 08:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:27:27.891 08:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:27.891 08:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:27.891 08:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:27.891 08:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:27.891 08:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:27:27.891 08:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:27.891 08:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:27.891 08:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:27:27.891 08:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:27.891 08:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:27.891 08:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:27.891 08:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:27.891 08:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:27.891 08:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:27:27.891 08:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:27.891 08:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:27:27.891 08:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:27:27.891 08:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:27:27.891 08:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:27:27.891 08:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:27:27.891 08:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:27:27.891 08:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:27.891 08:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:27.891 08:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:27.891 08:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:27.891 08:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:27.891 08:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:27.891 08:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:27.891 08:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:27.891 08:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:27.891 08:14:19 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:27.891 08:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:27.891 08:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:27.891 08:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:27.891 08:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:27.891 08:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:27.891 08:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:27.891 08:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:27.891 08:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:27.891 08:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:27.891 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:27.891 08:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:27.891 08:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:27.891 08:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:27.891 08:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:27.891 08:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:27.891 08:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:27.891 08:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:27.891 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:27.891 08:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:27.891 08:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:27.891 08:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:27.891 08:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:27.891 08:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:27.891 08:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:27.891 08:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:27.891 08:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:27.891 08:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:27.891 08:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:27.891 08:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:27.891 08:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:27.891 08:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == 
up ]] 00:27:27.891 08:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:27.891 08:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:27.891 08:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:27.891 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:27.891 08:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:27.891 08:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:27.891 08:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:27.891 08:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:27.891 08:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:27.891 08:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:27.891 08:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:27.891 08:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:27.891 08:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:27.891 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:27.891 08:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:27.891 08:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:27.891 08:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:27:27.891 08:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:27.891 08:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:27.891 08:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:27.891 08:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:27.891 08:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:27.891 08:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:27.891 08:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:27.891 08:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:27.891 08:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:27.891 08:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:27.891 08:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:27.891 08:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:27.891 08:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:27.891 08:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 
addr flush cvl_0_1 00:27:27.891 08:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:27.891 08:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:28.150 08:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:28.150 08:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:28.150 08:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:28.150 08:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:28.150 08:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:28.150 08:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:28.150 08:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:28.150 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:28.150 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.135 ms 00:27:28.150 00:27:28.150 --- 10.0.0.2 ping statistics --- 00:27:28.150 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:28.150 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:27:28.150 08:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:28.150 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:28.150 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.207 ms 00:27:28.150 00:27:28.150 --- 10.0.0.1 ping statistics --- 00:27:28.150 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:28.150 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:27:28.150 08:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:28.150 08:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:27:28.150 08:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:28.150 08:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:28.150 08:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:28.150 08:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:28.150 08:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:28.150 08:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:28.150 08:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:28.150 08:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:27:28.150 08:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:28.150 08:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:28.150 08:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:28.150 08:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@481 -- # nvmfpid=2041244 00:27:28.150 08:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:27:28.150 08:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2041244 00:27:28.150 08:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 2041244 ']' 00:27:28.150 08:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:28.150 08:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:28.150 08:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:28.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:28.150 08:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:28.150 08:14:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:28.150 [2024-07-13 08:14:19.822476] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:27:28.150 [2024-07-13 08:14:19.822560] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:28.150 EAL: No free 2048 kB hugepages reported on node 1 00:27:28.407 [2024-07-13 08:14:19.888984] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:28.407 [2024-07-13 08:14:19.978440] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:28.407 [2024-07-13 08:14:19.978518] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:28.407 [2024-07-13 08:14:19.978546] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:28.407 [2024-07-13 08:14:19.978557] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:28.407 [2024-07-13 08:14:19.978567] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
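Here nvmf_tgt is launched inside the cvl_0_0_ns_spdk namespace with core mask 0x1E (cores 1-4), and waitforlisten blocks until the target both stays alive and answers on /var/tmp/spdk.sock. A sketch of that wait loop, assuming the socket probe is done via a cheap RPC such as rpc_get_methods; the trace only shows the retry counter (max_retries=100) and the socket path, so the probe mechanism is an assumption:

waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for ((i = 100; i > 0; i--)); do          # max_retries=100, as in the trace
        kill -0 "$pid" 2>/dev/null || return 1   # target died during startup
        # assumed probe: any lightweight RPC proves the socket accepts requests
        if scripts/rpc.py -s "$rpc_addr" -t 1 rpc_get_methods &>/dev/null; then
            return 0
        fi
        sleep 0.5
    done
    return 1                                  # never came up within the retry budget
}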
00:27:28.407 [2024-07-13 08:14:19.978852] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:28.407 [2024-07-13 08:14:19.978911] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:28.407 [2024-07-13 08:14:19.978978] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:27:28.407 [2024-07-13 08:14:19.978981] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:28.407 08:14:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:28.407 08:14:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:27:28.407 08:14:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:28.407 08:14:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:28.407 08:14:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:28.407 08:14:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:28.407 08:14:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:28.407 08:14:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.407 08:14:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:28.407 [2024-07-13 08:14:20.130595] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:28.407 08:14:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.407 08:14:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:27:28.407 08:14:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:27:28.407 08:14:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:28.407 08:14:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:28.665 08:14:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:28.665 08:14:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:28.665 08:14:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:28.665 08:14:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:28.665 08:14:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:28.665 08:14:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:28.665 08:14:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:28.665 08:14:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:28.665 08:14:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:28.665 08:14:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:28.665 08:14:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:28.665 08:14:20 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:28.665 08:14:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:28.665 08:14:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:28.665 08:14:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:28.665 08:14:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:28.665 08:14:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:28.665 08:14:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:28.665 08:14:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:28.665 08:14:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:28.665 08:14:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:28.665 08:14:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:27:28.665 08:14:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.665 08:14:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:28.665 Malloc1 00:27:28.665 [2024-07-13 08:14:20.216344] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:28.665 Malloc2 00:27:28.665 Malloc3 00:27:28.665 Malloc4 00:27:28.665 Malloc5 00:27:28.924 Malloc6 00:27:28.924 Malloc7 00:27:28.924 Malloc8 00:27:28.924 Malloc9 00:27:28.924 Malloc10 00:27:28.924 08:14:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.924 08:14:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:27:28.924 08:14:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:28.924 08:14:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:29.183 08:14:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=2041425 00:27:29.183 08:14:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 2041425 /var/tmp/bdevperf.sock 00:27:29.183 08:14:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 2041425 ']' 00:27:29.183 08:14:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:29.183 08:14:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:27:29.183 08:14:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:29.183 08:14:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:29.183 08:14:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:27:29.183 08:14:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
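The traced shutdown.sh@102-104 lines launch bdevperf with its JSON config fed through a process substitution, which is why the command line records the literal path /dev/fd/63, and then reuse the waitforlisten pattern against bdevperf's own RPC socket. Roughly, with the flags copied from the trace and the binary path assumed relative to the repo root:

build/examples/bdevperf -r /var/tmp/bdevperf.sock \
    --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
    -q 64 -o 65536 -w verify -t 10 &
perfpid=$!
waitforlisten "$perfpid" /var/tmp/bdevperf.sock   # bdevperf exposes its own RPC socket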
00:27:29.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:29.183 08:14:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:27:29.183 08:14:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:29.183 08:14:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:29.183 08:14:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:29.183 08:14:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:29.183 { 00:27:29.183 "params": { 00:27:29.183 "name": "Nvme$subsystem", 00:27:29.183 "trtype": "$TEST_TRANSPORT", 00:27:29.183 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:29.183 "adrfam": "ipv4", 00:27:29.183 "trsvcid": "$NVMF_PORT", 00:27:29.183 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:29.183 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:29.183 "hdgst": ${hdgst:-false}, 00:27:29.183 "ddgst": ${ddgst:-false} 00:27:29.183 }, 00:27:29.183 "method": "bdev_nvme_attach_controller" 00:27:29.183 } 00:27:29.183 EOF 00:27:29.183 )") 00:27:29.183 08:14:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:29.183 08:14:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:29.183 08:14:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:29.183 { 00:27:29.183 "params": { 00:27:29.183 "name": "Nvme$subsystem", 00:27:29.183 "trtype": "$TEST_TRANSPORT", 00:27:29.183 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:29.183 "adrfam": "ipv4", 00:27:29.183 "trsvcid": "$NVMF_PORT", 00:27:29.183 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:29.183 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:29.183 "hdgst": ${hdgst:-false}, 00:27:29.183 "ddgst": ${ddgst:-false} 00:27:29.183 }, 00:27:29.183 "method": "bdev_nvme_attach_controller" 00:27:29.183 } 00:27:29.183 EOF 00:27:29.183 )") 00:27:29.183 08:14:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:29.183 08:14:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:29.183 08:14:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:29.183 { 00:27:29.183 "params": { 00:27:29.183 "name": "Nvme$subsystem", 00:27:29.183 "trtype": "$TEST_TRANSPORT", 00:27:29.183 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:29.183 "adrfam": "ipv4", 00:27:29.183 "trsvcid": "$NVMF_PORT", 00:27:29.183 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:29.183 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:29.183 "hdgst": ${hdgst:-false}, 00:27:29.183 "ddgst": ${ddgst:-false} 00:27:29.183 }, 00:27:29.183 "method": "bdev_nvme_attach_controller" 00:27:29.183 } 00:27:29.183 EOF 00:27:29.183 )") 00:27:29.183 08:14:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:29.183 08:14:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:29.183 08:14:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:29.183 { 00:27:29.183 "params": { 00:27:29.183 "name": "Nvme$subsystem", 00:27:29.183 "trtype": "$TEST_TRANSPORT", 00:27:29.183 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:29.183 "adrfam": "ipv4", 00:27:29.183 "trsvcid": "$NVMF_PORT", 
00:27:29.183 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:29.183 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:29.183 "hdgst": ${hdgst:-false}, 00:27:29.183 "ddgst": ${ddgst:-false} 00:27:29.183 }, 00:27:29.183 "method": "bdev_nvme_attach_controller" 00:27:29.183 } 00:27:29.183 EOF 00:27:29.183 )") 00:27:29.183 08:14:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:29.183 08:14:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:29.183 08:14:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:29.183 { 00:27:29.183 "params": { 00:27:29.183 "name": "Nvme$subsystem", 00:27:29.183 "trtype": "$TEST_TRANSPORT", 00:27:29.183 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:29.183 "adrfam": "ipv4", 00:27:29.183 "trsvcid": "$NVMF_PORT", 00:27:29.183 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:29.183 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:29.183 "hdgst": ${hdgst:-false}, 00:27:29.183 "ddgst": ${ddgst:-false} 00:27:29.183 }, 00:27:29.183 "method": "bdev_nvme_attach_controller" 00:27:29.183 } 00:27:29.183 EOF 00:27:29.183 )") 00:27:29.183 08:14:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:29.183 08:14:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:29.183 08:14:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:29.183 { 00:27:29.183 "params": { 00:27:29.183 "name": "Nvme$subsystem", 00:27:29.183 "trtype": "$TEST_TRANSPORT", 00:27:29.183 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:29.183 "adrfam": "ipv4", 00:27:29.183 "trsvcid": "$NVMF_PORT", 00:27:29.183 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:29.183 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:29.183 "hdgst": ${hdgst:-false}, 00:27:29.183 "ddgst": ${ddgst:-false} 00:27:29.183 }, 00:27:29.183 "method": "bdev_nvme_attach_controller" 00:27:29.183 } 00:27:29.183 EOF 00:27:29.183 )") 00:27:29.183 08:14:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:29.183 08:14:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:29.183 08:14:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:29.183 { 00:27:29.183 "params": { 00:27:29.183 "name": "Nvme$subsystem", 00:27:29.183 "trtype": "$TEST_TRANSPORT", 00:27:29.183 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:29.183 "adrfam": "ipv4", 00:27:29.183 "trsvcid": "$NVMF_PORT", 00:27:29.183 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:29.183 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:29.183 "hdgst": ${hdgst:-false}, 00:27:29.183 "ddgst": ${ddgst:-false} 00:27:29.183 }, 00:27:29.183 "method": "bdev_nvme_attach_controller" 00:27:29.183 } 00:27:29.183 EOF 00:27:29.183 )") 00:27:29.183 08:14:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:29.183 08:14:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:29.183 08:14:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:29.183 { 00:27:29.183 "params": { 00:27:29.183 "name": "Nvme$subsystem", 00:27:29.183 "trtype": "$TEST_TRANSPORT", 00:27:29.183 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:29.183 "adrfam": "ipv4", 00:27:29.183 "trsvcid": "$NVMF_PORT", 00:27:29.183 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:27:29.183 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:29.183 "hdgst": ${hdgst:-false}, 00:27:29.183 "ddgst": ${ddgst:-false} 00:27:29.183 }, 00:27:29.183 "method": "bdev_nvme_attach_controller" 00:27:29.183 } 00:27:29.183 EOF 00:27:29.183 )") 00:27:29.183 08:14:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:29.183 08:14:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:29.184 08:14:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:29.184 { 00:27:29.184 "params": { 00:27:29.184 "name": "Nvme$subsystem", 00:27:29.184 "trtype": "$TEST_TRANSPORT", 00:27:29.184 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:29.184 "adrfam": "ipv4", 00:27:29.184 "trsvcid": "$NVMF_PORT", 00:27:29.184 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:29.184 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:29.184 "hdgst": ${hdgst:-false}, 00:27:29.184 "ddgst": ${ddgst:-false} 00:27:29.184 }, 00:27:29.184 "method": "bdev_nvme_attach_controller" 00:27:29.184 } 00:27:29.184 EOF 00:27:29.184 )") 00:27:29.184 08:14:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:29.184 08:14:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:29.184 08:14:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:29.184 { 00:27:29.184 "params": { 00:27:29.184 "name": "Nvme$subsystem", 00:27:29.184 "trtype": "$TEST_TRANSPORT", 00:27:29.184 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:29.184 "adrfam": "ipv4", 00:27:29.184 "trsvcid": "$NVMF_PORT", 00:27:29.184 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:29.184 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:29.184 "hdgst": ${hdgst:-false}, 00:27:29.184 "ddgst": ${ddgst:-false} 00:27:29.184 }, 00:27:29.184 "method": "bdev_nvme_attach_controller" 00:27:29.184 } 00:27:29.184 EOF 00:27:29.184 )") 00:27:29.184 08:14:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:29.184 08:14:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 
00:27:29.184 08:14:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:27:29.184 08:14:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:29.184 "params": { 00:27:29.184 "name": "Nvme1", 00:27:29.184 "trtype": "tcp", 00:27:29.184 "traddr": "10.0.0.2", 00:27:29.184 "adrfam": "ipv4", 00:27:29.184 "trsvcid": "4420", 00:27:29.184 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:29.184 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:29.184 "hdgst": false, 00:27:29.184 "ddgst": false 00:27:29.184 }, 00:27:29.184 "method": "bdev_nvme_attach_controller" 00:27:29.184 },{ 00:27:29.184 "params": { 00:27:29.184 "name": "Nvme2", 00:27:29.184 "trtype": "tcp", 00:27:29.184 "traddr": "10.0.0.2", 00:27:29.184 "adrfam": "ipv4", 00:27:29.184 "trsvcid": "4420", 00:27:29.184 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:29.184 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:29.184 "hdgst": false, 00:27:29.184 "ddgst": false 00:27:29.184 }, 00:27:29.184 "method": "bdev_nvme_attach_controller" 00:27:29.184 },{ 00:27:29.184 "params": { 00:27:29.184 "name": "Nvme3", 00:27:29.184 "trtype": "tcp", 00:27:29.184 "traddr": "10.0.0.2", 00:27:29.184 "adrfam": "ipv4", 00:27:29.184 "trsvcid": "4420", 00:27:29.184 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:29.184 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:29.184 "hdgst": false, 00:27:29.184 "ddgst": false 00:27:29.184 }, 00:27:29.184 "method": "bdev_nvme_attach_controller" 00:27:29.184 },{ 00:27:29.184 "params": { 00:27:29.184 "name": "Nvme4", 00:27:29.184 "trtype": "tcp", 00:27:29.184 "traddr": "10.0.0.2", 00:27:29.184 "adrfam": "ipv4", 00:27:29.184 "trsvcid": "4420", 00:27:29.184 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:29.184 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:29.184 "hdgst": false, 00:27:29.184 "ddgst": false 00:27:29.184 }, 00:27:29.184 "method": "bdev_nvme_attach_controller" 00:27:29.184 },{ 00:27:29.184 "params": { 00:27:29.184 "name": "Nvme5", 00:27:29.184 "trtype": "tcp", 00:27:29.184 "traddr": "10.0.0.2", 00:27:29.184 "adrfam": "ipv4", 00:27:29.184 "trsvcid": "4420", 00:27:29.184 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:29.184 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:29.184 "hdgst": false, 00:27:29.184 "ddgst": false 00:27:29.184 }, 00:27:29.184 "method": "bdev_nvme_attach_controller" 00:27:29.184 },{ 00:27:29.184 "params": { 00:27:29.184 "name": "Nvme6", 00:27:29.184 "trtype": "tcp", 00:27:29.184 "traddr": "10.0.0.2", 00:27:29.184 "adrfam": "ipv4", 00:27:29.184 "trsvcid": "4420", 00:27:29.184 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:29.184 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:29.184 "hdgst": false, 00:27:29.184 "ddgst": false 00:27:29.184 }, 00:27:29.184 "method": "bdev_nvme_attach_controller" 00:27:29.184 },{ 00:27:29.184 "params": { 00:27:29.184 "name": "Nvme7", 00:27:29.184 "trtype": "tcp", 00:27:29.184 "traddr": "10.0.0.2", 00:27:29.184 "adrfam": "ipv4", 00:27:29.184 "trsvcid": "4420", 00:27:29.184 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:29.184 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:29.184 "hdgst": false, 00:27:29.184 "ddgst": false 00:27:29.184 }, 00:27:29.184 "method": "bdev_nvme_attach_controller" 00:27:29.184 },{ 00:27:29.184 "params": { 00:27:29.184 "name": "Nvme8", 00:27:29.184 "trtype": "tcp", 00:27:29.184 "traddr": "10.0.0.2", 00:27:29.184 "adrfam": "ipv4", 00:27:29.184 "trsvcid": "4420", 00:27:29.184 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:29.184 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:29.184 "hdgst": false, 
00:27:29.184 "ddgst": false 00:27:29.184 }, 00:27:29.184 "method": "bdev_nvme_attach_controller" 00:27:29.184 },{ 00:27:29.184 "params": { 00:27:29.184 "name": "Nvme9", 00:27:29.184 "trtype": "tcp", 00:27:29.184 "traddr": "10.0.0.2", 00:27:29.184 "adrfam": "ipv4", 00:27:29.184 "trsvcid": "4420", 00:27:29.184 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:29.184 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:29.184 "hdgst": false, 00:27:29.184 "ddgst": false 00:27:29.184 }, 00:27:29.184 "method": "bdev_nvme_attach_controller" 00:27:29.184 },{ 00:27:29.184 "params": { 00:27:29.184 "name": "Nvme10", 00:27:29.184 "trtype": "tcp", 00:27:29.184 "traddr": "10.0.0.2", 00:27:29.184 "adrfam": "ipv4", 00:27:29.184 "trsvcid": "4420", 00:27:29.184 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:29.184 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:29.184 "hdgst": false, 00:27:29.184 "ddgst": false 00:27:29.184 }, 00:27:29.184 "method": "bdev_nvme_attach_controller" 00:27:29.184 }' 00:27:29.184 [2024-07-13 08:14:20.712948] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:27:29.184 [2024-07-13 08:14:20.713024] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2041425 ] 00:27:29.184 EAL: No free 2048 kB hugepages reported on node 1 00:27:29.184 [2024-07-13 08:14:20.776688] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:29.184 [2024-07-13 08:14:20.863159] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:31.087 Running I/O for 10 seconds... 00:27:31.087 08:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:31.087 08:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:27:31.087 08:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:31.087 08:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.087 08:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:31.087 08:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.088 08:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:27:31.088 08:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:27:31.088 08:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:27:31.088 08:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:27:31.088 08:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:27:31.088 08:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:27:31.088 08:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:31.088 08:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:31.088 08:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:31.088 08:14:22 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.088 08:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:31.088 08:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.088 08:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:27:31.088 08:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:27:31.088 08:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:27:31.345 08:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:27:31.345 08:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:31.345 08:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:31.345 08:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:31.345 08:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.345 08:14:22 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:31.345 08:14:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.345 08:14:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:27:31.345 08:14:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:27:31.345 08:14:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:27:31.601 08:14:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:27:31.601 08:14:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:31.601 08:14:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:31.601 08:14:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:31.601 08:14:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.601 08:14:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:31.601 08:14:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.601 08:14:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:27:31.601 08:14:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:27:31.601 08:14:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:27:31.601 08:14:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:27:31.601 08:14:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:27:31.601 08:14:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 2041425 00:27:31.601 08:14:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 2041425 ']' 00:27:31.601 08:14:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 2041425 00:27:31.601 08:14:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@953 -- # uname 00:27:31.601 08:14:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:31.601 08:14:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2041425 00:27:31.896 08:14:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:31.896 08:14:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:31.896 08:14:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2041425' 00:27:31.896 killing process with pid 2041425 00:27:31.896 08:14:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 2041425 00:27:31.896 08:14:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 2041425 00:27:31.896 Received shutdown signal, test time was about 0.931955 seconds 00:27:31.896 00:27:31.896 Latency(us) 00:27:31.896 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:31.896 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:31.896 Verification LBA range: start 0x0 length 0x400 00:27:31.896 Nvme1n1 : 0.90 213.17 13.32 0.00 0.00 296675.05 21942.42 271853.04 00:27:31.896 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:31.896 Verification LBA range: start 0x0 length 0x400 00:27:31.896 Nvme2n1 : 0.93 274.93 17.18 0.00 0.00 225508.50 19029.71 253211.69 00:27:31.896 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:31.896 Verification LBA range: start 0x0 length 0x400 00:27:31.896 Nvme3n1 : 0.90 214.32 13.39 0.00 0.00 281881.92 21456.97 248551.35 00:27:31.896 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:31.896 Verification LBA range: start 0x0 length 0x400 00:27:31.896 Nvme4n1 : 0.91 281.39 17.59 0.00 0.00 210992.73 14854.83 253211.69 00:27:31.896 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:31.896 Verification LBA range: start 0x0 length 0x400 00:27:31.896 Nvme5n1 : 0.91 210.25 13.14 0.00 0.00 276271.66 26214.40 273406.48 00:27:31.896 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:31.896 Verification LBA range: start 0x0 length 0x400 00:27:31.896 Nvme6n1 : 0.88 218.27 13.64 0.00 0.00 259349.62 23107.51 246997.90 00:27:31.896 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:31.896 Verification LBA range: start 0x0 length 0x400 00:27:31.896 Nvme7n1 : 0.92 207.96 13.00 0.00 0.00 267874.86 40972.14 267192.70 00:27:31.896 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:31.896 Verification LBA range: start 0x0 length 0x400 00:27:31.896 Nvme8n1 : 0.93 276.48 17.28 0.00 0.00 197066.15 16602.45 257872.02 00:27:31.896 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:31.896 Verification LBA range: start 0x0 length 0x400 00:27:31.896 Nvme9n1 : 0.92 208.66 13.04 0.00 0.00 254931.75 22913.33 285834.05 00:27:31.896 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:31.896 Verification LBA range: start 0x0 length 0x400 00:27:31.896 Nvme10n1 : 0.89 214.61 13.41 0.00 0.00 240610.16 21554.06 281173.71 00:27:31.896 
=================================================================================================================== 00:27:31.896 Total : 2320.03 145.00 0.00 0.00 247486.50 14854.83 285834.05 00:27:32.155 08:14:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:27:33.089 08:14:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 2041244 00:27:33.089 08:14:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:27:33.089 08:14:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:27:33.089 08:14:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:27:33.089 08:14:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:33.089 08:14:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:27:33.089 08:14:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:33.089 08:14:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:27:33.089 08:14:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:33.089 08:14:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:27:33.089 08:14:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:33.090 08:14:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:33.090 rmmod nvme_tcp 00:27:33.090 rmmod nvme_fabrics 00:27:33.090 rmmod nvme_keyring 00:27:33.090 08:14:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:33.090 08:14:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:27:33.090 08:14:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:27:33.090 08:14:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 2041244 ']' 00:27:33.090 08:14:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 2041244 00:27:33.090 08:14:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 2041244 ']' 00:27:33.090 08:14:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 2041244 00:27:33.090 08:14:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:27:33.090 08:14:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:33.090 08:14:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2041244 00:27:33.090 08:14:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:27:33.090 08:14:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:27:33.090 08:14:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2041244' 00:27:33.090 killing process with pid 2041244 00:27:33.090 08:14:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 2041244 00:27:33.090 08:14:24 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 2041244 00:27:33.658 08:14:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:33.658 08:14:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:33.658 08:14:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:33.658 08:14:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:33.658 08:14:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:33.658 08:14:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:33.658 08:14:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:33.658 08:14:25 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:35.564 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:35.564 00:27:35.564 real 0m7.693s 00:27:35.564 user 0m23.084s 00:27:35.564 sys 0m1.530s 00:27:35.564 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:35.564 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:35.564 ************************************ 00:27:35.564 END TEST nvmf_shutdown_tc2 00:27:35.564 ************************************ 00:27:35.564 08:14:27 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:27:35.564 08:14:27 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:27:35.564 08:14:27 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:35.564 08:14:27 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:35.564 08:14:27 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:35.822 ************************************ 00:27:35.822 START TEST nvmf_shutdown_tc3 00:27:35.822 ************************************ 00:27:35.822 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc3 00:27:35.822 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:27:35.822 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:27:35.822 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:35.822 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:35.822 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:35.822 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:35.822 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:35.822 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:35.822 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:35.822 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:35.822 08:14:27 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:35.822 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:35.822 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:27:35.822 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:35.822 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:35.822 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:27:35.822 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:35.822 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:35.822 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:35.822 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:35.822 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:35.822 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:27:35.822 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:35.822 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:27:35.822 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:27:35.822 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:27:35.822 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:27:35.822 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:27:35.822 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:27:35.822 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:35.822 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:35.822 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:35.822 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:35.822 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:35.822 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:35.822 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:35.822 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:35.822 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:35.822 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:35.822 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:35.822 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:35.822 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:35.822 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:35.822 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:35.822 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:35.822 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:35.822 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:35.822 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:35.822 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:35.822 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:35.822 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:35.822 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:35.822 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:35.822 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:35.822 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:35.823 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:35.823 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:35.823 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:35.823 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:35.823 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:35.823 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:35.823 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:35.823 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:35.823 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:35.823 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:35.823 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:35.823 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:35.823 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:35.823 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:35.823 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:35.823 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:35.823 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:35.823 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:35.823 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:35.823 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:35.823 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:35.823 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:35.823 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:35.823 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:35.823 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:35.823 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:35.823 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:35.823 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:35.823 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:35.823 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:35.823 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:35.823 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:27:35.823 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:35.823 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:35.823 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:35.823 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:35.823 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:35.823 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:35.823 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:35.823 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:35.823 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:35.823 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:35.823 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:35.823 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:35.823 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:35.823 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:35.823 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:35.823 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:35.823 08:14:27 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:35.823 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:35.823 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:35.823 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:35.823 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:35.823 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:35.823 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:35.823 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:35.823 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.128 ms 00:27:35.823 00:27:35.823 --- 10.0.0.2 ping statistics --- 00:27:35.823 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:35.823 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:27:35.823 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:35.823 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:35.823 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:27:35.823 00:27:35.823 --- 10.0.0.1 ping statistics --- 00:27:35.823 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:35.823 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:27:35.823 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:35.823 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:27:35.823 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:35.823 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:35.823 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:35.823 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:35.823 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:35.823 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:35.823 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:35.823 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:27:35.823 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:35.823 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:35.823 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:35.823 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=2042325 00:27:35.823 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 
0xFFFF -m 0x1E 00:27:35.823 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 2042325 00:27:35.823 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 2042325 ']' 00:27:35.823 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:35.823 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:35.823 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:35.823 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:35.823 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:35.823 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:35.823 [2024-07-13 08:14:27.546283] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:27:35.823 [2024-07-13 08:14:27.546373] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:36.082 EAL: No free 2048 kB hugepages reported on node 1 00:27:36.082 [2024-07-13 08:14:27.615564] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:36.082 [2024-07-13 08:14:27.704908] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:36.082 [2024-07-13 08:14:27.704984] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:36.082 [2024-07-13 08:14:27.705012] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:36.082 [2024-07-13 08:14:27.705030] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:36.082 [2024-07-13 08:14:27.705041] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
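The nvmf_tgt above is launched inside the cvl_0_0_ns_spdk namespace with '-e 0xFFFF' (all tracepoint groups enabled, which is what the app_setup_trace notices report) and '-m 0x1E', the reactor core mask. 0x1E is binary 11110, so the four "Reactor started" messages that follow land on cores 1-4. A quick, illustrative way to decode such a mask in the shell (not part of the test scripts):

    mask=0x1E
    for core in $(seq 0 7); do
        # one SPDK reactor is started per set bit in the mask
        (( (mask >> core) & 1 )) && echo "core $core selected"
    done
    # prints: core 1, core 2, core 3, core 4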
00:27:36.082 [2024-07-13 08:14:27.705092] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:36.082 [2024-07-13 08:14:27.705153] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:36.082 [2024-07-13 08:14:27.705221] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:27:36.082 [2024-07-13 08:14:27.705224] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:36.341 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:36.341 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:27:36.341 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:36.341 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:36.341 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:36.341 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:36.341 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:36.341 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.341 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:36.341 [2024-07-13 08:14:27.867968] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:36.341 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.341 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:27:36.341 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:27:36.341 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:36.341 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:36.341 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:36.341 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:36.341 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:36.341 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:36.341 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:36.341 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:36.341 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:36.341 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:36.341 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:36.341 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:36.341 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:36.341 08:14:27 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:36.341 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:36.341 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:36.341 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:36.341 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:36.341 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:36.341 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:36.341 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:36.341 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:36.341 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:27:36.341 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:27:36.341 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.341 08:14:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:36.341 Malloc1 00:27:36.341 [2024-07-13 08:14:27.957357] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:36.341 Malloc2 00:27:36.341 Malloc3 00:27:36.599 Malloc4 00:27:36.599 Malloc5 00:27:36.599 Malloc6 00:27:36.599 Malloc7 00:27:36.599 Malloc8 00:27:36.599 Malloc9 00:27:36.858 Malloc10 00:27:36.858 08:14:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:36.858 08:14:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:27:36.858 08:14:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:36.858 08:14:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:36.858 08:14:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=2042506 00:27:36.858 08:14:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 2042506 /var/tmp/bdevperf.sock 00:27:36.858 08:14:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 2042506 ']' 00:27:36.858 08:14:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:36.858 08:14:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:27:36.858 08:14:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:36.858 08:14:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:36.858 08:14:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:27:36.858 08:14:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
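The bdevperf run above takes its bdev configuration from '--json /dev/fd/63', the visible half of a process substitution: gen_nvmf_target_json's output is handed over without a temporary file. A minimal sketch of the launch, with the workspace path abbreviated and the flags exactly as traced (-q 64 queue depth, -o 65536 for 64 KiB I/Os, -w verify for read-back verification, -t 10 seconds):

    ./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
        --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
        -q 64 -o 65536 -w verify -t 10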
00:27:36.858 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:36.858 08:14:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:27:36.858 08:14:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:36.858 08:14:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:36.858 08:14:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:36.858 08:14:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:36.858 { 00:27:36.858 "params": { 00:27:36.858 "name": "Nvme$subsystem", 00:27:36.858 "trtype": "$TEST_TRANSPORT", 00:27:36.858 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:36.858 "adrfam": "ipv4", 00:27:36.858 "trsvcid": "$NVMF_PORT", 00:27:36.858 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:36.858 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:36.858 "hdgst": ${hdgst:-false}, 00:27:36.858 "ddgst": ${ddgst:-false} 00:27:36.858 }, 00:27:36.858 "method": "bdev_nvme_attach_controller" 00:27:36.858 } 00:27:36.858 EOF 00:27:36.858 )") 00:27:36.858 08:14:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:36.858 08:14:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:36.858 08:14:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:36.858 { 00:27:36.858 "params": { 00:27:36.858 "name": "Nvme$subsystem", 00:27:36.858 "trtype": "$TEST_TRANSPORT", 00:27:36.858 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:36.858 "adrfam": "ipv4", 00:27:36.858 "trsvcid": "$NVMF_PORT", 00:27:36.858 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:36.858 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:36.858 "hdgst": ${hdgst:-false}, 00:27:36.858 "ddgst": ${ddgst:-false} 00:27:36.858 }, 00:27:36.858 "method": "bdev_nvme_attach_controller" 00:27:36.858 } 00:27:36.858 EOF 00:27:36.858 )") 00:27:36.858 08:14:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:36.858 08:14:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:36.858 08:14:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:36.858 { 00:27:36.858 "params": { 00:27:36.858 "name": "Nvme$subsystem", 00:27:36.858 "trtype": "$TEST_TRANSPORT", 00:27:36.858 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:36.858 "adrfam": "ipv4", 00:27:36.858 "trsvcid": "$NVMF_PORT", 00:27:36.858 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:36.858 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:36.858 "hdgst": ${hdgst:-false}, 00:27:36.858 "ddgst": ${ddgst:-false} 00:27:36.858 }, 00:27:36.858 "method": "bdev_nvme_attach_controller" 00:27:36.858 } 00:27:36.858 EOF 00:27:36.858 )") 00:27:36.858 08:14:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:36.858 08:14:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:36.858 08:14:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:36.858 { 00:27:36.858 "params": { 00:27:36.858 "name": "Nvme$subsystem", 00:27:36.858 "trtype": "$TEST_TRANSPORT", 00:27:36.858 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:36.858 "adrfam": "ipv4", 00:27:36.858 "trsvcid": "$NVMF_PORT", 
00:27:36.858 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:36.858 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:36.858 "hdgst": ${hdgst:-false}, 00:27:36.858 "ddgst": ${ddgst:-false} 00:27:36.858 }, 00:27:36.858 "method": "bdev_nvme_attach_controller" 00:27:36.858 } 00:27:36.858 EOF 00:27:36.858 )") 00:27:36.858 08:14:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:36.858 08:14:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:36.858 08:14:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:36.858 { 00:27:36.858 "params": { 00:27:36.858 "name": "Nvme$subsystem", 00:27:36.859 "trtype": "$TEST_TRANSPORT", 00:27:36.859 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:36.859 "adrfam": "ipv4", 00:27:36.859 "trsvcid": "$NVMF_PORT", 00:27:36.859 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:36.859 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:36.859 "hdgst": ${hdgst:-false}, 00:27:36.859 "ddgst": ${ddgst:-false} 00:27:36.859 }, 00:27:36.859 "method": "bdev_nvme_attach_controller" 00:27:36.859 } 00:27:36.859 EOF 00:27:36.859 )") 00:27:36.859 08:14:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:36.859 08:14:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:36.859 08:14:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:36.859 { 00:27:36.859 "params": { 00:27:36.859 "name": "Nvme$subsystem", 00:27:36.859 "trtype": "$TEST_TRANSPORT", 00:27:36.859 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:36.859 "adrfam": "ipv4", 00:27:36.859 "trsvcid": "$NVMF_PORT", 00:27:36.859 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:36.859 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:36.859 "hdgst": ${hdgst:-false}, 00:27:36.859 "ddgst": ${ddgst:-false} 00:27:36.859 }, 00:27:36.859 "method": "bdev_nvme_attach_controller" 00:27:36.859 } 00:27:36.859 EOF 00:27:36.859 )") 00:27:36.859 08:14:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:36.859 08:14:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:36.859 08:14:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:36.859 { 00:27:36.859 "params": { 00:27:36.859 "name": "Nvme$subsystem", 00:27:36.859 "trtype": "$TEST_TRANSPORT", 00:27:36.859 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:36.859 "adrfam": "ipv4", 00:27:36.859 "trsvcid": "$NVMF_PORT", 00:27:36.859 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:36.859 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:36.859 "hdgst": ${hdgst:-false}, 00:27:36.859 "ddgst": ${ddgst:-false} 00:27:36.859 }, 00:27:36.859 "method": "bdev_nvme_attach_controller" 00:27:36.859 } 00:27:36.859 EOF 00:27:36.859 )") 00:27:36.859 08:14:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:36.859 08:14:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:36.859 08:14:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:36.859 { 00:27:36.859 "params": { 00:27:36.859 "name": "Nvme$subsystem", 00:27:36.859 "trtype": "$TEST_TRANSPORT", 00:27:36.859 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:36.859 "adrfam": "ipv4", 00:27:36.859 "trsvcid": "$NVMF_PORT", 00:27:36.859 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:27:36.859 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:36.859 "hdgst": ${hdgst:-false}, 00:27:36.859 "ddgst": ${ddgst:-false} 00:27:36.859 }, 00:27:36.859 "method": "bdev_nvme_attach_controller" 00:27:36.859 } 00:27:36.859 EOF 00:27:36.859 )") 00:27:36.859 08:14:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:36.859 08:14:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:36.859 08:14:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:36.859 { 00:27:36.859 "params": { 00:27:36.859 "name": "Nvme$subsystem", 00:27:36.859 "trtype": "$TEST_TRANSPORT", 00:27:36.859 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:36.859 "adrfam": "ipv4", 00:27:36.859 "trsvcid": "$NVMF_PORT", 00:27:36.859 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:36.859 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:36.859 "hdgst": ${hdgst:-false}, 00:27:36.859 "ddgst": ${ddgst:-false} 00:27:36.859 }, 00:27:36.859 "method": "bdev_nvme_attach_controller" 00:27:36.859 } 00:27:36.859 EOF 00:27:36.859 )") 00:27:36.859 08:14:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:36.859 08:14:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:36.859 08:14:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:36.859 { 00:27:36.859 "params": { 00:27:36.859 "name": "Nvme$subsystem", 00:27:36.859 "trtype": "$TEST_TRANSPORT", 00:27:36.859 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:36.859 "adrfam": "ipv4", 00:27:36.859 "trsvcid": "$NVMF_PORT", 00:27:36.859 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:36.859 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:36.859 "hdgst": ${hdgst:-false}, 00:27:36.859 "ddgst": ${ddgst:-false} 00:27:36.859 }, 00:27:36.859 "method": "bdev_nvme_attach_controller" 00:27:36.859 } 00:27:36.859 EOF 00:27:36.859 )") 00:27:36.859 08:14:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:27:36.859 08:14:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 
00:27:36.859 08:14:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:27:36.859 08:14:28 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:36.859 "params": { 00:27:36.859 "name": "Nvme1", 00:27:36.859 "trtype": "tcp", 00:27:36.859 "traddr": "10.0.0.2", 00:27:36.859 "adrfam": "ipv4", 00:27:36.859 "trsvcid": "4420", 00:27:36.859 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:36.859 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:36.859 "hdgst": false, 00:27:36.859 "ddgst": false 00:27:36.859 }, 00:27:36.859 "method": "bdev_nvme_attach_controller" 00:27:36.859 },{ 00:27:36.859 "params": { 00:27:36.859 "name": "Nvme2", 00:27:36.859 "trtype": "tcp", 00:27:36.859 "traddr": "10.0.0.2", 00:27:36.859 "adrfam": "ipv4", 00:27:36.859 "trsvcid": "4420", 00:27:36.859 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:36.859 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:36.859 "hdgst": false, 00:27:36.859 "ddgst": false 00:27:36.859 }, 00:27:36.859 "method": "bdev_nvme_attach_controller" 00:27:36.859 },{ 00:27:36.859 "params": { 00:27:36.859 "name": "Nvme3", 00:27:36.859 "trtype": "tcp", 00:27:36.859 "traddr": "10.0.0.2", 00:27:36.859 "adrfam": "ipv4", 00:27:36.859 "trsvcid": "4420", 00:27:36.859 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:36.859 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:36.859 "hdgst": false, 00:27:36.859 "ddgst": false 00:27:36.859 }, 00:27:36.859 "method": "bdev_nvme_attach_controller" 00:27:36.859 },{ 00:27:36.859 "params": { 00:27:36.859 "name": "Nvme4", 00:27:36.859 "trtype": "tcp", 00:27:36.859 "traddr": "10.0.0.2", 00:27:36.859 "adrfam": "ipv4", 00:27:36.859 "trsvcid": "4420", 00:27:36.859 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:36.859 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:36.859 "hdgst": false, 00:27:36.859 "ddgst": false 00:27:36.859 }, 00:27:36.859 "method": "bdev_nvme_attach_controller" 00:27:36.859 },{ 00:27:36.859 "params": { 00:27:36.859 "name": "Nvme5", 00:27:36.859 "trtype": "tcp", 00:27:36.859 "traddr": "10.0.0.2", 00:27:36.859 "adrfam": "ipv4", 00:27:36.859 "trsvcid": "4420", 00:27:36.859 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:36.859 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:36.859 "hdgst": false, 00:27:36.859 "ddgst": false 00:27:36.859 }, 00:27:36.859 "method": "bdev_nvme_attach_controller" 00:27:36.859 },{ 00:27:36.859 "params": { 00:27:36.859 "name": "Nvme6", 00:27:36.859 "trtype": "tcp", 00:27:36.859 "traddr": "10.0.0.2", 00:27:36.859 "adrfam": "ipv4", 00:27:36.859 "trsvcid": "4420", 00:27:36.859 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:36.859 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:36.859 "hdgst": false, 00:27:36.859 "ddgst": false 00:27:36.859 }, 00:27:36.859 "method": "bdev_nvme_attach_controller" 00:27:36.859 },{ 00:27:36.859 "params": { 00:27:36.859 "name": "Nvme7", 00:27:36.859 "trtype": "tcp", 00:27:36.859 "traddr": "10.0.0.2", 00:27:36.859 "adrfam": "ipv4", 00:27:36.859 "trsvcid": "4420", 00:27:36.859 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:36.859 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:36.859 "hdgst": false, 00:27:36.859 "ddgst": false 00:27:36.859 }, 00:27:36.859 "method": "bdev_nvme_attach_controller" 00:27:36.859 },{ 00:27:36.859 "params": { 00:27:36.859 "name": "Nvme8", 00:27:36.859 "trtype": "tcp", 00:27:36.859 "traddr": "10.0.0.2", 00:27:36.859 "adrfam": "ipv4", 00:27:36.859 "trsvcid": "4420", 00:27:36.859 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:36.859 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:36.859 "hdgst": false, 
00:27:36.859 "ddgst": false 00:27:36.859 }, 00:27:36.859 "method": "bdev_nvme_attach_controller" 00:27:36.859 },{ 00:27:36.859 "params": { 00:27:36.859 "name": "Nvme9", 00:27:36.859 "trtype": "tcp", 00:27:36.859 "traddr": "10.0.0.2", 00:27:36.859 "adrfam": "ipv4", 00:27:36.859 "trsvcid": "4420", 00:27:36.859 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:36.859 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:36.859 "hdgst": false, 00:27:36.859 "ddgst": false 00:27:36.859 }, 00:27:36.859 "method": "bdev_nvme_attach_controller" 00:27:36.859 },{ 00:27:36.859 "params": { 00:27:36.859 "name": "Nvme10", 00:27:36.859 "trtype": "tcp", 00:27:36.859 "traddr": "10.0.0.2", 00:27:36.859 "adrfam": "ipv4", 00:27:36.859 "trsvcid": "4420", 00:27:36.859 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:36.859 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:36.859 "hdgst": false, 00:27:36.859 "ddgst": false 00:27:36.859 }, 00:27:36.859 "method": "bdev_nvme_attach_controller" 00:27:36.859 }' 00:27:36.859 [2024-07-13 08:14:28.451100] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:27:36.860 [2024-07-13 08:14:28.451202] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2042506 ] 00:27:36.860 EAL: No free 2048 kB hugepages reported on node 1 00:27:36.860 [2024-07-13 08:14:28.514731] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:37.118 [2024-07-13 08:14:28.602160] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:39.022 Running I/O for 10 seconds... 00:27:39.022 08:14:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:39.022 08:14:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:27:39.022 08:14:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:39.022 08:14:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.022 08:14:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:39.022 08:14:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.022 08:14:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:39.022 08:14:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:27:39.022 08:14:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:27:39.022 08:14:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:27:39.022 08:14:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:27:39.022 08:14:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:27:39.022 08:14:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:27:39.022 08:14:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:39.022 08:14:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme1n1 00:27:39.022 08:14:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:39.022 08:14:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.022 08:14:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:39.022 08:14:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.022 08:14:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:27:39.022 08:14:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:27:39.022 08:14:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:27:39.280 08:14:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:27:39.280 08:14:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:39.280 08:14:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:39.280 08:14:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:39.280 08:14:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.280 08:14:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:39.280 08:14:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.280 08:14:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=67 00:27:39.280 08:14:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:27:39.280 08:14:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:27:39.550 08:14:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:27:39.550 08:14:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:39.550 08:14:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:39.550 08:14:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:39.550 08:14:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.550 08:14:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:39.550 08:14:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.550 08:14:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131 00:27:39.550 08:14:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:27:39.550 08:14:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:27:39.550 08:14:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:27:39.550 08:14:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:27:39.550 08:14:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 2042325 00:27:39.550 08:14:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@948 -- # '[' -z 2042325 ']' 
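The sampling traced above is target/shutdown.sh's waitforio loop: up to ten iterations, 0.25 s apart, reading Nvme1n1's I/O counter over the bdevperf RPC socket until at least 100 reads are seen; this run sampled 3, then 67, then 131 before setting ret=0 and breaking. A condensed standalone version (rpc_cmd is the harness wrapper around scripts/rpc.py; names as traced):

    waitforio() {
        local sock=$1 bdev=$2
        local i=10 count ret=1
        while (( i != 0 )); do
            count=$(rpc_cmd -s "$sock" bdev_get_iostat -b "$bdev" |
                    jq -r '.bdevs[0].num_read_ops')
            if [ "$count" -ge 100 ]; then    # enough I/O observed; bdevperf is live
                ret=0
                break
            fi
            sleep 0.25
            (( i-- ))
        done
        return $ret
    }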
00:27:39.550 08:14:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # kill -0 2042325
00:27:39.550 08:14:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # uname
00:27:39.550 08:14:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:27:39.550 08:14:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2042325
00:27:39.550 08:14:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:27:39.550 08:14:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:27:39.550 08:14:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2042325'
killing process with pid 2042325
00:27:39.550 08:14:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@967 -- # kill 2042325
00:27:39.550 08:14:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # wait 2042325
00:27:39.550 [2024-07-13 08:14:31.220469] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210eaf0 is same with the state(5) to be set
00:27:39.550 [2024-07-13 08:14:31.222692] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2020970 is same with the state(5) to be set
00:27:39.551 [... identical recv-state errors for tqpair=0x210eaf0 and tqpair=0x2020970 repeat many times with successive timestamps while the killed target drains its qpairs; duplicate lines elided ...]
08:14:31.223201] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2020970 is same with the state(5) to be set 00:27:39.551 [2024-07-13 08:14:31.223213] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2020970 is same with the state(5) to be set 00:27:39.551 [2024-07-13 08:14:31.223225] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2020970 is same with the state(5) to be set 00:27:39.551 [2024-07-13 08:14:31.223236] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2020970 is same with the state(5) to be set 00:27:39.551 [2024-07-13 08:14:31.223248] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2020970 is same with the state(5) to be set 00:27:39.551 [2024-07-13 08:14:31.223259] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2020970 is same with the state(5) to be set 00:27:39.551 [2024-07-13 08:14:31.223271] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2020970 is same with the state(5) to be set 00:27:39.551 [2024-07-13 08:14:31.223282] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2020970 is same with the state(5) to be set 00:27:39.551 [2024-07-13 08:14:31.223294] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2020970 is same with the state(5) to be set 00:27:39.551 [2024-07-13 08:14:31.223306] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2020970 is same with the state(5) to be set 00:27:39.551 [2024-07-13 08:14:31.223317] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2020970 is same with the state(5) to be set 00:27:39.551 [2024-07-13 08:14:31.223329] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2020970 is same with the state(5) to be set 00:27:39.551 [2024-07-13 08:14:31.223341] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2020970 is same with the state(5) to be set 00:27:39.551 [2024-07-13 08:14:31.223353] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2020970 is same with the state(5) to be set 00:27:39.551 [2024-07-13 08:14:31.223364] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2020970 is same with the state(5) to be set 00:27:39.551 [2024-07-13 08:14:31.223380] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2020970 is same with the state(5) to be set 00:27:39.551 [2024-07-13 08:14:31.223392] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2020970 is same with the state(5) to be set 00:27:39.551 [2024-07-13 08:14:31.223404] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2020970 is same with the state(5) to be set 00:27:39.551 [2024-07-13 08:14:31.223416] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2020970 is same with the state(5) to be set 00:27:39.551 [2024-07-13 08:14:31.223427] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2020970 is same with the state(5) to be set 00:27:39.551 [2024-07-13 08:14:31.223439] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2020970 is same with the state(5) to be set 00:27:39.551 [2024-07-13 08:14:31.223452] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2020970 is same 
00:27:39.551 [2024-07-13 08:14:31.225253] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:27:39.551 [2024-07-13 08:14:31.225287] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210ef90 is same with the state(5) to be set (last message repeated 3 times through 08:14:31.225338)
00:27:39.551 [2024-07-13 08:14:31.225299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.551 [2024-07-13 08:14:31.225319] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:27:39.551 [2024-07-13 08:14:31.225333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.551 [2024-07-13 08:14:31.225348] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:27:39.551 [2024-07-13 08:14:31.225362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.551 [2024-07-13 08:14:31.225375] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:27:39.551 [2024-07-13 08:14:31.225388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.551 [2024-07-13 08:14:31.225401] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21fd350 is same with the state(5) to be set
00:27:39.551 [2024-07-13 08:14:31.225477] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:27:39.551 [2024-07-13 08:14:31.225498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.551 [2024-07-13 08:14:31.225513] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:27:39.551 [2024-07-13 08:14:31.225531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.551 [2024-07-13 08:14:31.225545] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:27:39.551 [2024-07-13 08:14:31.225558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.551 [2024-07-13 08:14:31.225573] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:27:39.551 [2024-07-13 08:14:31.225586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.551 [2024-07-13 08:14:31.225608] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c26ee0 is same with the state(5) to be set
00:27:39.551 [2024-07-13 08:14:31.225952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.551 [2024-07-13 08:14:31.225978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.551 [2024-07-13 08:14:31.226004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.551 [2024-07-13 08:14:31.226020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.551 [2024-07-13 08:14:31.226036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.551 [2024-07-13 08:14:31.226050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.551 [2024-07-13 08:14:31.226065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.551 [2024-07-13 08:14:31.226078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.551 [2024-07-13 08:14:31.226093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.551 [2024-07-13 08:14:31.226107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.551 [2024-07-13 08:14:31.226121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.551 [2024-07-13 08:14:31.226134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.551 [2024-07-13 08:14:31.226149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.551 [2024-07-13 08:14:31.226162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.551 [2024-07-13 08:14:31.226180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.551 [2024-07-13 08:14:31.226193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.551 [2024-07-13 08:14:31.226208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.551 [2024-07-13 08:14:31.226221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.551 [2024-07-13 08:14:31.226237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.551 [2024-07-13 08:14:31.226255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.551 [2024-07-13 08:14:31.226274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.551 [2024-07-13 08:14:31.226287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.551 [2024-07-13 08:14:31.226302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.551 [2024-07-13 08:14:31.226315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.551 [2024-07-13 08:14:31.226329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.551 [2024-07-13 08:14:31.226342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.551 [2024-07-13 08:14:31.226357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.551 [2024-07-13 08:14:31.226370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.551 [2024-07-13 08:14:31.226384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.551 [2024-07-13 08:14:31.226397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.551 [2024-07-13 08:14:31.226412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.551 [2024-07-13 08:14:31.226425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.551 [2024-07-13 08:14:31.226439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.551 [2024-07-13 08:14:31.226452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.551 [2024-07-13 08:14:31.226468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.551 [2024-07-13 08:14:31.226481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.551 [2024-07-13 08:14:31.226496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.551 [2024-07-13 08:14:31.226508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.551 [2024-07-13 08:14:31.226523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.551 [2024-07-13 08:14:31.226536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.551 [2024-07-13 08:14:31.226551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.551 [2024-07-13 08:14:31.226565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.551 [2024-07-13 08:14:31.226580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.551 [2024-07-13 08:14:31.226592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.551 [2024-07-13 08:14:31.226611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.551 [2024-07-13 08:14:31.226625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.551 [2024-07-13 08:14:31.226640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.551 [2024-07-13 08:14:31.226653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.551 [2024-07-13 08:14:31.226668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.551 [2024-07-13 08:14:31.226672] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210f430 is same with the state(5) to be set (last message repeated ~39 times through 08:14:31.227231)
00:27:39.551 [2024-07-13 08:14:31.226681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.551 [2024-07-13 08:14:31.226696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.551 [2024-07-13 08:14:31.226710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.552 [2024-07-13 08:14:31.226726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.552 [2024-07-13 08:14:31.226740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.552 [2024-07-13 08:14:31.226755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.552 [2024-07-13 08:14:31.226769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.552 [2024-07-13 08:14:31.226785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.552 [2024-07-13 08:14:31.226801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.552 [2024-07-13 08:14:31.226817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.552 [2024-07-13 08:14:31.226832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.552 [2024-07-13 08:14:31.226854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.552 [2024-07-13 08:14:31.226875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.552 [2024-07-13 08:14:31.226893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.552 [2024-07-13 08:14:31.226907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.552 [2024-07-13 08:14:31.226929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.552 [2024-07-13 08:14:31.226943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.552 [2024-07-13 08:14:31.226959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.552 [2024-07-13 08:14:31.226973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.552 [2024-07-13 08:14:31.226988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.552 [2024-07-13 08:14:31.227003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.552 [2024-07-13 08:14:31.227020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.552 [2024-07-13 08:14:31.227034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.552 [2024-07-13 08:14:31.227051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.552 [2024-07-13 08:14:31.227065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.552 [2024-07-13 08:14:31.227080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.552 [2024-07-13 08:14:31.227094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.552 [2024-07-13 08:14:31.227109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.552 [2024-07-13 08:14:31.227124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.552 [2024-07-13 08:14:31.227141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.552 [2024-07-13 08:14:31.227155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.552 [2024-07-13 08:14:31.227182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.552 [2024-07-13 08:14:31.227196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.552 [2024-07-13 08:14:31.227211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.552 [2024-07-13 08:14:31.227225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.552 [2024-07-13 08:14:31.227240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.552 [2024-07-13 08:14:31.227254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.552 [2024-07-13 08:14:31.227269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.552 [2024-07-13 08:14:31.227282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.552 [2024-07-13 08:14:31.227297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.552 [2024-07-13 08:14:31.227309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.552 [2024-07-13 08:14:31.227324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.552 [2024-07-13 08:14:31.227345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.552 [2024-07-13 08:14:31.227371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.552 [2024-07-13 08:14:31.227396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.552 [2024-07-13 08:14:31.227418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.552 [2024-07-13 08:14:31.227432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.552 [2024-07-13 08:14:31.227447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.552 [2024-07-13 08:14:31.227460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.552 [2024-07-13 08:14:31.227475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.552 [2024-07-13 08:14:31.227489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.552 [2024-07-13 08:14:31.227504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.552 [2024-07-13 08:14:31.227518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.552 [2024-07-13 08:14:31.227533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.552 [2024-07-13 08:14:31.227546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.552 [2024-07-13 08:14:31.227561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.552 [2024-07-13 08:14:31.227574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.552 [2024-07-13 08:14:31.227589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.552 [2024-07-13 08:14:31.227607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.552 [2024-07-13 08:14:31.227623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.552 [2024-07-13 08:14:31.227636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.552 [2024-07-13 08:14:31.227652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.552 [2024-07-13 08:14:31.227665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.552 [2024-07-13 08:14:31.227681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.552 [2024-07-13 08:14:31.227694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.552 [2024-07-13 08:14:31.227709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.552 [2024-07-13 08:14:31.227724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.552 [2024-07-13 08:14:31.227740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.552 [2024-07-13 08:14:31.227753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.552 [2024-07-13 08:14:31.227769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.552 [2024-07-13 08:14:31.227782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.552 [2024-07-13 08:14:31.227798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.552 [2024-07-13 08:14:31.227811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.552 [2024-07-13 08:14:31.227826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.552 [2024-07-13 08:14:31.227840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.552 [2024-07-13 08:14:31.227855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.552 [2024-07-13 08:14:31.227875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.552 [2024-07-13 08:14:31.227892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.552 [2024-07-13 08:14:31.227906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.552 [2024-07-13 08:14:31.227956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:27:39.552 [2024-07-13 08:14:31.228035] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1c2b2e0 was disconnected and freed. reset controller.
00:27:39.552 [2024-07-13 08:14:31.228214] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210f8f0 is same with the state(5) to be set (last message repeated 3 times through 08:14:31.228287)
00:27:39.552 [2024-07-13 08:14:31.228559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.552 [2024-07-13 08:14:31.228584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.552 [2024-07-13 08:14:31.228605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.552 [2024-07-13 08:14:31.228620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.552 [2024-07-13 08:14:31.228636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.552 [2024-07-13 08:14:31.228650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.552 [2024-07-13 08:14:31.228665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.552 [2024-07-13 08:14:31.228678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.552 [2024-07-13 08:14:31.228693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.552 [2024-07-13 08:14:31.228707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.552 [2024-07-13 08:14:31.228722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.552 [2024-07-13 08:14:31.228736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.552 [2024-07-13 08:14:31.228746] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210fd90 is same with the state(5) to be set (last message repeated ~57 times through 08:14:31.229530)
00:27:39.552 [2024-07-13 08:14:31.228751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.552 [2024-07-13 08:14:31.228765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.552 [2024-07-13 08:14:31.228780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.553 [2024-07-13 08:14:31.228794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.553 [2024-07-13 08:14:31.228809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.553 [2024-07-13 08:14:31.228823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.553 [2024-07-13 08:14:31.228839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.553 [2024-07-13 08:14:31.228853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.553 [2024-07-13 08:14:31.228875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.553 [2024-07-13 08:14:31.228891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.553 [2024-07-13 08:14:31.228909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.553 [2024-07-13 08:14:31.228923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.553 [2024-07-13 08:14:31.228940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.553 [2024-07-13 08:14:31.228953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.553 [2024-07-13 08:14:31.228969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.553 [2024-07-13 08:14:31.228982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.553 [2024-07-13 08:14:31.228997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.553 [2024-07-13 08:14:31.229012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.553 [2024-07-13 08:14:31.229030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.553 [2024-07-13 08:14:31.229048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.553 [2024-07-13 08:14:31.229066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.553 [2024-07-13 08:14:31.229080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.553 [2024-07-13 08:14:31.229097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.553 [2024-07-13 08:14:31.229111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.553 [2024-07-13 08:14:31.229126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.553 [2024-07-13 08:14:31.229141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.553 [2024-07-13 08:14:31.229157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.553 [2024-07-13 08:14:31.229170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.553 [2024-07-13 08:14:31.229190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.553 [2024-07-13 08:14:31.229204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.553 [2024-07-13 08:14:31.229220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.553 [2024-07-13 08:14:31.229239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.553 [2024-07-13 08:14:31.229256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.553 [2024-07-13 08:14:31.229271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.553 [2024-07-13 08:14:31.229287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.553 [2024-07-13 08:14:31.229300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.553 [2024-07-13 08:14:31.229316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.553 [2024-07-13 08:14:31.229330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.553 [2024-07-13 08:14:31.229347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.553 [2024-07-13 08:14:31.229360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.553 [2024-07-13 08:14:31.229377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.553 [2024-07-13 08:14:31.229392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.553 [2024-07-13 08:14:31.229408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.553 [2024-07-13 08:14:31.229426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.553 [2024-07-13 08:14:31.229443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.553 [2024-07-13 08:14:31.229457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.553 [2024-07-13 08:14:31.229473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.553 [2024-07-13 08:14:31.229487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.553 [2024-07-13 08:14:31.229502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:39.553 [2024-07-13 08:14:31.229518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:39.553 [2024-07-13 08:14:31.229534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK
TRANSPORT 0x0 00:27:39.553 [2024-07-13 08:14:31.229543] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210fd90 is same with the state(5) to be set 00:27:39.553 [2024-07-13 08:14:31.229548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.553 [2024-07-13 08:14:31.229556] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210fd90 is same with the state(5) to be set 00:27:39.553 [2024-07-13 08:14:31.229564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.553 [2024-07-13 08:14:31.229568] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210fd90 is same with the state(5) to be set 00:27:39.553 [2024-07-13 08:14:31.229578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.553 [2024-07-13 08:14:31.229581] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210fd90 is same with the state(5) to be set 00:27:39.553 [2024-07-13 08:14:31.229593] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210fd90 is same with [2024-07-13 08:14:31.229593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:12the state(5) to be set 00:27:39.553 8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.553 [2024-07-13 08:14:31.229611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.553 [2024-07-13 08:14:31.229625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.553 [2024-07-13 08:14:31.229638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.553 [2024-07-13 08:14:31.229653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.553 [2024-07-13 08:14:31.229666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.553 [2024-07-13 08:14:31.229680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.553 [2024-07-13 08:14:31.229694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.553 [2024-07-13 08:14:31.229709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.553 [2024-07-13 08:14:31.229722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.553 [2024-07-13 08:14:31.229744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.553 [2024-07-13 08:14:31.229758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.554 [2024-07-13 08:14:31.229774] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.554 [2024-07-13 08:14:31.229787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.554 [2024-07-13 08:14:31.229802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.554 [2024-07-13 08:14:31.229815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.554 [2024-07-13 08:14:31.229830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.554 [2024-07-13 08:14:31.229843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.554 [2024-07-13 08:14:31.229857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.554 [2024-07-13 08:14:31.229878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.554 [2024-07-13 08:14:31.229894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.554 [2024-07-13 08:14:31.229908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.554 [2024-07-13 08:14:31.229922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.554 [2024-07-13 08:14:31.229937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.554 [2024-07-13 08:14:31.229952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.554 [2024-07-13 08:14:31.229969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.554 [2024-07-13 08:14:31.229984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.554 [2024-07-13 08:14:31.229997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.554 [2024-07-13 08:14:31.230011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.554 [2024-07-13 08:14:31.230024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.554 [2024-07-13 08:14:31.230039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.554 [2024-07-13 08:14:31.230051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.554 [2024-07-13 08:14:31.230066] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.554 [2024-07-13 08:14:31.230079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.554 [2024-07-13 08:14:31.230093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.554 [2024-07-13 08:14:31.230106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.554 [2024-07-13 08:14:31.230120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.554 [2024-07-13 08:14:31.230133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.554 [2024-07-13 08:14:31.230147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.554 [2024-07-13 08:14:31.230160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.554 [2024-07-13 08:14:31.230174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.554 [2024-07-13 08:14:31.230187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.554 [2024-07-13 08:14:31.230202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.554 [2024-07-13 08:14:31.230215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.554 [2024-07-13 08:14:31.230229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.554 [2024-07-13 08:14:31.230242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.554 [2024-07-13 08:14:31.230257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.554 [2024-07-13 08:14:31.230270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.554 [2024-07-13 08:14:31.230284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.554 [2024-07-13 08:14:31.230297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.554 [2024-07-13 08:14:31.230315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.554 [2024-07-13 08:14:31.230328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.554 [2024-07-13 08:14:31.230342] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.554 [2024-07-13 08:14:31.230355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.554 [2024-07-13 08:14:31.230370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.554 [2024-07-13 08:14:31.230382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.554 [2024-07-13 08:14:31.230397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.554 [2024-07-13 08:14:31.230410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.554 [2024-07-13 08:14:31.230424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.554 [2024-07-13 08:14:31.230436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.554 [2024-07-13 08:14:31.230451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.554 [2024-07-13 08:14:31.230464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.554 [2024-07-13 08:14:31.230497] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:39.554 [2024-07-13 08:14:31.230571] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x21e2690 was disconnected and freed. reset controller. 
00:27:39.554 [2024-07-13 08:14:31.230766] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201f800 is same with the state(5) to be set 00:27:39.554 [2024-07-13 08:14:31.230791] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201f800 is same with the state(5) to be set 00:27:39.554 [2024-07-13 08:14:31.230818] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201f800 is same with the state(5) to be set 00:27:39.554 [2024-07-13 08:14:31.230832] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201f800 is same with the state(5) to be set 00:27:39.554 [2024-07-13 08:14:31.230845] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201f800 is same with the state(5) to be set 00:27:39.554 [2024-07-13 08:14:31.230857] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201f800 is same with the state(5) to be set 00:27:39.554 [2024-07-13 08:14:31.230878] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201f800 is same with the state(5) to be set 00:27:39.554 [2024-07-13 08:14:31.230891] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201f800 is same with the state(5) to be set 00:27:39.554 [2024-07-13 08:14:31.230904] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201f800 is same with the state(5) to be set 00:27:39.554 [2024-07-13 08:14:31.230917] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201f800 is same with the state(5) to be set 00:27:39.554 [2024-07-13 08:14:31.230930] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201f800 is same with the state(5) to be set 00:27:39.554 [2024-07-13 08:14:31.230942] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201f800 is same with the state(5) to be set 00:27:39.554 [2024-07-13 08:14:31.230959] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201f800 is same with the state(5) to be set 00:27:39.554 [2024-07-13 08:14:31.230972] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201f800 is same with the state(5) to be set 00:27:39.554 [2024-07-13 08:14:31.230984] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201f800 is same with the state(5) to be set 00:27:39.554 [2024-07-13 08:14:31.230996] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201f800 is same with the state(5) to be set 00:27:39.554 [2024-07-13 08:14:31.231008] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201f800 is same with the state(5) to be set 00:27:39.554 [2024-07-13 08:14:31.231020] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201f800 is same with the state(5) to be set 00:27:39.554 [2024-07-13 08:14:31.231032] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201f800 is same with the state(5) to be set 00:27:39.554 [2024-07-13 08:14:31.231045] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201f800 is same with the state(5) to be set 00:27:39.554 [2024-07-13 08:14:31.231057] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201f800 is same with the state(5) to be set 00:27:39.554 [2024-07-13 08:14:31.231069] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x201f800 is same with the state(5) to be set 00:27:39.554 [2024-07-13 08:14:31.231081] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201f800 is same with the state(5) to be set 00:27:39.554 [2024-07-13 08:14:31.231094] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201f800 is same with the state(5) to be set 00:27:39.554 [2024-07-13 08:14:31.231106] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201f800 is same with the state(5) to be set 00:27:39.554 [2024-07-13 08:14:31.231118] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201f800 is same with the state(5) to be set 00:27:39.554 [2024-07-13 08:14:31.231130] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201f800 is same with the state(5) to be set 00:27:39.554 [2024-07-13 08:14:31.231142] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201f800 is same with the state(5) to be set 00:27:39.554 [2024-07-13 08:14:31.231155] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201f800 is same with the state(5) to be set 00:27:39.554 [2024-07-13 08:14:31.231167] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201f800 is same with the state(5) to be set 00:27:39.554 [2024-07-13 08:14:31.231179] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201f800 is same with the state(5) to be set 00:27:39.554 [2024-07-13 08:14:31.231191] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201f800 is same with the state(5) to be set 00:27:39.554 [2024-07-13 08:14:31.231211] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201f800 is same with the state(5) to be set 00:27:39.554 [2024-07-13 08:14:31.231225] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201f800 is same with the state(5) to be set 00:27:39.554 [2024-07-13 08:14:31.231237] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201f800 is same with the state(5) to be set 00:27:39.554 [2024-07-13 08:14:31.231250] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201f800 is same with the state(5) to be set 00:27:39.554 [2024-07-13 08:14:31.231263] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201f800 is same with the state(5) to be set 00:27:39.554 [2024-07-13 08:14:31.231275] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201f800 is same with the state(5) to be set 00:27:39.554 [2024-07-13 08:14:31.231288] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201f800 is same with the state(5) to be set 00:27:39.554 [2024-07-13 08:14:31.231303] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201f800 is same with the state(5) to be set 00:27:39.554 [2024-07-13 08:14:31.231316] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201f800 is same with the state(5) to be set 00:27:39.554 [2024-07-13 08:14:31.231329] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201f800 is same with the state(5) to be set 00:27:39.554 [2024-07-13 08:14:31.231342] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201f800 is same with the state(5) to be set 00:27:39.554 [2024-07-13 08:14:31.231354] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201f800 is same with the state(5) to be set 00:27:39.554 [2024-07-13 08:14:31.231366] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201f800 is same with the state(5) to be set 00:27:39.554 [2024-07-13 08:14:31.231379] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201f800 is same with the state(5) to be set 00:27:39.554 [2024-07-13 08:14:31.231391] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201f800 is same with the state(5) to be set 00:27:39.554 [2024-07-13 08:14:31.231403] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201f800 is same with the state(5) to be set 00:27:39.554 [2024-07-13 08:14:31.231415] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201f800 is same with the state(5) to be set 00:27:39.554 [2024-07-13 08:14:31.231428] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201f800 is same with the state(5) to be set 00:27:39.554 [2024-07-13 08:14:31.231440] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201f800 is same with the state(5) to be set 00:27:39.554 [2024-07-13 08:14:31.231453] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201f800 is same with the state(5) to be set 00:27:39.554 [2024-07-13 08:14:31.231465] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201f800 is same with the state(5) to be set 00:27:39.554 [2024-07-13 08:14:31.231477] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201f800 is same with the state(5) to be set 00:27:39.554 [2024-07-13 08:14:31.231490] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201f800 is same with the state(5) to be set 00:27:39.554 [2024-07-13 08:14:31.231503] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201f800 is same with the state(5) to be set 00:27:39.554 [2024-07-13 08:14:31.231515] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201f800 is same with the state(5) to be set 00:27:39.554 [2024-07-13 08:14:31.231528] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201f800 is same with the state(5) to be set 00:27:39.554 [2024-07-13 08:14:31.231540] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201f800 is same with the state(5) to be set 00:27:39.554 [2024-07-13 08:14:31.231552] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201f800 is same with the state(5) to be set 00:27:39.554 [2024-07-13 08:14:31.231565] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201f800 is same with the state(5) to be set 00:27:39.554 [2024-07-13 08:14:31.231577] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201f800 is same with the state(5) to be set 00:27:39.554 [2024-07-13 08:14:31.231589] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201f800 is same with the state(5) to be set 00:27:39.554 [2024-07-13 08:14:31.232926] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201fca0 is same with the state(5) to be set 00:27:39.554 [2024-07-13 08:14:31.232951] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201fca0 is same with the 
state(5) to be set 00:27:39.554 [2024-07-13 08:14:31.232964] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201fca0 is same with the state(5) to be set 00:27:39.554 [2024-07-13 08:14:31.232982] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201fca0 is same with the state(5) to be set 00:27:39.554 [2024-07-13 08:14:31.232995] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201fca0 is same with the state(5) to be set 00:27:39.554 [2024-07-13 08:14:31.233007] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201fca0 is same with the state(5) to be set 00:27:39.554 [2024-07-13 08:14:31.233020] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201fca0 is same with the state(5) to be set 00:27:39.554 [2024-07-13 08:14:31.233032] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201fca0 is same with the state(5) to be set 00:27:39.554 [2024-07-13 08:14:31.233044] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201fca0 is same with the state(5) to be set 00:27:39.554 [2024-07-13 08:14:31.233056] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201fca0 is same with the state(5) to be set 00:27:39.554 [2024-07-13 08:14:31.233068] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201fca0 is same with the state(5) to be set 00:27:39.554 [2024-07-13 08:14:31.233080] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201fca0 is same with the state(5) to be set 00:27:39.554 [2024-07-13 08:14:31.233092] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201fca0 is same with the state(5) to be set 00:27:39.554 [2024-07-13 08:14:31.233105] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201fca0 is same with the state(5) to be set 00:27:39.554 [2024-07-13 08:14:31.233117] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201fca0 is same with the state(5) to be set 00:27:39.554 [2024-07-13 08:14:31.233129] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201fca0 is same with the state(5) to be set 00:27:39.554 [2024-07-13 08:14:31.233141] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201fca0 is same with the state(5) to be set 00:27:39.554 [2024-07-13 08:14:31.233154] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201fca0 is same with the state(5) to be set 00:27:39.554 [2024-07-13 08:14:31.233166] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201fca0 is same with the state(5) to be set 00:27:39.554 [2024-07-13 08:14:31.233187] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201fca0 is same with the state(5) to be set 00:27:39.554 [2024-07-13 08:14:31.233200] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201fca0 is same with the state(5) to be set 00:27:39.554 [2024-07-13 08:14:31.233212] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201fca0 is same with the state(5) to be set 00:27:39.554 [2024-07-13 08:14:31.233224] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201fca0 is same with the state(5) to be set 00:27:39.554 [2024-07-13 08:14:31.233236] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x201fca0 is same with the state(5) to be set 00:27:39.554 [2024-07-13 08:14:31.233249] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201fca0 is same with the state(5) to be set 00:27:39.554 [2024-07-13 08:14:31.233260] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201fca0 is same with the state(5) to be set 00:27:39.554 [2024-07-13 08:14:31.233273] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201fca0 is same with the state(5) to be set 00:27:39.554 [2024-07-13 08:14:31.233285] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201fca0 is same with the state(5) to be set 00:27:39.554 [2024-07-13 08:14:31.233297] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201fca0 is same with the state(5) to be set 00:27:39.555 [2024-07-13 08:14:31.233310] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201fca0 is same with the state(5) to be set 00:27:39.555 [2024-07-13 08:14:31.233322] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201fca0 is same with the state(5) to be set 00:27:39.555 [2024-07-13 08:14:31.233338] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201fca0 is same with the state(5) to be set 00:27:39.555 [2024-07-13 08:14:31.233351] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201fca0 is same with the state(5) to be set 00:27:39.555 [2024-07-13 08:14:31.233364] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201fca0 is same with the state(5) to be set 00:27:39.555 [2024-07-13 08:14:31.233391] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201fca0 is same with the state(5) to be set 00:27:39.555 [2024-07-13 08:14:31.233408] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201fca0 is same with the state(5) to be set 00:27:39.555 [2024-07-13 08:14:31.233421] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201fca0 is same with the state(5) to be set 00:27:39.555 [2024-07-13 08:14:31.233434] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201fca0 is same with the state(5) to be set 00:27:39.555 [2024-07-13 08:14:31.233447] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201fca0 is same with the state(5) to be set 00:27:39.555 [2024-07-13 08:14:31.233459] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201fca0 is same with the state(5) to be set 00:27:39.555 [2024-07-13 08:14:31.233472] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201fca0 is same with the state(5) to be set 00:27:39.555 [2024-07-13 08:14:31.233484] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201fca0 is same with the state(5) to be set 00:27:39.555 [2024-07-13 08:14:31.233496] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201fca0 is same with the state(5) to be set 00:27:39.555 [2024-07-13 08:14:31.233526] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201fca0 is same with the state(5) to be set 00:27:39.555 [2024-07-13 08:14:31.233541] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201fca0 is same with the state(5) to be set 00:27:39.555 [2024-07-13 
08:14:31.233554] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201fca0 is same with the state(5) to be set 00:27:39.555 [2024-07-13 08:14:31.233567] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201fca0 is same with the state(5) to be set 00:27:39.555 [2024-07-13 08:14:31.233580] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201fca0 is same with the state(5) to be set 00:27:39.555 [2024-07-13 08:14:31.233592] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201fca0 is same with the state(5) to be set 00:27:39.555 [2024-07-13 08:14:31.233605] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201fca0 is same with the state(5) to be set 00:27:39.555 [2024-07-13 08:14:31.233618] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201fca0 is same with the state(5) to be set 00:27:39.555 [2024-07-13 08:14:31.233631] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201fca0 is same with the state(5) to be set 00:27:39.555 [2024-07-13 08:14:31.233625] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:27:39.555 [2024-07-13 08:14:31.233644] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201fca0 is same with the state(5) to be set 00:27:39.555 [2024-07-13 08:14:31.233657] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201fca0 is same with the state(5) to be set 00:27:39.555 [2024-07-13 08:14:31.233669] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201fca0 is same with the state(5) to be set 00:27:39.555 [2024-07-13 08:14:31.233688] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:27:39.555 [2024-07-13 08:14:31.233700] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201fca0 is same with the state(5) to be set 00:27:39.555 [2024-07-13 08:14:31.233720] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201fca0 is same with the state(5) to be set 00:27:39.555 [2024-07-13 08:14:31.233734] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201fca0 is same with the state(5) to be set 00:27:39.555 [2024-07-13 08:14:31.233738] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205db10 (9): Bad file descriptor 00:27:39.555 [2024-07-13 08:14:31.233747] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201fca0 is same with the state(5) to be set 00:27:39.555 [2024-07-13 08:14:31.233759] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201fca0 is same with the state(5) to be set 00:27:39.555 [2024-07-13 08:14:31.233764] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205f8c0 (9): Bad file descriptor 00:27:39.555 [2024-07-13 08:14:31.233772] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201fca0 is same with the state(5) to be set 00:27:39.555 [2024-07-13 08:14:31.233785] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201fca0 is same with the state(5) to be set 00:27:39.555 [2024-07-13 08:14:31.233797] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x201fca0 is same with the state(5) to be set 00:27:39.555 
[2024-07-13 08:14:31.234885] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2020160 is same with the state(5) to be set 00:27:39.555 [2024-07-13 08:14:31.234922] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2020160 is same with the state(5) to be set 00:27:39.555 [2024-07-13 08:14:31.235226] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20204d0 is same with the state(5) to be set 00:27:39.555 [2024-07-13 08:14:31.235252] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20204d0 is same with the state(5) to be set 00:27:39.555 [2024-07-13 08:14:31.235265] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20204d0 is same with the state(5) to be set 00:27:39.555 [2024-07-13 08:14:31.235277] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20204d0 is same with the state(5) to be set 00:27:39.555 [2024-07-13 08:14:31.235289] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20204d0 is same with the state(5) to be set 00:27:39.555 [2024-07-13 08:14:31.235300] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20204d0 is same with the state(5) to be set 00:27:39.555 [2024-07-13 08:14:31.235312] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20204d0 is same with the state(5) to be set 00:27:39.555 [2024-07-13 08:14:31.235324] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20204d0 is same with the state(5) to be set 00:27:39.555 [2024-07-13 08:14:31.235336] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20204d0 is same with the state(5) to be set 00:27:39.555 [2024-07-13 08:14:31.235348] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20204d0 is same with the state(5) to be set 00:27:39.555 [2024-07-13 08:14:31.235359] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20204d0 is same with the state(5) to be set 00:27:39.555 [2024-07-13 08:14:31.235371] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20204d0 is same with the state(5) to be set 00:27:39.555 [2024-07-13 08:14:31.235382] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20204d0 is same with the state(5) to be set 00:27:39.555 [2024-07-13 08:14:31.235394] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20204d0 is same with the state(5) to be set 00:27:39.555 [2024-07-13 08:14:31.235406] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20204d0 is same with the state(5) to be set 00:27:39.555 [2024-07-13 08:14:31.235418] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20204d0 is same with the state(5) to be set 00:27:39.555 [2024-07-13 08:14:31.235435] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20204d0 is same with the state(5) to be set 00:27:39.555 [2024-07-13 08:14:31.235448] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20204d0 is same with the state(5) to be set 00:27:39.555 [2024-07-13 08:14:31.235460] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20204d0 is same with the state(5) to be set 00:27:39.555 [2024-07-13 08:14:31.235471] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x20204d0 is same with the state(5) to be set 00:27:39.555 [2024-07-13 08:14:31.235483] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20204d0 is same with the state(5) to be set 00:27:39.555 [2024-07-13 08:14:31.235495] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20204d0 is same with the state(5) to be set 00:27:39.555 [2024-07-13 08:14:31.235506] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20204d0 is same with the state(5) to be set 00:27:39.555 [2024-07-13 08:14:31.235518] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20204d0 is same with the state(5) to be set 00:27:39.555 [2024-07-13 08:14:31.235530] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20204d0 is same with the state(5) to be set 00:27:39.555 [2024-07-13 08:14:31.235542] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20204d0 is same with the state(5) to be set 00:27:39.555 [2024-07-13 08:14:31.235554] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20204d0 is same with the state(5) to be set 00:27:39.555 [2024-07-13 08:14:31.235565] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20204d0 is same with the state(5) to be set 00:27:39.555 [2024-07-13 08:14:31.235577] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20204d0 is same with the state(5) to be set 00:27:39.555 [2024-07-13 08:14:31.235589] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20204d0 is same with the state(5) to be set 00:27:39.555 [2024-07-13 08:14:31.235601] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20204d0 is same with the state(5) to be set 00:27:39.555 [2024-07-13 08:14:31.235613] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20204d0 is same with the state(5) to be set 00:27:39.555 [2024-07-13 08:14:31.235625] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20204d0 is same with the state(5) to be set 00:27:39.555 [2024-07-13 08:14:31.235637] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20204d0 is same with the state(5) to be set 00:27:39.555 [2024-07-13 08:14:31.235649] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20204d0 is same with the state(5) to be set 00:27:39.555 [2024-07-13 08:14:31.235661] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20204d0 is same with the state(5) to be set 00:27:39.555 [2024-07-13 08:14:31.235673] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20204d0 is same with the state(5) to be set 00:27:39.555 [2024-07-13 08:14:31.235685] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20204d0 is same with the state(5) to be set 00:27:39.555 [2024-07-13 08:14:31.235697] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20204d0 is same with the state(5) to be set 00:27:39.555 [2024-07-13 08:14:31.235709] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20204d0 is same with the state(5) to be set 00:27:39.555 [2024-07-13 08:14:31.235721] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20204d0 is same with the state(5) to be set 00:27:39.555 [2024-07-13 08:14:31.235733] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20204d0 is same with the state(5) to be set 00:27:39.555 [2024-07-13 08:14:31.235744] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20204d0 is same with the state(5) to be set 00:27:39.555 [2024-07-13 08:14:31.235760] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20204d0 is same with the state(5) to be set 00:27:39.555 [2024-07-13 08:14:31.235772] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20204d0 is same with the state(5) to be set 00:27:39.555 [2024-07-13 08:14:31.235785] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20204d0 is same with the state(5) to be set 00:27:39.555 [2024-07-13 08:14:31.235796] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20204d0 is same with the state(5) to be set 00:27:39.555 [2024-07-13 08:14:31.235808] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20204d0 is same with the state(5) to be set 00:27:39.555 [2024-07-13 08:14:31.235820] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20204d0 is same with the state(5) to be set 00:27:39.555 [2024-07-13 08:14:31.235832] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20204d0 is same with the state(5) to be set 00:27:39.555 [2024-07-13 08:14:31.235844] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20204d0 is same with the state(5) to be set 00:27:39.555 [2024-07-13 08:14:31.235857] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20204d0 is same with the state(5) to be set 00:27:39.555 [2024-07-13 08:14:31.235877] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20204d0 is same with the state(5) to be set 00:27:39.556 [2024-07-13 08:14:31.235890] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20204d0 is same with the state(5) to be set 00:27:39.556 [2024-07-13 08:14:31.235913] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20204d0 is same with the state(5) to be set 00:27:39.556 [2024-07-13 08:14:31.235926] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20204d0 is same with the state(5) to be set 00:27:39.556 [2024-07-13 08:14:31.235937] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20204d0 is same with the state(5) to be set 00:27:39.556 [2024-07-13 08:14:31.235949] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20204d0 is same with the state(5) to be set 00:27:39.556 [2024-07-13 08:14:31.235960] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20204d0 is same with the state(5) to be set 00:27:39.556 [2024-07-13 08:14:31.235972] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20204d0 is same with the state(5) to be set 00:27:39.556 [2024-07-13 08:14:31.235984] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20204d0 is same with the state(5) to be set 00:27:39.556 [2024-07-13 08:14:31.235996] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20204d0 is same with the state(5) to be set 00:27:39.556 [2024-07-13 08:14:31.236007] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20204d0 is same with the 
state(5) to be set
00:27:39.556 [2024-07-13 08:14:31.237490 .. 08:14:31.239367] nvme_qpair.c: *NOTICE*: READ sqid:1 cid:0..63 nsid:1 lba:16384..24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each aborted: ABORTED - SQ DELETION (00/08) qid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [64 repetitive command/completion NOTICE pairs condensed]
00:27:39.557 [2024-07-13 08:14:31.239382] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20388a0 is same with the state(5) to be set
00:27:39.557 [2024-07-13 08:14:31.239911] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20388a0 was disconnected and freed. reset controller.
00:27:39.557 [2024-07-13 08:14:31.240127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.557 [2024-07-13 08:14:31.240157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205f8c0 with addr=10.0.0.2, port=4420
00:27:39.557 [2024-07-13 08:14:31.240182] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205f8c0 is same with the state(5) to be set
00:27:39.557 [2024-07-13 08:14:31.240360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.557 [2024-07-13 08:14:31.240385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205db10 with addr=10.0.0.2, port=4420
00:27:39.557 [2024-07-13 08:14:31.240400] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205db10 is same with the state(5) to be set
00:27:39.557 [2024-07-13 08:14:31.240459 .. 08:14:31.240558] nvme_qpair.c: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0..3 nsid:0 cdw10:00000000 cdw11:00000000, each aborted: ABORTED - SQ DELETION (00/08) qid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [4 admin command/completion pairs condensed]
00:27:39.557 [2024-07-13 08:14:31.240570] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207f3d0 is same with the state(5) to be set
00:27:39.557 [2024-07-13 08:14:31.240616 .. 08:14:31.240714] nvme_qpair.c: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0..3 nsid:0 cdw10:00000000 cdw11:00000000, each aborted: ABORTED - SQ DELETION (00/08) qid:0 [4 admin command/completion pairs condensed]
00:27:39.557 [2024-07-13 08:14:31.240726] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f1c40 is same with the state(5) to be set
00:27:39.557 [2024-07-13 08:14:31.240772 .. 08:14:31.240895] nvme_qpair.c: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0..3, each aborted: ABORTED - SQ DELETION (00/08) qid:0 [4 admin command/completion pairs condensed]
00:27:39.557 [2024-07-13 08:14:31.240907] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b55610 is same with the state(5) to be set
00:27:39.557 [2024-07-13 08:14:31.240959 .. 08:14:31.241095] nvme_qpair.c: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0..3, each aborted: ABORTED - SQ DELETION (00/08) qid:0 [4 admin command/completion pairs condensed]
00:27:39.557 [2024-07-13 08:14:31.241108] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2205030 is same with the state(5) to be set
00:27:39.557 [2024-07-13 08:14:31.241138] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21fd350 (9): Bad file descriptor
00:27:39.557 [2024-07-13 08:14:31.241183 .. 08:14:31.241279] nvme_qpair.c: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0..3, each aborted: ABORTED - SQ DELETION (00/08) qid:0 [4 admin command/completion pairs condensed]
00:27:39.557 [2024-07-13 08:14:31.241291] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2086490 is same with the state(5) to be set
00:27:39.557 [2024-07-13 08:14:31.241336 .. 08:14:31.241440] nvme_qpair.c: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0..3, each aborted: ABORTED - SQ DELETION (00/08) qid:0 [4 admin command/completion pairs condensed]
00:27:39.557 [2024-07-13 08:14:31.241452] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205f370 is same with the state(5) to be set
00:27:39.557 [2024-07-13 08:14:31.241478] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c26ee0 (9): Bad file descriptor
00:27:39.557 [2024-07-13 08:14:31.241564] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:27:39.557 [2024-07-13 08:14:31.241646 .. 08:14:31.242399] nvme_qpair.c: *NOTICE*: READ sqid:1 cid:62..63 lba:24320..24448 and WRITE sqid:1 cid:0..22 lba:24576..27392 (nsid:1 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), each aborted: ABORTED - SQ DELETION (00/08) qid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [25 command/completion NOTICE pairs condensed]
00:27:39.558 [2024-07-13 08:14:31.242413] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2030d20 is same with the state(5) to be set
00:27:39.558 [2024-07-13 08:14:31.242495] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2030d20 was disconnected and freed. reset controller.
00:27:39.558 [2024-07-13 08:14:31.242561 .. 08:14:31.242617] nvme_qpair.c: *NOTICE*: READ sqid:1 cid:62..63 lba:24320..24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each aborted: ABORTED - SQ DELETION (00/08) qid:1 [2 command/completion pairs condensed]
00:27:39.558 [2024-07-13 08:14:31.242631] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2032170 is same with the state(5) to be set
00:27:39.558 [2024-07-13 08:14:31.242712] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2032170 was disconnected and freed. reset controller.
00:27:39.558 [2024-07-13 08:14:31.242990] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2036020 was disconnected and freed. reset controller.
00:27:39.558 [2024-07-13 08:14:31.244233] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205f8c0 (9): Bad file descriptor
00:27:39.558 [2024-07-13 08:14:31.244264] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205db10 (9): Bad file descriptor
00:27:39.558 [2024-07-13 08:14:31.246153] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:27:39.558 [2024-07-13 08:14:31.246597] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:27:39.558 [2024-07-13 08:14:31.246627] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:27:39.558 [2024-07-13 08:14:31.246645] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:27:39.558 [2024-07-13 08:14:31.246669] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2086490 (9): Bad file descriptor
00:27:39.558 [2024-07-13 08:14:31.246692] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205f370 (9): Bad file descriptor
00:27:39.558 [2024-07-13 08:14:31.246725] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state
00:27:39.558 [2024-07-13 08:14:31.246742] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed
00:27:39.558 [2024-07-13 08:14:31.246765] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
00:27:39.558 [2024-07-13 08:14:31.246786] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state
00:27:39.558 [2024-07-13 08:14:31.246800] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed
00:27:39.558 [2024-07-13 08:14:31.246813] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state.
00:27:39.558 [2024-07-13 08:14:31.246833] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:27:39.558 [2024-07-13 08:14:31.246855] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:27:39.558 [2024-07-13 08:14:31.247324] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:39.558 [2024-07-13 08:14:31.247349] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:39.558 [2024-07-13 08:14:31.247363] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:27:39.558 [2024-07-13 08:14:31.247385] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21f1c40 (9): Bad file descriptor
00:27:39.558 [2024-07-13 08:14:31.247550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.558 [2024-07-13 08:14:31.247579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21fd350 with addr=10.0.0.2, port=4420
00:27:39.558 [2024-07-13 08:14:31.247595] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21fd350 is same with the state(5) to be set
00:27:39.558 [2024-07-13 08:14:31.248207] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:27:39.558 [2024-07-13 08:14:31.248287] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:27:39.558 [2024-07-13 08:14:31.248508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.558 [2024-07-13 08:14:31.248536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205f370 with addr=10.0.0.2, port=4420
00:27:39.558 [2024-07-13 08:14:31.248552] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205f370 is same with the state(5) to be set
00:27:39.558 [2024-07-13 08:14:31.248663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.558 [2024-07-13 08:14:31.248688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2086490 with addr=10.0.0.2, port=4420
00:27:39.558 [2024-07-13 08:14:31.248703] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2086490 is same with the state(5) to be set
00:27:39.558 [2024-07-13 08:14:31.248732] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21fd350 (9): Bad file descriptor
00:27:39.558 [2024-07-13 08:14:31.248947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.558 [2024-07-13 08:14:31.248976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f1c40 with addr=10.0.0.2, port=4420
00:27:39.558 [2024-07-13 08:14:31.248991] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f1c40 is same with the state(5) to be set
00:27:39.558 [2024-07-13 08:14:31.249010] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205f370 (9): Bad file descriptor
00:27:39.558 [2024-07-13 08:14:31.249028] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2086490 (9): Bad file descriptor
00:27:39.558 [2024-07-13 08:14:31.249044] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state
00:27:39.558 [2024-07-13 08:14:31.249056] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed
00:27:39.558 [2024-07-13 08:14:31.249069] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state.
00:27:39.558 [2024-07-13 08:14:31.249136] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:39.558 [2024-07-13 08:14:31.249159] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21f1c40 (9): Bad file descriptor
00:27:39.558 [2024-07-13 08:14:31.249176] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state
00:27:39.558 [2024-07-13 08:14:31.249189] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed
00:27:39.558 [2024-07-13 08:14:31.249202] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state.
00:27:39.558 [2024-07-13 08:14:31.249219] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state
00:27:39.558 [2024-07-13 08:14:31.249233] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed
00:27:39.558 [2024-07-13 08:14:31.249246] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state.
00:27:39.558 [2024-07-13 08:14:31.249289] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:39.558 [2024-07-13 08:14:31.249306] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:39.558 [2024-07-13 08:14:31.249319] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state
00:27:39.558 [2024-07-13 08:14:31.249331] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed
00:27:39.558 [2024-07-13 08:14:31.249343] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state.
00:27:39.558 [2024-07-13 08:14:31.249386] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:39.558 [2024-07-13 08:14:31.249989] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207f3d0 (9): Bad file descriptor
00:27:39.558 [2024-07-13 08:14:31.250026] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b55610 (9): Bad file descriptor
00:27:39.558 [2024-07-13 08:14:31.250056] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2205030 (9): Bad file descriptor
00:27:39.558 [2024-07-13 08:14:31.250189 .. 08:14:31.251961] nvme_qpair.c: *NOTICE*: READ sqid:1 cid:0..58 nsid:1 lba:16384..23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each aborted: ABORTED - SQ DELETION (00/08) qid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [repetitive command/completion NOTICE pairs condensed]
00:27:39.559 [2024-07-13 08:14:31.251974] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.559 [2024-07-13 08:14:31.251988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.559 [2024-07-13 08:14:31.252001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.559 [2024-07-13 08:14:31.252016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.559 [2024-07-13 08:14:31.252029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.559 [2024-07-13 08:14:31.252044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.559 [2024-07-13 08:14:31.252058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.559 [2024-07-13 08:14:31.252073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.559 [2024-07-13 08:14:31.252086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.559 [2024-07-13 08:14:31.252101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.559 [2024-07-13 08:14:31.252114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.559 [2024-07-13 08:14:31.252128] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2a0e0 is same with the state(5) to be set 00:27:39.559 [2024-07-13 08:14:31.253411] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:39.559 [2024-07-13 08:14:31.253654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:39.559 [2024-07-13 08:14:31.253682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26ee0 with addr=10.0.0.2, port=4420 00:27:39.559 [2024-07-13 08:14:31.253699] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c26ee0 is same with the state(5) to be set 00:27:39.559 [2024-07-13 08:14:31.254011] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c26ee0 (9): Bad file descriptor 00:27:39.559 [2024-07-13 08:14:31.254069] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:39.559 [2024-07-13 08:14:31.254087] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:39.559 [2024-07-13 08:14:31.254102] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:39.559 [2024-07-13 08:14:31.254153] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
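For readers of the entries above: errno = 111 from connect() is ECONNREFUSED on Linux, meaning the target at 10.0.0.2:4420 actively refused the TCP connection while the test had it torn down, which is why every reconnect attempt fails and the controller ends up in failed state. Below is a minimal, self-contained C sketch (illustrative only, not SPDK source; the throwaway-listener trick for finding a closed port is an assumption of the demo and can race with other processes) that reproduces the same errno:

/* econnrefused_demo.c: show that connect(2) to a port with no listener
 * reports errno 111 (ECONNREFUSED) on Linux, as in the posix_sock_create
 * entries above. Build: cc econnrefused_demo.c && ./a.out */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    /* Grab a free loopback port by binding to port 0, then release it so
     * nothing is listening there (mirrors the NVMe-oF target going away). */
    int probe = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = { .sin_family = AF_INET };
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    socklen_t len = sizeof(addr);
    if (probe < 0 || bind(probe, (struct sockaddr *)&addr, sizeof(addr)) < 0 ||
        getsockname(probe, (struct sockaddr *)&addr, &len) < 0) {
        perror("setup");
        return 1;
    }
    close(probe);

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
        /* Typically prints: connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    close(fd);
    return 0;
}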
00:27:39.559 [2024-07-13 08:14:31.256690] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:27:39.559 [2024-07-13 08:14:31.256717] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:27:39.559 [2024-07-13 08:14:31.256922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.559 [2024-07-13 08:14:31.256952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205db10 with addr=10.0.0.2, port=4420
00:27:39.559 [2024-07-13 08:14:31.256968] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205db10 is same with the state(5) to be set
00:27:39.559 [2024-07-13 08:14:31.257095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.559 [2024-07-13 08:14:31.257121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205f8c0 with addr=10.0.0.2, port=4420
00:27:39.559 [2024-07-13 08:14:31.257136] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205f8c0 is same with the state(5) to be set
00:27:39.559 [2024-07-13 08:14:31.257193] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205db10 (9): Bad file descriptor
00:27:39.559 [2024-07-13 08:14:31.257220] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205f8c0 (9): Bad file descriptor
00:27:39.559 [2024-07-13 08:14:31.257260] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:27:39.559 [2024-07-13 08:14:31.257289] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state
00:27:39.559 [2024-07-13 08:14:31.257305] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed
00:27:39.559 [2024-07-13 08:14:31.257318] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state.
00:27:39.559 [2024-07-13 08:14:31.257334] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state
00:27:39.559 [2024-07-13 08:14:31.257348] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed
00:27:39.559 [2024-07-13 08:14:31.257360] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
00:27:39.559 [2024-07-13 08:14:31.257398] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:39.559 [2024-07-13 08:14:31.257415] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:39.559 [2024-07-13 08:14:31.257533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.559 [2024-07-13 08:14:31.257560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21fd350 with addr=10.0.0.2, port=4420
00:27:39.559 [2024-07-13 08:14:31.257575] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21fd350 is same with the state(5) to be set
00:27:39.559 [2024-07-13 08:14:31.257617] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21fd350 (9): Bad file descriptor
00:27:39.559 [2024-07-13 08:14:31.257673] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state
00:27:39.559 [2024-07-13 08:14:31.257693] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed
00:27:39.559 [2024-07-13 08:14:31.257706] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state.
00:27:39.559 [2024-07-13 08:14:31.257760] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:39.559 [2024-07-13 08:14:31.257810] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:27:39.559 [2024-07-13 08:14:31.257832] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:27:39.559 [2024-07-13 08:14:31.258007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.559 [2024-07-13 08:14:31.258034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2086490 with addr=10.0.0.2, port=4420
00:27:39.559 [2024-07-13 08:14:31.258055] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2086490 is same with the state(5) to be set
00:27:39.559 [2024-07-13 08:14:31.258182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.559 [2024-07-13 08:14:31.258207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205f370 with addr=10.0.0.2, port=4420
00:27:39.559 [2024-07-13 08:14:31.258222] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x205f370 is same with the state(5) to be set
00:27:39.559 [2024-07-13 08:14:31.258264] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2086490 (9): Bad file descriptor
00:27:39.559 [2024-07-13 08:14:31.258286] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x205f370 (9): Bad file descriptor
00:27:39.559 [2024-07-13 08:14:31.258323] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state
00:27:39.559 [2024-07-13 08:14:31.258339] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed
00:27:39.559 [2024-07-13 08:14:31.258351] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state.
00:27:39.559 [2024-07-13 08:14:31.258369] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state
00:27:39.559 [2024-07-13 08:14:31.258383] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed
00:27:39.559 [2024-07-13 08:14:31.258395] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state.
00:27:39.559 [2024-07-13 08:14:31.258433] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:39.559 [2024-07-13 08:14:31.258449] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:39.560 [2024-07-13 08:14:31.258807] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:27:39.560 [2024-07-13 08:14:31.258975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.560 [2024-07-13 08:14:31.259003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21f1c40 with addr=10.0.0.2, port=4420
00:27:39.560 [2024-07-13 08:14:31.259019] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f1c40 is same with the state(5) to be set
00:27:39.560 [2024-07-13 08:14:31.259061] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21f1c40 (9): Bad file descriptor
00:27:39.560 [2024-07-13 08:14:31.259101] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state
00:27:39.560 [2024-07-13 08:14:31.259117] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed
00:27:39.560 [2024-07-13 08:14:31.259130] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state.
00:27:39.560 [2024-07-13 08:14:31.259169] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
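The status pair printed as (00/08) throughout the abort dumps decodes, per the NVMe specification's completion-queue-entry status field, to status code type 0x0 (generic command status) and status code 0x08 (Command Aborted due to SQ Deletion): the submission queue is deleted during the controller reset, so every outstanding READ/WRITE completes as aborted. A small C sketch decoding that 16-bit status halfword follows (bit layout from the NVMe spec; the sample value 0x0010 is an assumed input chosen to match the log lines):

/* nvme_status_decode.c: decode the "(SCT/SC)" pair printed by the
 * completion entries above. Status halfword = CQE dword 3 bits 31:16,
 * with the phase tag in its lowest bit. */
#include <stdio.h>
#include <stdint.h>

static void print_status(uint16_t status)
{
    unsigned p   = status & 0x1;          /* bit 0:     phase tag */
    unsigned sc  = (status >> 1) & 0xff;  /* bits 8:1:  status code */
    unsigned sct = (status >> 9) & 0x7;   /* bits 11:9: status code type */
    unsigned crd = (status >> 12) & 0x3;  /* bits 13:12: command retry delay */
    unsigned m   = (status >> 14) & 0x1;  /* bit 14:    more */
    unsigned dnr = (status >> 15) & 0x1;  /* bit 15:    do not retry */

    printf("(%02x/%02x) p:%u m:%u dnr:%u crd:%u", sct, sc, p, m, dnr, crd);
    if (sct == 0x0 && sc == 0x08)
        printf("  -> ABORTED - SQ DELETION");   /* generic status, SC 0x08 */
    printf("\n");
}

int main(void)
{
    print_status(0x0010); /* sct=0, sc=0x08, p/m/dnr=0: matches the log */
    return 0;
}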
00:27:39.560 [2024-07-13 08:14:31.260131 .. 08:14:31.262034] nvme_qpair.c: [condensed: 64 repeated command/completion pairs: READ sqid:1 cid:0-63 nsid:1 lba:16384-24448 (step 128) len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:27:39.560 [2024-07-13 08:14:31.262048] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2033620 is same with the state(5) to be set
00:27:39.560 [2024-07-13 08:14:31.263326 .. 08:14:31.264980] nvme_qpair.c: [condensed: repeated command/completion pairs: READ sqid:1 cid:4-57 nsid:1 lba:16896-23680 (step 128) and WRITE sqid:1 cid:0-1 nsid:1 lba:24576-24704, len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0; continues below]
00:27:39.561 [2024-07-13 
08:14:31.264995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.561 [2024-07-13 08:14:31.265008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.561 [2024-07-13 08:14:31.265024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.561 [2024-07-13 08:14:31.265037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.561 [2024-07-13 08:14:31.265053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.561 [2024-07-13 08:14:31.265066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.561 [2024-07-13 08:14:31.265081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.561 [2024-07-13 08:14:31.265094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.561 [2024-07-13 08:14:31.265111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.561 [2024-07-13 08:14:31.265124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.561 [2024-07-13 08:14:31.265140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.561 [2024-07-13 08:14:31.265153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.561 [2024-07-13 08:14:31.265169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.561 [2024-07-13 08:14:31.265182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.561 [2024-07-13 08:14:31.265197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.561 [2024-07-13 08:14:31.265217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.561 [2024-07-13 08:14:31.265232] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2034b20 is same with the state(5) to be set 00:27:39.561 [2024-07-13 08:14:31.266470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.561 [2024-07-13 08:14:31.266493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.561 [2024-07-13 08:14:31.266514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.561 [2024-07-13 08:14:31.266529] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.561 [2024-07-13 08:14:31.266545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.561 [2024-07-13 08:14:31.266560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.561 [2024-07-13 08:14:31.266576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.561 [2024-07-13 08:14:31.266589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.561 [2024-07-13 08:14:31.266604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.562 [2024-07-13 08:14:31.266618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.562 [2024-07-13 08:14:31.266633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.562 [2024-07-13 08:14:31.266646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.562 [2024-07-13 08:14:31.266661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.562 [2024-07-13 08:14:31.266675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.562 [2024-07-13 08:14:31.266690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.562 [2024-07-13 08:14:31.266704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.562 [2024-07-13 08:14:31.266719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.562 [2024-07-13 08:14:31.266733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.562 [2024-07-13 08:14:31.266749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.562 [2024-07-13 08:14:31.266762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.562 [2024-07-13 08:14:31.266777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.562 [2024-07-13 08:14:31.266791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.562 [2024-07-13 08:14:31.266805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.562 [2024-07-13 08:14:31.266819] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.562 [2024-07-13 08:14:31.266839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.562 [2024-07-13 08:14:31.266853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.562 [2024-07-13 08:14:31.266880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.562 [2024-07-13 08:14:31.266897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.562 [2024-07-13 08:14:31.266912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.562 [2024-07-13 08:14:31.266926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.562 [2024-07-13 08:14:31.266941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.562 [2024-07-13 08:14:31.266954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.562 [2024-07-13 08:14:31.266969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.562 [2024-07-13 08:14:31.266982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.562 [2024-07-13 08:14:31.266997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.562 [2024-07-13 08:14:31.267011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.562 [2024-07-13 08:14:31.267026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.562 [2024-07-13 08:14:31.267039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.562 [2024-07-13 08:14:31.267053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.562 [2024-07-13 08:14:31.267067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.562 [2024-07-13 08:14:31.267082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.562 [2024-07-13 08:14:31.267095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.562 [2024-07-13 08:14:31.267110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.562 [2024-07-13 08:14:31.267123] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.562 [2024-07-13 08:14:31.267139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.562 [2024-07-13 08:14:31.267152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.562 [2024-07-13 08:14:31.267167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.562 [2024-07-13 08:14:31.267180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.562 [2024-07-13 08:14:31.267197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.562 [2024-07-13 08:14:31.267223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.562 [2024-07-13 08:14:31.267249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.562 [2024-07-13 08:14:31.267272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.562 [2024-07-13 08:14:31.267298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.562 [2024-07-13 08:14:31.267319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.562 [2024-07-13 08:14:31.267336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.562 [2024-07-13 08:14:31.267370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.562 [2024-07-13 08:14:31.267397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.562 [2024-07-13 08:14:31.267412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.562 [2024-07-13 08:14:31.267438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.562 [2024-07-13 08:14:31.267464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.562 [2024-07-13 08:14:31.267485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.562 [2024-07-13 08:14:31.267499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.562 [2024-07-13 08:14:31.267514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.562 [2024-07-13 08:14:31.267528] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.562 [2024-07-13 08:14:31.267543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.562 [2024-07-13 08:14:31.267556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.562 [2024-07-13 08:14:31.267571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.562 [2024-07-13 08:14:31.267584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.562 [2024-07-13 08:14:31.267599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.562 [2024-07-13 08:14:31.267612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.562 [2024-07-13 08:14:31.267628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.562 [2024-07-13 08:14:31.267641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.562 [2024-07-13 08:14:31.267656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.562 [2024-07-13 08:14:31.267670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.562 [2024-07-13 08:14:31.267689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.562 [2024-07-13 08:14:31.267703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.562 [2024-07-13 08:14:31.267719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.562 [2024-07-13 08:14:31.267733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.562 [2024-07-13 08:14:31.267749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.562 [2024-07-13 08:14:31.267762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.562 [2024-07-13 08:14:31.267778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.562 [2024-07-13 08:14:31.267791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.562 [2024-07-13 08:14:31.267806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.562 [2024-07-13 08:14:31.267819] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.562 [2024-07-13 08:14:31.267835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.562 [2024-07-13 08:14:31.267847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.562 [2024-07-13 08:14:31.267862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.562 [2024-07-13 08:14:31.267884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.562 [2024-07-13 08:14:31.267900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.562 [2024-07-13 08:14:31.267931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.562 [2024-07-13 08:14:31.267949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.562 [2024-07-13 08:14:31.267963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.562 [2024-07-13 08:14:31.267978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.562 [2024-07-13 08:14:31.267991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.562 [2024-07-13 08:14:31.268006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.562 [2024-07-13 08:14:31.268019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.562 [2024-07-13 08:14:31.268034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.562 [2024-07-13 08:14:31.268048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.562 [2024-07-13 08:14:31.268063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.562 [2024-07-13 08:14:31.268080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.562 [2024-07-13 08:14:31.268096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.562 [2024-07-13 08:14:31.268109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.562 [2024-07-13 08:14:31.268124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.562 [2024-07-13 08:14:31.268138] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.562 [2024-07-13 08:14:31.268153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.562 [2024-07-13 08:14:31.268166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.562 [2024-07-13 08:14:31.268181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.562 [2024-07-13 08:14:31.268194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.562 [2024-07-13 08:14:31.268209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.562 [2024-07-13 08:14:31.268223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.562 [2024-07-13 08:14:31.268239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.562 [2024-07-13 08:14:31.268252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.562 [2024-07-13 08:14:31.268267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.562 [2024-07-13 08:14:31.268280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.562 [2024-07-13 08:14:31.268295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.562 [2024-07-13 08:14:31.268308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.562 [2024-07-13 08:14:31.268324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.562 [2024-07-13 08:14:31.268337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.562 [2024-07-13 08:14:31.268352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.562 [2024-07-13 08:14:31.268365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.562 [2024-07-13 08:14:31.268380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.562 [2024-07-13 08:14:31.268393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:39.562 [2024-07-13 08:14:31.268409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:39.562 [2024-07-13 08:14:31.268422] nvme_qpair.c: 
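
Every completion in the two bursts above carries the same status pair: "ABORTED - SQ DELETION (00/08)" is status code type 0x0 (generic) with status code 0x08, i.e. the command was aborted because its submission queue was deleted while the target shut down; the trailing p/m/dnr fields are the phase, more, and do-not-retry bits of the same completion status word. A minimal, self-contained C sketch of that decode (illustrative only; it mirrors, but is not, SPDK's spdk_nvme_print_completion(), and the helper name print_status is made up):

#include <stdio.h>
#include <stdint.h>

/* Decode the 16-bit status word from CQE dword 3 (bits 31:16):
 * bit 0 = phase tag (p), bits 8:1 = status code (SC),
 * bits 11:9 = status code type (SCT), bit 14 = more (m), bit 15 = dnr. */
static void print_status(uint16_t status_raw)
{
    uint8_t sc  = (status_raw >> 1) & 0xff; /* status code */
    uint8_t sct = (status_raw >> 9) & 0x7;  /* status code type */
    if (sct == 0x0 && sc == 0x08)
        printf("ABORTED - SQ DELETION (%02x/%02x)\n", sct, sc);
    else
        printf("status (%02x/%02x)\n", sct, sc);
}

int main(void)
{
    print_status(0x08 << 1); /* SCT 0x0, SC 0x08 -> the message seen in this log */
    return 0;
}
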
00:27:39.562 [2024-07-13 08:14:31.270133] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:27:39.562 [2024-07-13 08:14:31.270167] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:27:39.821 task offset: 26496 on job bdev=Nvme2n1 fails
00:27:39.821
00:27:39.821 Latency(us)
00:27:39.821 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:39.821 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:39.821 Job: Nvme1n1 ended in about 0.90 seconds with error
00:27:39.821 Verification LBA range: start 0x0 length 0x400
00:27:39.821 Nvme1n1 : 0.90 141.61 8.85 70.81 0.00 297844.43 21651.15 257872.02
00:27:39.821 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:39.821 Job: Nvme2n1 ended in about 0.88 seconds with error
00:27:39.821 Verification LBA range: start 0x0 length 0x400
00:27:39.821 Nvme2n1 : 0.88 217.50 13.59 72.50 0.00 213395.58 5437.06 250104.79
00:27:39.821 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:39.821 Job: Nvme3n1 ended in about 0.90 seconds with error
00:27:39.821 Verification LBA range: start 0x0 length 0x400
00:27:39.821 Nvme3n1 : 0.90 212.09 13.26 27.91 0.00 250712.17 20388.98 240784.12
00:27:39.821 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:39.821 Job: Nvme4n1 ended in about 0.88 seconds with error
00:27:39.821 Verification LBA range: start 0x0 length 0x400
00:27:39.821 Nvme4n1 : 0.88 217.21 13.58 72.40 0.00 204355.18 3980.71 264085.81
00:27:39.821 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:39.821 Job: Nvme5n1 ended in about 0.90 seconds with error
00:27:39.821 Verification LBA range: start 0x0 length 0x400
00:27:39.821 Nvme5n1 : 0.90 211.89 13.24 2.23 0.00 267755.01 17379.18 257872.02
00:27:39.821 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:39.821 Job: Nvme6n1 ended in about 0.91 seconds with error
00:27:39.821 Verification LBA range: start 0x0 length 0x400
00:27:39.821 Nvme6n1 : 0.91 140.08 8.75 70.04 0.00 270549.90 19709.35 262532.36
00:27:39.822 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:39.822 Job: Nvme7n1 ended in about 0.92 seconds with error
00:27:39.822 Verification LBA range: start 0x0 length 0x400
00:27:39.822 Nvme7n1 : 0.92 143.96 9.00 69.80 0.00 260271.54 20291.89 259425.47
00:27:39.822 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:39.822 Verification LBA range: start 0x0 length 0x400
00:27:39.822 Nvme8n1 : 0.89 216.21 13.51 0.00 0.00 250153.34 21165.70 259425.47
00:27:39.822 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:39.822 Job: Nvme9n1 ended in about 0.92 seconds with error
00:27:39.822 Verification LBA range: start 0x0 length 0x400
00:27:39.822 Nvme9n1 : 0.92 139.10 8.69 69.55 0.00 254964.37 20388.98 265639.25
00:27:39.822 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:39.822 Job: Nvme10n1 ended in about 0.89 seconds with error
00:27:39.822 Verification LBA range: start 0x0 length 0x400
00:27:39.822 Nvme10n1 : 0.89 143.05 8.94 71.52 0.00 240732.29 20874.43 292047.83
00:27:39.822 ===================================================================================================================
00:27:39.822 Total : 1782.69 111.42 526.76 0.00 248484.05 3980.71 292047.83
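
As a quick consistency check on the table: every job ran with a fixed 65536-byte IO size, so MiB/s should equal IOPS x 65536 / 2^20, i.e. IOPS / 16 (141.61 IOPS -> 8.85 MiB/s for Nvme1n1), and the per-device IOPS should sum to the reported 1782.69 total. A small C sketch of that arithmetic (a sanity check added here, not part of the bdevperf output):

#include <stdio.h>

int main(void)
{
    /* Per-device IOPS from the table above, Nvme1n1 .. Nvme10n1. */
    const double iops[] = { 141.61, 217.50, 212.09, 217.21, 211.89,
                            140.08, 143.96, 216.21, 139.10, 143.05 };
    const double io_size = 65536.0; /* bytes, from "IO size: 65536" */
    double total = 0.0;

    for (unsigned i = 0; i < sizeof(iops) / sizeof(iops[0]); i++) {
        printf("Nvme%un1: %7.2f IOPS -> %6.2f MiB/s\n",
               i + 1, iops[i], iops[i] * io_size / (1024.0 * 1024.0));
        total += iops[i];
    }
    printf("Total: %.2f IOPS (table reports 1782.69)\n", total);
    return 0;
}
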
00:27:39.822 [2024-07-13 08:14:31.298728] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:27:39.822 [2024-07-13 08:14:31.298817] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:27:39.822 [2024-07-13 08:14:31.299366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.822 [2024-07-13 08:14:31.299405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x207f3d0 with addr=10.0.0.2, port=4420
00:27:39.822 [2024-07-13 08:14:31.299427] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x207f3d0 is same with the state(5) to be set
00:27:39.822 [... the same connect()/recv-state error triplet repeats for tqpair=0x1b55610 and tqpair=0x2205030, both addr=10.0.0.2 port=4420 ...]
00:27:39.822 [2024-07-13 08:14:31.299802] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:27:39.822 [... the same notice repeats six more times ...]
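
The repeated bdev_nvme_failover_ctrlr_unsafe notices mean a reset was already in flight for a controller when failover was requested, so the new attempt is declined rather than queued behind it. A hedged C sketch of that kind of single-winner guard (an illustration of the pattern the notice implies, not bdev_nvme's actual code; try_start_failover is a made-up name):

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static atomic_bool reset_in_progress; /* statically zero-initialized */

static bool try_start_failover(const char *path)
{
    bool expected = false;
    /* Only the first caller flips the flag; everyone else is declined. */
    if (!atomic_compare_exchange_strong(&reset_in_progress, &expected, true)) {
        printf("%s: unable to perform failover, already in progress\n", path);
        return false;
    }
    printf("%s: failover started\n", path);
    return true;
}

int main(void)
{
    try_start_failover("path A"); /* wins the race */
    try_start_failover("path B"); /* declined, like the notices above */
    return 0;
}
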
00:27:39.822 [2024-07-13 08:14:31.300728] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:39.822 [... the same notice follows for cnode2, cnode4, cnode10, cnode3, cnode5 and cnode8 ...]
00:27:39.822 [2024-07-13 08:14:31.300948] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x207f3d0 (9): Bad file descriptor
00:27:39.822 [... likewise for tqpair=0x1b55610 and tqpair=0x2205030 ...]
00:27:39.822 [2024-07-13 08:14:31.301492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:39.822 [2024-07-13 08:14:31.301522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c26ee0 with addr=10.0.0.2, port=4420
00:27:39.822 [2024-07-13 08:14:31.301539] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c26ee0 is same with the state(5) to be set
00:27:39.822 [... the same connect()/recv-state error triplet repeats for tqpair=0x205f8c0, 0x205db10, 0x21fd350, 0x205f370, 0x2086490 and 0x21f1c40, all addr=10.0.0.2 port=4420 ...]
00:27:39.822 [2024-07-13 08:14:31.302635] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state
00:27:39.822 [2024-07-13 08:14:31.302648] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed
00:27:39.822 [2024-07-13 08:14:31.302664] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state.
00:27:39.822 [... the same error-state/reinitialization-failed/failed-state triplet follows for cnode7 and cnode9 ...]
00:27:39.822 [2024-07-13 08:14:31.302829] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:39.822 [... repeated twice more ...]
00:27:39.822 [2024-07-13 08:14:31.302888] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c26ee0 (9): Bad file descriptor
00:27:39.822 [... likewise for tqpair=0x205f8c0, 0x205db10, 0x21fd350, 0x205f370, 0x2086490 and 0x21f1c40 ...]
00:27:39.822 [2024-07-13 08:14:31.303030] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:27:39.822 [2024-07-13 08:14:31.303048] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:27:39.822 [2024-07-13 08:14:31.303062] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:39.822 [... the same triplet follows for cnode2, cnode4, cnode10, cnode3, cnode5 and cnode8 ...]
00:27:39.823 [2024-07-13 08:14:31.303350] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:27:39.823 [... repeated six more times, once per remaining controller ...]
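
On Linux, errno 111 is ECONNREFUSED: once the target process has been stopped, nothing listens on 10.0.0.2:4420, so every reconnect attempt in the block above is refused immediately and each controller ends up in the failed state. A trivial C check of that errno mapping (illustrative; posix_sock_create() in the log is SPDK's own socket layer, not this sketch):

#include <errno.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    int err = 111; /* the value printed by posix_sock_create() above */
    printf("errno %d: %s (ECONNREFUSED = %d)\n", err, strerror(err), ECONNREFUSED);
    return 0;
}
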
00:27:40.082 08:14:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:27:40.082 08:14:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:27:41.462 08:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 2042506 00:27:41.462 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (2042506) - No such process 00:27:41.462 08:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:27:41.462 08:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:27:41.462 08:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:27:41.462 08:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:27:41.462 08:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:41.462 08:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:27:41.462 08:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:41.462 08:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:27:41.462 08:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:41.462 08:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:27:41.462 08:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:41.462 08:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:41.462 rmmod nvme_tcp 00:27:41.462 rmmod nvme_fabrics 00:27:41.462 rmmod nvme_keyring 00:27:41.462 08:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:41.462 08:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:27:41.462 08:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:27:41.462 08:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:27:41.462 08:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:41.462 08:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:41.462 08:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:41.462 08:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:41.462 08:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:41.462 08:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:41.462 08:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:41.462 08:14:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:43.365 08:14:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:43.365 00:27:43.365 real 0m7.587s 00:27:43.365 user 0m18.550s 00:27:43.365 sys 0m1.533s 00:27:43.365 
08:14:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:43.365 08:14:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:27:43.365 ************************************ 00:27:43.365 END TEST nvmf_shutdown_tc3 00:27:43.365 ************************************ 00:27:43.365 08:14:34 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:27:43.365 08:14:34 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:27:43.365 00:27:43.365 real 0m27.371s 00:27:43.365 user 1m16.531s 00:27:43.365 sys 0m6.434s 00:27:43.365 08:14:34 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:43.365 08:14:34 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:43.365 ************************************ 00:27:43.365 END TEST nvmf_shutdown 00:27:43.365 ************************************ 00:27:43.365 08:14:34 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:27:43.365 08:14:34 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 00:27:43.365 08:14:34 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:43.365 08:14:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:43.365 08:14:34 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 00:27:43.365 08:14:34 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:43.365 08:14:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:43.365 08:14:34 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:27:43.365 08:14:34 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:27:43.365 08:14:34 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:43.365 08:14:34 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:43.365 08:14:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:43.365 ************************************ 00:27:43.365 START TEST nvmf_multicontroller 00:27:43.365 ************************************ 00:27:43.365 08:14:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:27:43.365 * Looking for test storage... 
00:27:43.365 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:43.365 08:14:35 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:43.365 08:14:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:27:43.365 08:14:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:43.365 08:14:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:43.365 08:14:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:43.365 08:14:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:43.365 08:14:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:43.365 08:14:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:43.365 08:14:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:43.365 08:14:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:43.365 08:14:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:43.366 08:14:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:43.366 08:14:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:43.366 08:14:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:43.366 08:14:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:43.366 08:14:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:43.366 08:14:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:43.366 08:14:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:43.366 08:14:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:43.366 08:14:35 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:43.366 08:14:35 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:43.366 08:14:35 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:43.366 08:14:35 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:43.366 08:14:35 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:[... the same toolchain directories (go, golangci, protoc) repeated as in the paths/export.sh@2 value above, ending with the standard system directories ...]:/var/lib/snapd/snap/bin 00:27:43.366 08:14:35 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[... likewise ...]:/var/lib/snapd/snap/bin 00:27:43.366 08:14:35 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:27:43.366 08:14:35 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[... same exported value ...]:/var/lib/snapd/snap/bin 00:27:43.366 08:14:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:27:43.366 08:14:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:43.366 08:14:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:43.366 08:14:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:43.366 08:14:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:43.366 08:14:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:43.366 08:14:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:43.366 08:14:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:43.366 08:14:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:43.366 08:14:35 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:43.366 08:14:35 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:43.366 08:14:35 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:27:43.366 08:14:35 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:27:43.366 08:14:35 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:43.366 08:14:35 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:27:43.366 08:14:35 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:27:43.366 08:14:35
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:43.366 08:14:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:43.366 08:14:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:43.366 08:14:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:43.366 08:14:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:43.366 08:14:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:43.366 08:14:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:43.366 08:14:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:43.366 08:14:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:43.366 08:14:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:43.366 08:14:35 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:27:43.366 08:14:35 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:45.903 08:14:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:45.903 08:14:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:27:45.903 08:14:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:45.903 08:14:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:45.903 08:14:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:45.903 08:14:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:45.903 08:14:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:45.903 08:14:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:27:45.903 08:14:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:45.903 08:14:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:27:45.903 08:14:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:27:45.903 08:14:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:27:45.903 08:14:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:27:45.903 08:14:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:27:45.903 08:14:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:27:45.903 08:14:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:45.903 08:14:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:45.903 08:14:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:45.903 08:14:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:45.903 08:14:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:45.903 08:14:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:45.903 08:14:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:45.903 08:14:37 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:45.903 08:14:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:45.903 08:14:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:45.903 08:14:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:45.903 08:14:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:45.903 08:14:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:45.903 08:14:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:45.903 08:14:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:45.903 08:14:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:45.903 08:14:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:45.903 08:14:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:45.903 08:14:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:45.903 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:45.903 08:14:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:45.903 08:14:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:45.903 08:14:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:45.903 08:14:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:45.903 08:14:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:45.903 08:14:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:45.903 08:14:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:45.903 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:45.903 08:14:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:45.903 08:14:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:45.903 08:14:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:45.903 08:14:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:45.903 08:14:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:45.903 08:14:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:45.903 08:14:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:45.903 08:14:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:45.903 08:14:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:45.903 08:14:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:45.903 08:14:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:45.904 08:14:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:45.904 08:14:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:45.904 08:14:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:27:45.904 08:14:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:45.904 08:14:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:45.904 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:45.904 08:14:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:45.904 08:14:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:45.904 08:14:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:45.904 08:14:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:45.904 08:14:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:45.904 08:14:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:45.904 08:14:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:45.904 08:14:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:45.904 08:14:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:45.904 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:45.904 08:14:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:45.904 08:14:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:45.904 08:14:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:27:45.904 08:14:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:45.904 08:14:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:45.904 08:14:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:45.904 08:14:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:45.904 08:14:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:45.904 08:14:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:45.904 08:14:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:45.904 08:14:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:45.904 08:14:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:45.904 08:14:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:45.904 08:14:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:45.904 08:14:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:45.904 08:14:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:45.904 08:14:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:45.904 08:14:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:45.904 08:14:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:45.904 08:14:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:45.904 08:14:37 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:45.904 08:14:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:45.904 08:14:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:45.904 08:14:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:45.904 08:14:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:45.904 08:14:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:45.904 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:45.904 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.188 ms 00:27:45.904 00:27:45.904 --- 10.0.0.2 ping statistics --- 00:27:45.904 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:45.904 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:27:45.904 08:14:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:45.904 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:45.904 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:27:45.904 00:27:45.904 --- 10.0.0.1 ping statistics --- 00:27:45.904 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:45.904 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:27:45.904 08:14:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:45.904 08:14:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:27:45.904 08:14:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:45.904 08:14:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:45.904 08:14:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:45.904 08:14:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:45.904 08:14:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:45.904 08:14:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:45.904 08:14:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:45.904 08:14:37 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:27:45.904 08:14:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:45.904 08:14:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:45.904 08:14:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:45.904 08:14:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=2044990 00:27:45.904 08:14:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:27:45.904 08:14:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 2044990 00:27:45.904 08:14:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 2044990 ']' 00:27:45.904 08:14:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:45.904 08:14:37 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:27:45.904 08:14:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:45.904 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:45.904 08:14:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:45.904 08:14:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:45.904 [2024-07-13 08:14:37.299805] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:27:45.904 [2024-07-13 08:14:37.299922] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:45.904 EAL: No free 2048 kB hugepages reported on node 1 00:27:45.904 [2024-07-13 08:14:37.365244] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:45.904 [2024-07-13 08:14:37.454758] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:45.904 [2024-07-13 08:14:37.454817] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:45.904 [2024-07-13 08:14:37.454844] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:45.904 [2024-07-13 08:14:37.454855] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:45.904 [2024-07-13 08:14:37.454871] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
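The namespace plumbing and target launch that produced the records above reduce to a short sequence. A minimal sketch of the equivalent manual bring-up (what nvmf/common.sh's nvmf_tcp_init and nvmfappstart did here), assuming the same cvl_0_0/cvl_0_1 net devices, 10.0.0.x addresses, and workspace path as this run:

# Target port lives in its own namespace; the initiator side stays in the default one.
ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # admit NVMe/TCP traffic

# Start the target inside the namespace: shm id 0, all trace groups, core mask 0xE.
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &

The three "Reactor started" notices that follow are the visible effect of -m 0xE (cores 1-3); the harness then blocks on /var/tmp/spdk.sock via waitforlisten before issuing any RPCs.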
00:27:45.904 [2024-07-13 08:14:37.454955] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:45.904 [2024-07-13 08:14:37.455021] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:45.904 [2024-07-13 08:14:37.455024] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:45.904 08:14:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:45.904 08:14:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:27:45.904 08:14:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:45.904 08:14:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:45.904 08:14:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:45.904 08:14:37 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:45.904 08:14:37 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:45.904 08:14:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.904 08:14:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:45.904 [2024-07-13 08:14:37.600627] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:45.904 08:14:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.904 08:14:37 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:45.904 08:14:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.904 08:14:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:46.164 Malloc0 00:27:46.164 08:14:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:46.164 08:14:37 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:46.164 08:14:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:46.164 08:14:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:46.164 08:14:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:46.164 08:14:37 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:46.164 08:14:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:46.164 08:14:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:46.164 08:14:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:46.164 08:14:37 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:46.164 08:14:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:46.164 08:14:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:46.164 [2024-07-13 08:14:37.670399] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:46.164 08:14:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:46.164 
08:14:37 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:46.164 08:14:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:46.164 08:14:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:46.164 [2024-07-13 08:14:37.678292] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:46.164 08:14:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:46.164 08:14:37 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:46.164 08:14:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:46.164 08:14:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:46.164 Malloc1 00:27:46.164 08:14:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:46.164 08:14:37 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:27:46.164 08:14:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:46.164 08:14:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:46.164 08:14:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:46.164 08:14:37 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:27:46.164 08:14:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:46.164 08:14:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:46.164 08:14:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:46.164 08:14:37 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:27:46.164 08:14:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:46.164 08:14:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:46.164 08:14:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:46.164 08:14:37 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:27:46.164 08:14:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:46.164 08:14:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:46.164 08:14:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:46.164 08:14:37 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=2045040 00:27:46.164 08:14:37 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:46.164 08:14:37 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 2045040 /var/tmp/bdevperf.sock 00:27:46.164 08:14:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 2045040 ']' 00:27:46.164 08:14:37 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:46.164 08:14:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:46.164 08:14:37 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:27:46.164 08:14:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:46.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:46.164 08:14:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:46.164 08:14:37 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:46.422 08:14:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:46.422 08:14:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:27:46.422 08:14:38 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:27:46.422 08:14:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:46.422 08:14:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:46.682 NVMe0n1 00:27:46.682 08:14:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:46.682 08:14:38 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:46.682 08:14:38 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:27:46.682 08:14:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:46.682 08:14:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:46.682 08:14:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:46.682 1 00:27:46.682 08:14:38 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:27:46.682 08:14:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:27:46.682 08:14:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:27:46.682 08:14:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:46.682 08:14:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:46.682 08:14:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:46.682 08:14:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:46.682 08:14:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 
-t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:27:46.682 08:14:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:46.682 08:14:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:46.682 request: 00:27:46.682 { 00:27:46.682 "name": "NVMe0", 00:27:46.682 "trtype": "tcp", 00:27:46.682 "traddr": "10.0.0.2", 00:27:46.682 "adrfam": "ipv4", 00:27:46.682 "trsvcid": "4420", 00:27:46.682 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:46.682 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:27:46.682 "hostaddr": "10.0.0.2", 00:27:46.682 "hostsvcid": "60000", 00:27:46.682 "prchk_reftag": false, 00:27:46.682 "prchk_guard": false, 00:27:46.682 "hdgst": false, 00:27:46.682 "ddgst": false, 00:27:46.682 "method": "bdev_nvme_attach_controller", 00:27:46.682 "req_id": 1 00:27:46.682 } 00:27:46.682 Got JSON-RPC error response 00:27:46.682 response: 00:27:46.682 { 00:27:46.682 "code": -114, 00:27:46.682 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:27:46.682 } 00:27:46.682 08:14:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:46.682 08:14:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:27:46.682 08:14:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:46.682 08:14:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:46.682 08:14:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:46.682 08:14:38 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:27:46.682 08:14:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:27:46.682 08:14:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:27:46.682 08:14:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:46.682 08:14:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:46.682 08:14:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:46.682 08:14:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:46.682 08:14:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:27:46.682 08:14:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:46.682 08:14:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:46.682 request: 00:27:46.682 { 00:27:46.682 "name": "NVMe0", 00:27:46.682 "trtype": "tcp", 00:27:46.682 "traddr": "10.0.0.2", 00:27:46.682 "adrfam": "ipv4", 00:27:46.682 "trsvcid": "4420", 00:27:46.682 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:46.682 "hostaddr": "10.0.0.2", 00:27:46.682 "hostsvcid": "60000", 00:27:46.682 "prchk_reftag": false, 00:27:46.682 "prchk_guard": false, 
00:27:46.682 "hdgst": false, 00:27:46.682 "ddgst": false, 00:27:46.682 "method": "bdev_nvme_attach_controller", 00:27:46.682 "req_id": 1 00:27:46.682 } 00:27:46.682 Got JSON-RPC error response 00:27:46.682 response: 00:27:46.682 { 00:27:46.682 "code": -114, 00:27:46.682 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:27:46.682 } 00:27:46.682 08:14:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:46.682 08:14:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:27:46.682 08:14:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:46.682 08:14:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:46.682 08:14:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:46.682 08:14:38 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:27:46.682 08:14:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:27:46.682 08:14:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:27:46.682 08:14:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:46.682 08:14:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:46.682 08:14:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:46.682 08:14:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:46.682 08:14:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:27:46.682 08:14:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:46.682 08:14:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:46.682 request: 00:27:46.682 { 00:27:46.682 "name": "NVMe0", 00:27:46.682 "trtype": "tcp", 00:27:46.682 "traddr": "10.0.0.2", 00:27:46.682 "adrfam": "ipv4", 00:27:46.682 "trsvcid": "4420", 00:27:46.682 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:46.682 "hostaddr": "10.0.0.2", 00:27:46.682 "hostsvcid": "60000", 00:27:46.682 "prchk_reftag": false, 00:27:46.682 "prchk_guard": false, 00:27:46.682 "hdgst": false, 00:27:46.682 "ddgst": false, 00:27:46.682 "multipath": "disable", 00:27:46.682 "method": "bdev_nvme_attach_controller", 00:27:46.682 "req_id": 1 00:27:46.682 } 00:27:46.682 Got JSON-RPC error response 00:27:46.682 response: 00:27:46.682 { 00:27:46.682 "code": -114, 00:27:46.682 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:27:46.682 } 00:27:46.682 08:14:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:46.682 08:14:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:27:46.682 08:14:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:46.682 08:14:38 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:46.682 08:14:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:46.682 08:14:38 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:27:46.682 08:14:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:27:46.682 08:14:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:27:46.682 08:14:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:27:46.682 08:14:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:46.682 08:14:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:27:46.682 08:14:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:46.682 08:14:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:27:46.682 08:14:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:46.682 08:14:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:46.682 request: 00:27:46.682 { 00:27:46.682 "name": "NVMe0", 00:27:46.682 "trtype": "tcp", 00:27:46.682 "traddr": "10.0.0.2", 00:27:46.682 "adrfam": "ipv4", 00:27:46.682 "trsvcid": "4420", 00:27:46.682 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:46.682 "hostaddr": "10.0.0.2", 00:27:46.682 "hostsvcid": "60000", 00:27:46.682 "prchk_reftag": false, 00:27:46.682 "prchk_guard": false, 00:27:46.682 "hdgst": false, 00:27:46.682 "ddgst": false, 00:27:46.682 "multipath": "failover", 00:27:46.682 "method": "bdev_nvme_attach_controller", 00:27:46.682 "req_id": 1 00:27:46.682 } 00:27:46.682 Got JSON-RPC error response 00:27:46.682 response: 00:27:46.682 { 00:27:46.682 "code": -114, 00:27:46.682 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:27:46.682 } 00:27:46.682 08:14:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:27:46.682 08:14:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:27:46.682 08:14:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:46.683 08:14:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:46.683 08:14:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:46.683 08:14:38 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:46.683 08:14:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:46.683 08:14:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:46.943 00:27:46.943 08:14:38 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:46.943 08:14:38 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:46.943 08:14:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:46.943 08:14:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:46.943 08:14:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:46.943 08:14:38 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:27:46.943 08:14:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:46.943 08:14:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:46.943 00:27:46.943 08:14:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:46.943 08:14:38 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:46.943 08:14:38 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:27:46.943 08:14:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:46.943 08:14:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:46.943 08:14:38 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:46.943 08:14:38 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:27:46.943 08:14:38 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:27:48.322 0 00:27:48.323 08:14:39 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:27:48.323 08:14:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.323 08:14:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:48.323 08:14:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.323 08:14:39 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 2045040 00:27:48.323 08:14:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 2045040 ']' 00:27:48.323 08:14:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 2045040 00:27:48.323 08:14:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:27:48.323 08:14:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:48.323 08:14:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2045040 00:27:48.323 08:14:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:48.323 08:14:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:48.323 08:14:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2045040' 00:27:48.323 killing process with pid 2045040 00:27:48.323 08:14:39 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 2045040 00:27:48.323 08:14:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 2045040 00:27:48.323 08:14:39 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:48.323 08:14:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.323 08:14:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:48.323 08:14:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.323 08:14:39 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:27:48.323 08:14:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:48.323 08:14:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:48.323 08:14:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:48.323 08:14:39 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:27:48.323 08:14:39 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:48.323 08:14:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:27:48.323 08:14:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:27:48.323 08:14:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:27:48.323 08:14:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:27:48.323 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:27:48.323 [2024-07-13 08:14:37.783623] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:27:48.323 [2024-07-13 08:14:37.783725] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2045040 ] 00:27:48.323 EAL: No free 2048 kB hugepages reported on node 1 00:27:48.323 [2024-07-13 08:14:37.845538] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:48.323 [2024-07-13 08:14:37.933569] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:48.323 [2024-07-13 08:14:38.590919] bdev.c:4613:bdev_name_add: *ERROR*: Bdev name 3cbf6078-e776-4929-859d-5a36efbd607c already exists 00:27:48.323 [2024-07-13 08:14:38.590959] bdev.c:7722:bdev_register: *ERROR*: Unable to add uuid:3cbf6078-e776-4929-859d-5a36efbd607c alias for bdev NVMe1n1 00:27:48.323 [2024-07-13 08:14:38.590983] bdev_nvme.c:4317:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:27:48.323 Running I/O for 1 seconds... 
00:27:48.323
00:27:48.323 Latency(us)
00:27:48.323 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:48.323 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096)
00:27:48.323 NVMe0n1 : 1.01 17095.96 66.78 0.00 0.00 7454.80 6990.51 16990.81
00:27:48.323 ===================================================================================================================
00:27:48.323 Total : 17095.96 66.78 0.00 0.00 7454.80 6990.51 16990.81
00:27:48.323 Received shutdown signal, test time was about 1.000000 seconds
00:27:48.323
00:27:48.323 Latency(us)
00:27:48.323 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:48.323 ===================================================================================================================
00:27:48.323 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:27:48.323 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:27:48.323 08:14:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 08:14:39 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 08:14:39 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 08:14:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 08:14:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 08:14:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 08:14:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 08:14:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 08:14:39 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:48.323 rmmod nvme_tcp 00:27:48.323 rmmod nvme_fabrics 00:27:48.323 rmmod nvme_keyring 00:27:48.323 08:14:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 08:14:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 08:14:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 08:14:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 2044990 ']' 08:14:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 2044990 08:14:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 2044990 ']' 08:14:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 2044990 08:14:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 08:14:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 08:14:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2044990 00:27:48.581 08:14:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_1 08:14:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 08:14:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2044990' killing process with pid 2044990 08:14:40
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 2044990 00:27:48.838 08:14:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:48.838 08:14:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:48.838 08:14:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:48.839 08:14:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:48.839 08:14:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:48.839 08:14:40 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:48.839 08:14:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:48.839 08:14:40 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:50.744 08:14:42 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:50.744 00:27:50.744 real 0m7.366s 00:27:50.744 user 0m11.517s 00:27:50.744 sys 0m2.270s 00:27:50.744 08:14:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:50.744 08:14:42 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:50.744 ************************************ 00:27:50.744 END TEST nvmf_multicontroller 00:27:50.744 ************************************ 00:27:50.744 08:14:42 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:27:50.744 08:14:42 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:27:50.744 08:14:42 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:50.744 08:14:42 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:50.744 08:14:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:50.744 ************************************ 00:27:50.744 START TEST nvmf_aer 00:27:50.744 ************************************ 00:27:50.744 08:14:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:27:50.744 * Looking for test storage... 
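Before the aer prologue below repeats the same bring-up, the multipath checks the multicontroller test just completed can be replayed by hand against a bdevperf started, as it was here, with -z -r /var/tmp/bdevperf.sock. A minimal sketch using scripts/rpc.py from the same tree; every flag below is taken from the attach calls logged above, and the duplicate attach is expected to fail with JSON-RPC error -114:

RPC='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock'

# First path attaches cleanly and exposes NVMe0n1 to bdevperf.
$RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000

# Reusing the name NVMe0 with a different hostnqn (or a different subsystem, or
# with -x disable) is rejected with code -114, matching the responses above.
$RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 \
    -q nqn.2021-09-7.io.spdk:00001 || echo 'rejected as expected'

# The 4421 listener is a legitimate second path for the same controller name;
# after detaching it, a second controller can claim the listener outright.
$RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
$RPC bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
$RPC bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
$RPC bdev_nvme_get_controllers | grep -c NVMe    # the test asserts this prints 2

The final grep -c NVMe count is the same assertion made at host/multicontroller.sh@90 above.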
00:27:50.744 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:50.744 08:14:42 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:50.744 08:14:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:27:50.744 08:14:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:50.744 08:14:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:50.744 08:14:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:50.744 08:14:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:50.744 08:14:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:50.744 08:14:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:50.744 08:14:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:50.744 08:14:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:50.744 08:14:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:50.744 08:14:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:50.744 08:14:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:50.744 08:14:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:50.744 08:14:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:50.744 08:14:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:50.744 08:14:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:50.744 08:14:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:50.744 08:14:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:50.744 08:14:42 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:50.744 08:14:42 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:50.744 08:14:42 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:50.744 08:14:42 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:50.744 08:14:42 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:50.744 08:14:42 nvmf_tcp.nvmf_aer -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:50.744 08:14:42 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:27:50.744 08:14:42 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:50.744 08:14:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:27:50.744 08:14:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:50.744 08:14:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:50.744 08:14:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:50.744 08:14:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:50.744 08:14:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:50.744 08:14:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:50.744 08:14:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:50.744 08:14:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:51.003 08:14:42 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:27:51.003 08:14:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:51.003 08:14:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:51.003 08:14:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:51.003 08:14:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:51.003 08:14:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:51.003 08:14:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:51.003 08:14:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:51.003 08:14:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:51.003 08:14:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:51.003 08:14:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:51.003 08:14:42 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:27:51.003 08:14:42 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:52.904 08:14:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:52.904 08:14:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:27:52.904 08:14:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:52.904 08:14:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:27:52.905 08:14:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:52.905 08:14:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:52.905 08:14:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:52.905 08:14:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:27:52.905 08:14:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:52.905 08:14:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:27:52.905 08:14:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:27:52.905 08:14:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:27:52.905 08:14:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:27:52.905 08:14:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:27:52.905 08:14:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:27:52.905 08:14:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:52.905 08:14:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:52.905 08:14:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:52.905 08:14:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:52.905 08:14:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:52.905 08:14:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:52.905 08:14:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:52.905 08:14:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:52.905 08:14:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:52.905 08:14:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:52.905 08:14:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:52.905 08:14:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:52.905 08:14:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:52.905 08:14:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:52.905 08:14:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:52.905 08:14:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:52.905 08:14:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:52.905 08:14:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:52.905 08:14:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:52.905 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:52.905 08:14:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:52.905 08:14:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:52.905 08:14:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:52.905 08:14:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:52.905 08:14:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:52.905 08:14:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:52.905 08:14:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 
0x159b)' 00:27:52.905 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:52.905 08:14:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:52.905 08:14:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:52.905 08:14:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:52.905 08:14:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:52.905 08:14:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:52.905 08:14:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:52.905 08:14:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:52.905 08:14:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:52.905 08:14:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:52.905 08:14:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:52.905 08:14:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:52.905 08:14:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:52.905 08:14:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:52.905 08:14:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:52.905 08:14:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:52.905 08:14:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:52.905 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:52.905 08:14:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:52.905 08:14:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:52.905 08:14:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:52.905 08:14:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:52.905 08:14:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:52.905 08:14:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:52.905 08:14:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:52.905 08:14:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:52.905 08:14:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:52.905 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:52.905 08:14:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:52.905 08:14:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:52.905 08:14:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:27:52.905 08:14:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:52.905 08:14:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:52.905 08:14:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:52.905 08:14:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:52.905 08:14:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:52.905 08:14:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:52.905 08:14:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:52.905 08:14:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:52.905 
08:14:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:52.905 08:14:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:52.905 08:14:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:52.905 08:14:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:52.905 08:14:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:52.905 08:14:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:52.905 08:14:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:52.905 08:14:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:52.905 08:14:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:52.905 08:14:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:52.905 08:14:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:52.905 08:14:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:53.165 08:14:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:53.165 08:14:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:53.165 08:14:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:53.165 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:53.165 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.193 ms 00:27:53.165 00:27:53.165 --- 10.0.0.2 ping statistics --- 00:27:53.165 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:53.165 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:27:53.165 08:14:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:53.165 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
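The nvmf_tcp_init trace above splits the dual-port E810 NIC between two network stacks so the NVMe/TCP target and initiator talk over a real link: port cvl_0_0 is moved into a private namespace and becomes the target side (10.0.0.2), while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1). A condensed sketch of the equivalent commands, using the interface and namespace names this rig reports:

  # Target port goes into its own namespace; initiator port stays in the root ns.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Open the NVMe/TCP port on the initiator side, then verify reachability.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2

The two ping checks in the trace confirm both directions before the target application is started.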
00:27:53.165 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:27:53.165 00:27:53.165 --- 10.0.0.1 ping statistics --- 00:27:53.165 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:53.165 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:27:53.165 08:14:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:53.165 08:14:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:27:53.165 08:14:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:53.165 08:14:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:53.165 08:14:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:53.165 08:14:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:53.165 08:14:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:53.165 08:14:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:53.165 08:14:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:53.165 08:14:44 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:27:53.165 08:14:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:53.165 08:14:44 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:53.165 08:14:44 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:53.165 08:14:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=2047252 00:27:53.165 08:14:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:53.165 08:14:44 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 2047252 00:27:53.165 08:14:44 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@829 -- # '[' -z 2047252 ']' 00:27:53.165 08:14:44 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:53.165 08:14:44 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:53.165 08:14:44 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:53.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:53.166 08:14:44 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:53.166 08:14:44 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:53.166 [2024-07-13 08:14:44.742099] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:27:53.166 [2024-07-13 08:14:44.742191] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:53.166 EAL: No free 2048 kB hugepages reported on node 1 00:27:53.166 [2024-07-13 08:14:44.810501] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:53.424 [2024-07-13 08:14:44.900535] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:53.424 [2024-07-13 08:14:44.900597] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:27:53.424 [2024-07-13 08:14:44.900611] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:53.424 [2024-07-13 08:14:44.900626] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:53.424 [2024-07-13 08:14:44.900636] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:53.424 [2024-07-13 08:14:44.900707] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:53.424 [2024-07-13 08:14:44.903889] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:53.424 [2024-07-13 08:14:44.903958] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:53.424 [2024-07-13 08:14:44.903962] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:53.424 08:14:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:53.424 08:14:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@862 -- # return 0 00:27:53.424 08:14:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:53.424 08:14:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:53.424 08:14:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:53.424 08:14:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:53.424 08:14:45 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:53.424 08:14:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.424 08:14:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:53.424 [2024-07-13 08:14:45.060782] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:53.424 08:14:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.424 08:14:45 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:27:53.424 08:14:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.424 08:14:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:53.424 Malloc0 00:27:53.424 08:14:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.424 08:14:45 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:27:53.424 08:14:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.424 08:14:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:53.424 08:14:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.424 08:14:45 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:53.424 08:14:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.424 08:14:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:53.424 08:14:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.424 08:14:45 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:53.424 08:14:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.424 08:14:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:53.424 [2024-07-13 08:14:45.114213] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:27:53.424 08:14:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.424 08:14:45 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:27:53.424 08:14:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.424 08:14:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:53.424 [ 00:27:53.424 { 00:27:53.424 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:53.424 "subtype": "Discovery", 00:27:53.424 "listen_addresses": [], 00:27:53.424 "allow_any_host": true, 00:27:53.424 "hosts": [] 00:27:53.424 }, 00:27:53.424 { 00:27:53.424 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:53.424 "subtype": "NVMe", 00:27:53.424 "listen_addresses": [ 00:27:53.424 { 00:27:53.424 "trtype": "TCP", 00:27:53.424 "adrfam": "IPv4", 00:27:53.424 "traddr": "10.0.0.2", 00:27:53.424 "trsvcid": "4420" 00:27:53.424 } 00:27:53.424 ], 00:27:53.424 "allow_any_host": true, 00:27:53.424 "hosts": [], 00:27:53.424 "serial_number": "SPDK00000000000001", 00:27:53.424 "model_number": "SPDK bdev Controller", 00:27:53.424 "max_namespaces": 2, 00:27:53.424 "min_cntlid": 1, 00:27:53.424 "max_cntlid": 65519, 00:27:53.424 "namespaces": [ 00:27:53.424 { 00:27:53.424 "nsid": 1, 00:27:53.424 "bdev_name": "Malloc0", 00:27:53.424 "name": "Malloc0", 00:27:53.424 "nguid": "4475A11C96C8419BAFC6F77B9BF0D963", 00:27:53.424 "uuid": "4475a11c-96c8-419b-afc6-f77b9bf0d963" 00:27:53.424 } 00:27:53.424 ] 00:27:53.424 } 00:27:53.424 ] 00:27:53.424 08:14:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.424 08:14:45 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:27:53.424 08:14:45 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:27:53.425 08:14:45 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=2047289 00:27:53.425 08:14:45 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:27:53.425 08:14:45 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:27:53.425 08:14:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:27:53.425 08:14:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:27:53.425 08:14:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:27:53.425 08:14:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:27:53.425 08:14:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:27:53.684 EAL: No free 2048 kB hugepages reported on node 1 00:27:53.684 08:14:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:27:53.684 08:14:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:27:53.684 08:14:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:27:53.684 08:14:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:27:53.684 08:14:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:27:53.684 08:14:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 2 -lt 200 ']' 00:27:53.684 08:14:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=3 00:27:53.684 08:14:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:27:53.943 08:14:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:27:53.943 08:14:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:27:53.943 08:14:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:27:53.943 08:14:45 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:27:53.943 08:14:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.943 08:14:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:53.943 Malloc1 00:27:53.943 08:14:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.943 08:14:45 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:27:53.943 08:14:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.943 08:14:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:53.943 08:14:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.943 08:14:45 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:27:53.943 08:14:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.943 08:14:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:53.943 [ 00:27:53.943 { 00:27:53.943 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:27:53.943 "subtype": "Discovery", 00:27:53.943 "listen_addresses": [], 00:27:53.943 "allow_any_host": true, 00:27:53.943 "hosts": [] 00:27:53.943 }, 00:27:53.943 { 00:27:53.943 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:27:53.943 "subtype": "NVMe", 00:27:53.943 "listen_addresses": [ 00:27:53.943 { 00:27:53.943 "trtype": "TCP", 00:27:53.943 "adrfam": "IPv4", 00:27:53.943 "traddr": "10.0.0.2", 00:27:53.943 "trsvcid": "4420" 00:27:53.943 } 00:27:53.943 ], 00:27:53.943 "allow_any_host": true, 00:27:53.943 "hosts": [], 00:27:53.943 "serial_number": "SPDK00000000000001", 00:27:53.943 "model_number": "SPDK bdev Controller", 00:27:53.943 "max_namespaces": 2, 00:27:53.943 "min_cntlid": 1, 00:27:53.943 "max_cntlid": 65519, 00:27:53.943 "namespaces": [ 00:27:53.943 { 00:27:53.943 "nsid": 1, 00:27:53.943 "bdev_name": "Malloc0", 00:27:53.943 "name": "Malloc0", 00:27:53.943 "nguid": "4475A11C96C8419BAFC6F77B9BF0D963", 00:27:53.943 "uuid": "4475a11c-96c8-419b-afc6-f77b9bf0d963" 00:27:53.943 }, 00:27:53.943 { 00:27:53.943 "nsid": 2, 00:27:53.943 "bdev_name": "Malloc1", 00:27:53.943 "name": "Malloc1", 00:27:53.943 "nguid": "CED06199C74549B5873FE7D1935E8AEC", 00:27:53.943 "uuid": "ced06199-c745-49b5-873f-e7d1935e8aec" 00:27:53.943 } 00:27:53.943 ] 00:27:53.943 } 00:27:53.943 ] 00:27:53.943 08:14:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.943 08:14:45 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 2047289 00:27:53.943 Asynchronous Event Request test 00:27:53.943 Attaching to 10.0.0.2 00:27:53.943 Attached to 10.0.0.2 00:27:53.943 Registering asynchronous event callbacks... 00:27:53.943 Starting namespace attribute notice tests for all controllers... 
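The repeated '[' '!' -e /tmp/aer_touch_file ']' / sleep 0.1 probes above are a file-based handshake: aer.sh launches the aer binary in the background (aerpid=2047289) and spins in waitforfile until the binary touches /tmp/aer_touch_file to signal that its asynchronous-event callbacks are registered; only then does the script hot-add Malloc1 as namespace 2, which fires the Namespace Changed event the test waits for. A hedged reconstruction of the polling helper (the real one lives in autotest_common.sh; names and structure here are illustrative):

  waitforfile() {
      local file=$1 i=0
      # ~20 s budget at 0.1 s per probe, matching the i -lt 200 checks traced above
      while [ ! -e "$file" ] && [ "$i" -lt 200 ]; do
          i=$((i + 1))
          sleep 0.1
      done
      [ -e "$file" ]   # non-zero exit if the file never appeared
  }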
00:27:53.943 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:27:53.943 aer_cb - Changed Namespace 00:27:53.943 Cleaning up... 00:27:53.943 08:14:45 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:27:53.943 08:14:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.943 08:14:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:53.943 08:14:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.943 08:14:45 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:27:53.943 08:14:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.943 08:14:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:53.943 08:14:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.943 08:14:45 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:53.943 08:14:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:53.943 08:14:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:53.943 08:14:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:53.943 08:14:45 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:27:53.943 08:14:45 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:27:53.943 08:14:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:53.943 08:14:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:27:53.943 08:14:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:53.943 08:14:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:27:53.943 08:14:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:53.943 08:14:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:53.943 rmmod nvme_tcp 00:27:53.943 rmmod nvme_fabrics 00:27:53.943 rmmod nvme_keyring 00:27:53.943 08:14:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:53.943 08:14:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:27:53.943 08:14:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:27:53.943 08:14:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 2047252 ']' 00:27:53.943 08:14:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 2047252 00:27:53.943 08:14:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@948 -- # '[' -z 2047252 ']' 00:27:53.943 08:14:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # kill -0 2047252 00:27:53.943 08:14:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # uname 00:27:53.943 08:14:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:53.943 08:14:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2047252 00:27:53.943 08:14:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:53.943 08:14:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:53.943 08:14:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2047252' 00:27:53.943 killing process with pid 2047252 00:27:53.943 08:14:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@967 -- # kill 2047252 00:27:53.943 08:14:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@972 -- # wait 2047252 00:27:54.201 08:14:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == 
iso ']' 00:27:54.201 08:14:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:54.201 08:14:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:54.201 08:14:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:54.201 08:14:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:54.201 08:14:45 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:54.201 08:14:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:54.201 08:14:45 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:56.740 08:14:47 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:56.740 00:27:56.740 real 0m5.509s 00:27:56.740 user 0m4.492s 00:27:56.740 sys 0m2.025s 00:27:56.740 08:14:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:56.740 08:14:47 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:27:56.740 ************************************ 00:27:56.740 END TEST nvmf_aer 00:27:56.740 ************************************ 00:27:56.740 08:14:47 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:27:56.740 08:14:47 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:27:56.740 08:14:47 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:56.740 08:14:47 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:56.740 08:14:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:56.740 ************************************ 00:27:56.740 START TEST nvmf_async_init 00:27:56.740 ************************************ 00:27:56.740 08:14:47 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:27:56.740 * Looking for test storage... 
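The real/user/sys timing and the starred banners above come from run_test, the wrapper autotest uses to fence and time each sub-test; here it closes nvmf_aer at roughly 5.5 s wall time and immediately launches host/async_init.sh with the same --transport=tcp argument. An illustrative sketch of what the wrapper does (the actual helper lives in autotest_common.sh, so the details here are assumed):

  run_test() {
      local name=$1; shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@"          # e.g. test/nvmf/host/async_init.sh --transport=tcp
      local rc=$?
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
      return $rc
  }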
00:27:56.740 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:56.740 08:14:48 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:56.740 08:14:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:27:56.740 08:14:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:56.740 08:14:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:56.740 08:14:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:56.740 08:14:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:56.740 08:14:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:56.740 08:14:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:56.740 08:14:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:56.740 08:14:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:56.740 08:14:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:56.740 08:14:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:56.740 08:14:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:56.740 08:14:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:56.740 08:14:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:56.740 08:14:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:56.740 08:14:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:56.740 08:14:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:56.740 08:14:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:56.740 08:14:48 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:56.740 08:14:48 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:56.740 08:14:48 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:56.740 08:14:48 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:56.740 08:14:48 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:56.741 08:14:48 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:56.741 08:14:48 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:27:56.741 08:14:48 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:56.741 08:14:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:27:56.741 08:14:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:56.741 08:14:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:56.741 08:14:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:56.741 08:14:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:56.741 08:14:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:56.741 08:14:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:56.741 08:14:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:56.741 08:14:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:56.741 08:14:48 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:27:56.741 08:14:48 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:27:56.741 08:14:48 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:27:56.741 08:14:48 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:27:56.741 08:14:48 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:27:56.741 08:14:48 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:27:56.741 08:14:48 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=a195c4dbb83e4dc1befca7b3cb97b163 00:27:56.741 08:14:48 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:27:56.741 08:14:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:56.741 08:14:48 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:56.741 08:14:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:56.741 08:14:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:56.741 08:14:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:56.741 08:14:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:56.741 08:14:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:56.741 08:14:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:56.741 08:14:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:56.741 08:14:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:56.741 08:14:48 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:27:56.741 08:14:48 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:58.647 08:14:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:58.647 08:14:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:27:58.647 08:14:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:58.647 08:14:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:58.647 08:14:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:58.647 08:14:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:58.647 08:14:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:58.647 08:14:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:27:58.647 08:14:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:58.647 08:14:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:27:58.647 08:14:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:27:58.647 08:14:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:27:58.647 08:14:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:27:58.647 08:14:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:27:58.647 08:14:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:27:58.647 08:14:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:58.647 08:14:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:58.647 08:14:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:58.647 08:14:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:58.647 08:14:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:58.647 08:14:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:58.647 08:14:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:58.647 08:14:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:58.647 08:14:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:58.647 08:14:49 nvmf_tcp.nvmf_async_init -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:58.647 08:14:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:58.647 08:14:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:58.647 08:14:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:58.647 08:14:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:58.647 08:14:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:58.647 08:14:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:58.647 08:14:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:58.647 08:14:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:58.647 08:14:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:58.647 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:58.647 08:14:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:58.647 08:14:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:58.647 08:14:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:58.647 08:14:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:58.647 08:14:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:58.647 08:14:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:58.647 08:14:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:58.647 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:58.647 08:14:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:58.647 08:14:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:58.647 08:14:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:58.647 08:14:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:58.647 08:14:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:58.647 08:14:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:58.647 08:14:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:58.647 08:14:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:58.647 08:14:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:58.647 08:14:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:58.647 08:14:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:58.647 08:14:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:58.647 08:14:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:58.647 08:14:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:58.647 08:14:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:58.647 08:14:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:58.647 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:58.647 08:14:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 
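As in the aer run, gather_supported_nvmf_pci_devs classifies the machine's NICs by PCI vendor:device ID before an interface pair is chosen; 0x8086:0x159b is the dual-port E810 that the ice driver exposes here as cvl_0_0/cvl_0_1. A sketch of the selection logic being traced (pci_bus_cache is assumed to map "vendor:device" to a list of PCI addresses, as common.sh builds it):

  intel=0x8086 mellanox=0x15b3
  e810+=(${pci_bus_cache["$intel:0x1592"]})    # E810 QSFP variant
  e810+=(${pci_bus_cache["$intel:0x159b"]})    # E810 SFP variant: 0000:0a:00.0/.1 on this rig
  x722+=(${pci_bus_cache["$intel:0x37d2"]})
  mlx+=(${pci_bus_cache["$mellanox:0x1017"]})  # ConnectX-5 family, among others
  pci_devs=("${e810[@]}")                      # the TCP runs prefer the e810 ports
  for pci in "${pci_devs[@]}"; do
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)  # netdevs behind each port
      net_devs+=("${pci_net_devs[@]##*/}")              # basename only: cvl_0_0, cvl_0_1
  done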
00:27:58.647 08:14:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:58.647 08:14:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:58.648 08:14:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:58.648 08:14:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:58.648 08:14:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:58.648 08:14:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:58.648 08:14:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:58.648 08:14:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:58.648 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:58.648 08:14:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:58.648 08:14:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:58.648 08:14:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:27:58.648 08:14:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:58.648 08:14:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:58.648 08:14:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:58.648 08:14:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:58.648 08:14:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:58.648 08:14:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:58.648 08:14:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:58.648 08:14:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:58.648 08:14:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:58.648 08:14:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:58.648 08:14:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:58.648 08:14:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:58.648 08:14:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:58.648 08:14:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:58.648 08:14:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:58.648 08:14:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:58.648 08:14:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:58.648 08:14:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:58.648 08:14:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:58.648 08:14:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:58.648 08:14:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:58.648 08:14:49 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:27:58.648 08:14:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:58.648 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:58.648 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.172 ms 00:27:58.648 00:27:58.648 --- 10.0.0.2 ping statistics --- 00:27:58.648 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:58.648 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:27:58.648 08:14:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:58.648 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:58.648 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:27:58.648 00:27:58.648 --- 10.0.0.1 ping statistics --- 00:27:58.648 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:58.648 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:27:58.648 08:14:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:58.648 08:14:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:27:58.648 08:14:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:58.648 08:14:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:58.648 08:14:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:58.648 08:14:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:58.648 08:14:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:58.648 08:14:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:58.648 08:14:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:58.648 08:14:50 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:27:58.648 08:14:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:58.648 08:14:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:58.648 08:14:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:58.648 08:14:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=2049337 00:27:58.648 08:14:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:27:58.648 08:14:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 2049337 00:27:58.648 08:14:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@829 -- # '[' -z 2049337 ']' 00:27:58.648 08:14:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:58.648 08:14:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:58.648 08:14:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:58.648 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:58.648 08:14:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:58.648 08:14:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:58.648 [2024-07-13 08:14:50.085061] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
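nvmfappstart then boots the target for the async_init test; note the core mask is 0x1 (a single reactor) where the aer test used 0xF. The launch, condensed from the trace above:

  # The target runs inside the namespace that now owns cvl_0_0.
  ip netns exec cvl_0_0_ns_spdk \
      build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &   # -i shm id, -e tracepoint mask, -m core mask
  nvmfpid=$!                 # 2049337 in this run
  waitforlisten "$nvmfpid"   # polls until /var/tmp/spdk.sock accepts RPCs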
00:27:58.648 [2024-07-13 08:14:50.085138] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:58.648 EAL: No free 2048 kB hugepages reported on node 1 00:27:58.648 [2024-07-13 08:14:50.146503] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:58.648 [2024-07-13 08:14:50.228528] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:58.648 [2024-07-13 08:14:50.228599] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:58.648 [2024-07-13 08:14:50.228623] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:58.648 [2024-07-13 08:14:50.228634] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:58.648 [2024-07-13 08:14:50.228644] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:58.648 [2024-07-13 08:14:50.228671] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:58.648 08:14:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:58.648 08:14:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@862 -- # return 0 00:27:58.648 08:14:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:58.648 08:14:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:58.648 08:14:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:58.648 08:14:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:58.648 08:14:50 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:27:58.648 08:14:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:58.648 08:14:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:58.648 [2024-07-13 08:14:50.357447] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:58.648 08:14:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:58.648 08:14:50 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:27:58.648 08:14:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:58.648 08:14:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:58.648 null0 00:27:58.648 08:14:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:58.648 08:14:50 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:27:58.648 08:14:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:58.648 08:14:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:58.648 08:14:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:58.648 08:14:50 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:27:58.648 08:14:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:58.648 08:14:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:58.908 08:14:50 
nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:58.908 08:14:50 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g a195c4dbb83e4dc1befca7b3cb97b163 00:27:58.908 08:14:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:58.908 08:14:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:58.908 08:14:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:58.908 08:14:50 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:58.908 08:14:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:58.908 08:14:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:58.908 [2024-07-13 08:14:50.397703] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:58.908 08:14:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:58.908 08:14:50 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:27:58.908 08:14:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:58.908 08:14:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:58.908 nvme0n1 00:27:58.908 08:14:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:58.908 08:14:50 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:27:58.908 08:14:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:58.908 08:14:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:58.908 [ 00:27:58.908 { 00:27:58.908 "name": "nvme0n1", 00:27:58.908 "aliases": [ 00:27:58.908 "a195c4db-b83e-4dc1-befc-a7b3cb97b163" 00:27:58.908 ], 00:27:58.908 "product_name": "NVMe disk", 00:27:58.908 "block_size": 512, 00:27:58.908 "num_blocks": 2097152, 00:27:58.908 "uuid": "a195c4db-b83e-4dc1-befc-a7b3cb97b163", 00:27:58.908 "assigned_rate_limits": { 00:27:58.909 "rw_ios_per_sec": 0, 00:27:58.909 "rw_mbytes_per_sec": 0, 00:27:58.909 "r_mbytes_per_sec": 0, 00:27:58.909 "w_mbytes_per_sec": 0 00:27:58.909 }, 00:27:58.909 "claimed": false, 00:27:58.909 "zoned": false, 00:27:58.909 "supported_io_types": { 00:27:58.909 "read": true, 00:27:58.909 "write": true, 00:27:59.168 "unmap": false, 00:27:59.168 "flush": true, 00:27:59.168 "reset": true, 00:27:59.168 "nvme_admin": true, 00:27:59.168 "nvme_io": true, 00:27:59.168 "nvme_io_md": false, 00:27:59.168 "write_zeroes": true, 00:27:59.168 "zcopy": false, 00:27:59.168 "get_zone_info": false, 00:27:59.168 "zone_management": false, 00:27:59.168 "zone_append": false, 00:27:59.168 "compare": true, 00:27:59.168 "compare_and_write": true, 00:27:59.168 "abort": true, 00:27:59.168 "seek_hole": false, 00:27:59.168 "seek_data": false, 00:27:59.168 "copy": true, 00:27:59.168 "nvme_iov_md": false 00:27:59.168 }, 00:27:59.168 "memory_domains": [ 00:27:59.168 { 00:27:59.168 "dma_device_id": "system", 00:27:59.168 "dma_device_type": 1 00:27:59.168 } 00:27:59.168 ], 00:27:59.168 "driver_specific": { 00:27:59.168 "nvme": [ 00:27:59.168 { 00:27:59.168 "trid": { 00:27:59.168 "trtype": "TCP", 00:27:59.168 "adrfam": "IPv4", 00:27:59.168 "traddr": "10.0.0.2", 
00:27:59.168 "trsvcid": "4420", 00:27:59.168 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:27:59.168 }, 00:27:59.168 "ctrlr_data": { 00:27:59.168 "cntlid": 1, 00:27:59.168 "vendor_id": "0x8086", 00:27:59.168 "model_number": "SPDK bdev Controller", 00:27:59.168 "serial_number": "00000000000000000000", 00:27:59.168 "firmware_revision": "24.09", 00:27:59.168 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:59.168 "oacs": { 00:27:59.168 "security": 0, 00:27:59.168 "format": 0, 00:27:59.168 "firmware": 0, 00:27:59.168 "ns_manage": 0 00:27:59.168 }, 00:27:59.168 "multi_ctrlr": true, 00:27:59.168 "ana_reporting": false 00:27:59.168 }, 00:27:59.168 "vs": { 00:27:59.168 "nvme_version": "1.3" 00:27:59.168 }, 00:27:59.168 "ns_data": { 00:27:59.168 "id": 1, 00:27:59.168 "can_share": true 00:27:59.168 } 00:27:59.168 } 00:27:59.168 ], 00:27:59.168 "mp_policy": "active_passive" 00:27:59.168 } 00:27:59.168 } 00:27:59.168 ] 00:27:59.168 08:14:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.168 08:14:50 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:27:59.168 08:14:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.168 08:14:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:59.168 [2024-07-13 08:14:50.651007] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:59.168 [2024-07-13 08:14:50.651084] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa8ac40 (9): Bad file descriptor 00:27:59.168 [2024-07-13 08:14:50.824022] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:27:59.168 08:14:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.168 08:14:50 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:27:59.168 08:14:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.168 08:14:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:59.168 [ 00:27:59.168 { 00:27:59.168 "name": "nvme0n1", 00:27:59.168 "aliases": [ 00:27:59.168 "a195c4db-b83e-4dc1-befc-a7b3cb97b163" 00:27:59.168 ], 00:27:59.168 "product_name": "NVMe disk", 00:27:59.168 "block_size": 512, 00:27:59.168 "num_blocks": 2097152, 00:27:59.168 "uuid": "a195c4db-b83e-4dc1-befc-a7b3cb97b163", 00:27:59.168 "assigned_rate_limits": { 00:27:59.168 "rw_ios_per_sec": 0, 00:27:59.168 "rw_mbytes_per_sec": 0, 00:27:59.168 "r_mbytes_per_sec": 0, 00:27:59.168 "w_mbytes_per_sec": 0 00:27:59.168 }, 00:27:59.168 "claimed": false, 00:27:59.168 "zoned": false, 00:27:59.168 "supported_io_types": { 00:27:59.168 "read": true, 00:27:59.168 "write": true, 00:27:59.168 "unmap": false, 00:27:59.168 "flush": true, 00:27:59.168 "reset": true, 00:27:59.168 "nvme_admin": true, 00:27:59.168 "nvme_io": true, 00:27:59.168 "nvme_io_md": false, 00:27:59.168 "write_zeroes": true, 00:27:59.168 "zcopy": false, 00:27:59.168 "get_zone_info": false, 00:27:59.168 "zone_management": false, 00:27:59.168 "zone_append": false, 00:27:59.168 "compare": true, 00:27:59.168 "compare_and_write": true, 00:27:59.168 "abort": true, 00:27:59.168 "seek_hole": false, 00:27:59.168 "seek_data": false, 00:27:59.168 "copy": true, 00:27:59.168 "nvme_iov_md": false 00:27:59.168 }, 00:27:59.168 "memory_domains": [ 00:27:59.168 { 00:27:59.168 "dma_device_id": "system", 00:27:59.168 "dma_device_type": 1 
00:27:59.168 } 00:27:59.168 ], 00:27:59.168 "driver_specific": { 00:27:59.168 "nvme": [ 00:27:59.168 { 00:27:59.168 "trid": { 00:27:59.168 "trtype": "TCP", 00:27:59.168 "adrfam": "IPv4", 00:27:59.168 "traddr": "10.0.0.2", 00:27:59.168 "trsvcid": "4420", 00:27:59.168 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:27:59.168 }, 00:27:59.168 "ctrlr_data": { 00:27:59.168 "cntlid": 2, 00:27:59.168 "vendor_id": "0x8086", 00:27:59.168 "model_number": "SPDK bdev Controller", 00:27:59.168 "serial_number": "00000000000000000000", 00:27:59.168 "firmware_revision": "24.09", 00:27:59.168 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:59.168 "oacs": { 00:27:59.168 "security": 0, 00:27:59.168 "format": 0, 00:27:59.168 "firmware": 0, 00:27:59.168 "ns_manage": 0 00:27:59.168 }, 00:27:59.168 "multi_ctrlr": true, 00:27:59.168 "ana_reporting": false 00:27:59.168 }, 00:27:59.168 "vs": { 00:27:59.168 "nvme_version": "1.3" 00:27:59.168 }, 00:27:59.168 "ns_data": { 00:27:59.168 "id": 1, 00:27:59.168 "can_share": true 00:27:59.168 } 00:27:59.168 } 00:27:59.168 ], 00:27:59.168 "mp_policy": "active_passive" 00:27:59.168 } 00:27:59.168 } 00:27:59.168 ] 00:27:59.168 08:14:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.168 08:14:50 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:59.168 08:14:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.168 08:14:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:59.168 08:14:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.168 08:14:50 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:27:59.168 08:14:50 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.OSpdZREnpY 00:27:59.168 08:14:50 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:27:59.168 08:14:50 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.OSpdZREnpY 00:27:59.168 08:14:50 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:27:59.168 08:14:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.168 08:14:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:59.168 08:14:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.168 08:14:50 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:27:59.168 08:14:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.168 08:14:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:59.168 [2024-07-13 08:14:50.879826] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:27:59.168 [2024-07-13 08:14:50.879994] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:59.168 08:14:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.168 08:14:50 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.OSpdZREnpY 00:27:59.168 08:14:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 
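The tail of the test switches to TLS: a PSK in the NVMe/TCP interchange format (NVMeTLSkey-1:01:<base64>:) is written to a mode-0600 temp file, open access to cnode0 is disabled, and a second listener is opened on port 4421 with --secure-channel. Equivalent rpc.py calls for the rpc_cmd traces above (rpc_cmd is a thin wrapper around rpc.py against /var/tmp/spdk.sock):

  key_path=$(mktemp)
  echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > "$key_path"
  chmod 0600 "$key_path"    # keep the key private
  rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
      -t tcp -a 10.0.0.2 -s 4421 --secure-channel

The add_host and attach_controller steps that follow hand the same key file to both sides of the connection; the WARNING lines note that this path-based PSK interface is deprecated and scheduled for removal in v24.09.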
00:27:59.168 08:14:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:59.168 [2024-07-13 08:14:50.887844] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:27:59.168 08:14:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.169 08:14:50 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.OSpdZREnpY 00:27:59.169 08:14:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.169 08:14:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:59.169 [2024-07-13 08:14:50.895887] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:27:59.169 [2024-07-13 08:14:50.895964] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:27:59.428 nvme0n1 00:27:59.428 08:14:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.428 08:14:50 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:27:59.428 08:14:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.428 08:14:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:59.428 [ 00:27:59.428 { 00:27:59.428 "name": "nvme0n1", 00:27:59.428 "aliases": [ 00:27:59.428 "a195c4db-b83e-4dc1-befc-a7b3cb97b163" 00:27:59.428 ], 00:27:59.428 "product_name": "NVMe disk", 00:27:59.428 "block_size": 512, 00:27:59.428 "num_blocks": 2097152, 00:27:59.428 "uuid": "a195c4db-b83e-4dc1-befc-a7b3cb97b163", 00:27:59.428 "assigned_rate_limits": { 00:27:59.428 "rw_ios_per_sec": 0, 00:27:59.428 "rw_mbytes_per_sec": 0, 00:27:59.428 "r_mbytes_per_sec": 0, 00:27:59.428 "w_mbytes_per_sec": 0 00:27:59.428 }, 00:27:59.428 "claimed": false, 00:27:59.428 "zoned": false, 00:27:59.428 "supported_io_types": { 00:27:59.428 "read": true, 00:27:59.428 "write": true, 00:27:59.428 "unmap": false, 00:27:59.428 "flush": true, 00:27:59.428 "reset": true, 00:27:59.428 "nvme_admin": true, 00:27:59.428 "nvme_io": true, 00:27:59.428 "nvme_io_md": false, 00:27:59.428 "write_zeroes": true, 00:27:59.428 "zcopy": false, 00:27:59.428 "get_zone_info": false, 00:27:59.428 "zone_management": false, 00:27:59.428 "zone_append": false, 00:27:59.428 "compare": true, 00:27:59.428 "compare_and_write": true, 00:27:59.428 "abort": true, 00:27:59.428 "seek_hole": false, 00:27:59.428 "seek_data": false, 00:27:59.428 "copy": true, 00:27:59.428 "nvme_iov_md": false 00:27:59.428 }, 00:27:59.428 "memory_domains": [ 00:27:59.428 { 00:27:59.428 "dma_device_id": "system", 00:27:59.428 "dma_device_type": 1 00:27:59.428 } 00:27:59.428 ], 00:27:59.428 "driver_specific": { 00:27:59.428 "nvme": [ 00:27:59.428 { 00:27:59.428 "trid": { 00:27:59.428 "trtype": "TCP", 00:27:59.428 "adrfam": "IPv4", 00:27:59.428 "traddr": "10.0.0.2", 00:27:59.428 "trsvcid": "4421", 00:27:59.428 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:27:59.428 }, 00:27:59.428 "ctrlr_data": { 00:27:59.428 "cntlid": 3, 00:27:59.428 "vendor_id": "0x8086", 00:27:59.428 "model_number": "SPDK bdev Controller", 00:27:59.428 "serial_number": "00000000000000000000", 00:27:59.428 "firmware_revision": "24.09", 00:27:59.429 "subnqn": "nqn.2016-06.io.spdk:cnode0", 
00:27:59.429 "oacs": { 00:27:59.429 "security": 0, 00:27:59.429 "format": 0, 00:27:59.429 "firmware": 0, 00:27:59.429 "ns_manage": 0 00:27:59.429 }, 00:27:59.429 "multi_ctrlr": true, 00:27:59.429 "ana_reporting": false 00:27:59.429 }, 00:27:59.429 "vs": { 00:27:59.429 "nvme_version": "1.3" 00:27:59.429 }, 00:27:59.429 "ns_data": { 00:27:59.429 "id": 1, 00:27:59.429 "can_share": true 00:27:59.429 } 00:27:59.429 } 00:27:59.429 ], 00:27:59.429 "mp_policy": "active_passive" 00:27:59.429 } 00:27:59.429 } 00:27:59.429 ] 00:27:59.429 08:14:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.429 08:14:50 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:59.429 08:14:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:59.429 08:14:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:27:59.429 08:14:50 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:59.429 08:14:50 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.OSpdZREnpY 00:27:59.429 08:14:50 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:27:59.429 08:14:50 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:27:59.429 08:14:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:59.429 08:14:50 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:27:59.429 08:14:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:59.429 08:14:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:27:59.429 08:14:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:59.429 08:14:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:59.429 rmmod nvme_tcp 00:27:59.429 rmmod nvme_fabrics 00:27:59.429 rmmod nvme_keyring 00:27:59.429 08:14:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:59.429 08:14:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:27:59.429 08:14:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:27:59.429 08:14:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 2049337 ']' 00:27:59.429 08:14:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 2049337 00:27:59.429 08:14:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@948 -- # '[' -z 2049337 ']' 00:27:59.429 08:14:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # kill -0 2049337 00:27:59.429 08:14:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # uname 00:27:59.429 08:14:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:59.429 08:14:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2049337 00:27:59.429 08:14:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:59.429 08:14:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:59.429 08:14:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2049337' 00:27:59.429 killing process with pid 2049337 00:27:59.429 08:14:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@967 -- # kill 2049337 00:27:59.429 [2024-07-13 08:14:51.088905] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled 
for removal in v24.09 hit 1 times 00:27:59.429 [2024-07-13 08:14:51.088943] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:27:59.429 08:14:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@972 -- # wait 2049337 00:27:59.689 08:14:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:59.689 08:14:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:59.689 08:14:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:59.689 08:14:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:59.689 08:14:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:59.689 08:14:51 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:59.689 08:14:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:59.689 08:14:51 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:02.240 08:14:53 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:02.240 00:28:02.240 real 0m5.376s 00:28:02.240 user 0m1.996s 00:28:02.240 sys 0m1.760s 00:28:02.240 08:14:53 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:02.240 08:14:53 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:02.240 ************************************ 00:28:02.240 END TEST nvmf_async_init 00:28:02.240 ************************************ 00:28:02.240 08:14:53 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:28:02.240 08:14:53 nvmf_tcp -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:28:02.240 08:14:53 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:28:02.240 08:14:53 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:02.240 08:14:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:02.240 ************************************ 00:28:02.240 START TEST dma 00:28:02.240 ************************************ 00:28:02.240 08:14:53 nvmf_tcp.dma -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:28:02.240 * Looking for test storage... 
00:28:02.240 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:02.240 08:14:53 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:02.240 08:14:53 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:28:02.240 08:14:53 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:02.240 08:14:53 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:02.240 08:14:53 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:02.240 08:14:53 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:02.240 08:14:53 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:02.240 08:14:53 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:02.240 08:14:53 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:02.240 08:14:53 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:02.240 08:14:53 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:02.240 08:14:53 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:02.240 08:14:53 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:02.240 08:14:53 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:02.240 08:14:53 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:02.240 08:14:53 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:02.241 08:14:53 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:02.241 08:14:53 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:02.241 08:14:53 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:02.241 08:14:53 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:02.241 08:14:53 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:02.241 08:14:53 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:02.241 08:14:53 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:02.241 08:14:53 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:02.241 08:14:53 nvmf_tcp.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:02.241 08:14:53 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:28:02.241 08:14:53 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:02.241 08:14:53 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:28:02.241 08:14:53 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:02.241 08:14:53 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:02.241 08:14:53 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:02.241 08:14:53 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:02.241 08:14:53 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:02.241 08:14:53 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:02.241 08:14:53 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:02.241 08:14:53 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:02.241 08:14:53 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:28:02.241 08:14:53 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:28:02.241 00:28:02.241 real 0m0.066s 00:28:02.241 user 0m0.028s 00:28:02.241 sys 0m0.042s 00:28:02.241 08:14:53 nvmf_tcp.dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:02.241 08:14:53 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:28:02.241 ************************************ 00:28:02.241 END TEST dma 00:28:02.241 ************************************ 00:28:02.241 08:14:53 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:28:02.242 08:14:53 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:28:02.242 08:14:53 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:28:02.242 08:14:53 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:02.242 08:14:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:02.242 ************************************ 00:28:02.242 START TEST nvmf_identify 00:28:02.242 ************************************ 00:28:02.242 08:14:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:28:02.242 * Looking for test storage... 
00:28:02.242 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:02.242 08:14:53 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:02.242 08:14:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:28:02.242 08:14:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:02.242 08:14:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:02.242 08:14:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:02.242 08:14:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:02.242 08:14:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:02.242 08:14:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:02.242 08:14:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:02.242 08:14:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:02.242 08:14:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:02.242 08:14:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:02.242 08:14:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:02.242 08:14:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:02.242 08:14:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:02.242 08:14:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:02.242 08:14:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:02.242 08:14:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:02.242 08:14:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:02.242 08:14:53 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:02.242 08:14:53 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:02.242 08:14:53 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:02.242 08:14:53 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:02.243 08:14:53 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:02.243 08:14:53 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:02.243 08:14:53 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:28:02.243 08:14:53 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:02.243 08:14:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:28:02.243 08:14:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:02.243 08:14:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:02.243 08:14:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:02.243 08:14:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:02.243 08:14:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:02.243 08:14:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:02.243 08:14:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:02.243 08:14:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:02.243 08:14:53 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:02.243 08:14:53 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:02.243 08:14:53 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:28:02.243 08:14:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:02.243 08:14:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:02.243 08:14:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:02.243 08:14:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:02.243 08:14:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:02.243 08:14:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:02.243 08:14:53 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:02.243 08:14:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:02.243 08:14:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:02.243 08:14:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:02.243 08:14:53 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:28:02.243 08:14:53 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:04.156 08:14:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:04.156 08:14:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:28:04.156 08:14:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:04.156 08:14:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:04.156 08:14:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:04.156 08:14:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:04.156 08:14:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:04.156 08:14:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:28:04.156 08:14:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:04.156 08:14:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:28:04.156 08:14:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:28:04.156 08:14:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:28:04.156 08:14:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:28:04.156 08:14:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:28:04.156 08:14:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:28:04.156 08:14:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:04.156 08:14:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:04.156 08:14:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:04.156 08:14:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:04.156 08:14:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:04.156 08:14:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:04.156 08:14:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:04.156 08:14:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:04.156 08:14:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:04.156 08:14:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:04.156 08:14:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:04.156 08:14:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:04.156 08:14:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:04.156 08:14:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:04.156 08:14:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:04.156 08:14:55 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:04.156 08:14:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:04.156 08:14:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:04.156 08:14:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:04.156 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:04.156 08:14:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:04.156 08:14:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:04.156 08:14:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:04.156 08:14:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:04.156 08:14:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:04.156 08:14:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:04.156 08:14:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:04.156 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:04.156 08:14:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:04.156 08:14:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:04.156 08:14:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:04.156 08:14:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:04.156 08:14:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:04.156 08:14:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:04.156 08:14:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:04.156 08:14:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:04.156 08:14:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:04.156 08:14:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:04.156 08:14:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:04.156 08:14:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:04.156 08:14:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:04.156 08:14:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:04.156 08:14:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:04.156 08:14:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:04.156 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:04.156 08:14:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:04.156 08:14:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:04.156 08:14:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:04.156 08:14:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:04.156 08:14:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:04.156 08:14:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:04.156 08:14:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:04.156 08:14:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:04.156 08:14:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:04.156 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:04.156 08:14:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:04.156 08:14:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:04.156 08:14:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:28:04.157 08:14:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:04.157 08:14:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:04.157 08:14:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:04.157 08:14:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:04.157 08:14:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:04.157 08:14:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:04.157 08:14:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:04.157 08:14:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:04.157 08:14:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:04.157 08:14:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:04.157 08:14:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:04.157 08:14:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:04.157 08:14:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:04.157 08:14:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:04.157 08:14:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:04.157 08:14:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:04.157 08:14:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:04.157 08:14:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:04.157 08:14:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:04.157 08:14:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:04.157 08:14:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:04.157 08:14:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:04.157 08:14:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:04.157 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:04.157 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.112 ms 00:28:04.157 00:28:04.157 --- 10.0.0.2 ping statistics --- 00:28:04.157 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:04.157 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:28:04.157 08:14:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:04.157 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:04.157 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms 00:28:04.157 00:28:04.157 --- 10.0.0.1 ping statistics --- 00:28:04.157 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:04.157 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:28:04.157 08:14:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:04.157 08:14:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:28:04.157 08:14:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:04.157 08:14:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:04.157 08:14:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:04.157 08:14:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:04.157 08:14:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:04.157 08:14:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:04.157 08:14:55 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:04.157 08:14:55 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:28:04.157 08:14:55 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:04.157 08:14:55 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:04.157 08:14:55 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=2051412 00:28:04.157 08:14:55 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:04.157 08:14:55 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:04.157 08:14:55 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 2051412 00:28:04.157 08:14:55 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 2051412 ']' 00:28:04.157 08:14:55 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:04.157 08:14:55 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:04.157 08:14:55 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:04.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:04.157 08:14:55 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:04.157 08:14:55 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:04.157 [2024-07-13 08:14:55.587141] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:28:04.157 [2024-07-13 08:14:55.587240] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:04.157 EAL: No free 2048 kB hugepages reported on node 1 00:28:04.157 [2024-07-13 08:14:55.657532] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:04.157 [2024-07-13 08:14:55.749860] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:28:04.157 [2024-07-13 08:14:55.749933] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:04.157 [2024-07-13 08:14:55.749951] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:04.157 [2024-07-13 08:14:55.749964] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:04.157 [2024-07-13 08:14:55.749976] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:04.157 [2024-07-13 08:14:55.750044] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:04.157 [2024-07-13 08:14:55.750097] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:04.157 [2024-07-13 08:14:55.750161] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:04.157 [2024-07-13 08:14:55.750163] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:04.157 08:14:55 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:04.157 08:14:55 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:28:04.157 08:14:55 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:04.157 08:14:55 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.157 08:14:55 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:04.157 [2024-07-13 08:14:55.867450] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:04.157 08:14:55 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.157 08:14:55 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:28:04.157 08:14:55 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:04.157 08:14:55 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:04.419 08:14:55 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:04.419 08:14:55 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.419 08:14:55 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:04.419 Malloc0 00:28:04.419 08:14:55 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.419 08:14:55 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:04.419 08:14:55 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.419 08:14:55 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:04.419 08:14:55 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.419 08:14:55 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:28:04.419 08:14:55 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.419 08:14:55 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:04.419 08:14:55 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.419 08:14:55 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:04.419 08:14:55 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 
-- # xtrace_disable 00:28:04.419 08:14:55 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:04.419 [2024-07-13 08:14:55.938385] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:04.419 08:14:55 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.419 08:14:55 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:04.420 08:14:55 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.420 08:14:55 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:04.420 08:14:55 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.420 08:14:55 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:28:04.420 08:14:55 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:04.420 08:14:55 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:04.420 [ 00:28:04.420 { 00:28:04.420 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:04.420 "subtype": "Discovery", 00:28:04.420 "listen_addresses": [ 00:28:04.420 { 00:28:04.420 "trtype": "TCP", 00:28:04.420 "adrfam": "IPv4", 00:28:04.420 "traddr": "10.0.0.2", 00:28:04.420 "trsvcid": "4420" 00:28:04.420 } 00:28:04.420 ], 00:28:04.420 "allow_any_host": true, 00:28:04.420 "hosts": [] 00:28:04.420 }, 00:28:04.420 { 00:28:04.420 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:04.420 "subtype": "NVMe", 00:28:04.420 "listen_addresses": [ 00:28:04.420 { 00:28:04.420 "trtype": "TCP", 00:28:04.420 "adrfam": "IPv4", 00:28:04.420 "traddr": "10.0.0.2", 00:28:04.420 "trsvcid": "4420" 00:28:04.420 } 00:28:04.420 ], 00:28:04.420 "allow_any_host": true, 00:28:04.420 "hosts": [], 00:28:04.420 "serial_number": "SPDK00000000000001", 00:28:04.420 "model_number": "SPDK bdev Controller", 00:28:04.420 "max_namespaces": 32, 00:28:04.420 "min_cntlid": 1, 00:28:04.420 "max_cntlid": 65519, 00:28:04.420 "namespaces": [ 00:28:04.420 { 00:28:04.420 "nsid": 1, 00:28:04.420 "bdev_name": "Malloc0", 00:28:04.420 "name": "Malloc0", 00:28:04.420 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:28:04.420 "eui64": "ABCDEF0123456789", 00:28:04.420 "uuid": "bca4e04a-0257-4388-be37-ea86a0caedba" 00:28:04.420 } 00:28:04.420 ] 00:28:04.420 } 00:28:04.420 ] 00:28:04.420 08:14:55 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:04.420 08:14:55 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:28:04.420 [2024-07-13 08:14:55.975580] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:28:04.420 [2024-07-13 08:14:55.975618] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2051478 ] 00:28:04.420 EAL: No free 2048 kB hugepages reported on node 1 00:28:04.420 [2024-07-13 08:14:56.007092] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:28:04.420 [2024-07-13 08:14:56.007188] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:28:04.420 [2024-07-13 08:14:56.007198] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:28:04.420 [2024-07-13 08:14:56.007214] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:28:04.420 [2024-07-13 08:14:56.007224] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:28:04.420 [2024-07-13 08:14:56.010912] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:28:04.420 [2024-07-13 08:14:56.010985] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x18efae0 0 00:28:04.420 [2024-07-13 08:14:56.018881] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:28:04.420 [2024-07-13 08:14:56.018903] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:28:04.420 [2024-07-13 08:14:56.018912] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:28:04.420 [2024-07-13 08:14:56.018919] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:28:04.420 [2024-07-13 08:14:56.018975] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:04.420 [2024-07-13 08:14:56.018989] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:04.420 [2024-07-13 08:14:56.018997] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18efae0) 00:28:04.420 [2024-07-13 08:14:56.019015] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:28:04.420 [2024-07-13 08:14:56.019041] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1946240, cid 0, qid 0 00:28:04.420 [2024-07-13 08:14:56.026895] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:04.420 [2024-07-13 08:14:56.026913] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:04.420 [2024-07-13 08:14:56.026920] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:04.420 [2024-07-13 08:14:56.026928] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1946240) on tqpair=0x18efae0 00:28:04.420 [2024-07-13 08:14:56.026949] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:28:04.420 [2024-07-13 08:14:56.026961] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:28:04.420 [2024-07-13 08:14:56.026970] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:28:04.420 [2024-07-13 08:14:56.026993] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:04.420 [2024-07-13 08:14:56.027002] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:04.420 [2024-07-13 08:14:56.027008] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18efae0) 00:28:04.420 [2024-07-13 08:14:56.027019] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.420 [2024-07-13 08:14:56.027043] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1946240, cid 0, qid 0 00:28:04.420 [2024-07-13 08:14:56.027223] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:04.420 [2024-07-13 08:14:56.027235] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:04.420 [2024-07-13 08:14:56.027242] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:04.420 [2024-07-13 08:14:56.027249] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1946240) on tqpair=0x18efae0 00:28:04.420 [2024-07-13 08:14:56.027258] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:28:04.420 [2024-07-13 08:14:56.027271] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:28:04.420 [2024-07-13 08:14:56.027283] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:04.420 [2024-07-13 08:14:56.027291] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:04.420 [2024-07-13 08:14:56.027298] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18efae0) 00:28:04.420 [2024-07-13 08:14:56.027308] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.420 [2024-07-13 08:14:56.027330] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1946240, cid 0, qid 0 00:28:04.420 [2024-07-13 08:14:56.027492] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:04.420 [2024-07-13 08:14:56.027504] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:04.420 [2024-07-13 08:14:56.027510] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:04.420 [2024-07-13 08:14:56.027517] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1946240) on tqpair=0x18efae0 00:28:04.420 [2024-07-13 08:14:56.027526] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:28:04.420 [2024-07-13 08:14:56.027540] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:28:04.420 [2024-07-13 08:14:56.027552] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:04.420 [2024-07-13 08:14:56.027560] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:04.420 [2024-07-13 08:14:56.027566] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18efae0) 00:28:04.420 [2024-07-13 08:14:56.027577] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.420 [2024-07-13 08:14:56.027598] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1946240, cid 0, qid 0 00:28:04.420 [2024-07-13 08:14:56.027830] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:04.420 
[2024-07-13 08:14:56.027846] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:04.420 [2024-07-13 08:14:56.027853] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:04.420 [2024-07-13 08:14:56.027860] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1946240) on tqpair=0x18efae0 00:28:04.420 [2024-07-13 08:14:56.027876] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:28:04.420 [2024-07-13 08:14:56.027895] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:04.420 [2024-07-13 08:14:56.027904] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:04.420 [2024-07-13 08:14:56.027911] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18efae0) 00:28:04.420 [2024-07-13 08:14:56.027921] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.420 [2024-07-13 08:14:56.027943] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1946240, cid 0, qid 0 00:28:04.420 [2024-07-13 08:14:56.028060] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:04.420 [2024-07-13 08:14:56.028075] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:04.420 [2024-07-13 08:14:56.028082] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:04.420 [2024-07-13 08:14:56.028089] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1946240) on tqpair=0x18efae0 00:28:04.420 [2024-07-13 08:14:56.028098] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:28:04.420 [2024-07-13 08:14:56.028106] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:28:04.420 [2024-07-13 08:14:56.028119] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:28:04.420 [2024-07-13 08:14:56.028231] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:28:04.420 [2024-07-13 08:14:56.028239] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:28:04.420 [2024-07-13 08:14:56.028253] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:04.420 [2024-07-13 08:14:56.028260] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:04.420 [2024-07-13 08:14:56.028266] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18efae0) 00:28:04.420 [2024-07-13 08:14:56.028276] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.420 [2024-07-13 08:14:56.028296] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1946240, cid 0, qid 0 00:28:04.420 [2024-07-13 08:14:56.028474] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:04.420 [2024-07-13 08:14:56.028487] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:04.420 [2024-07-13 08:14:56.028494] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:28:04.420 [2024-07-13 08:14:56.028501] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1946240) on tqpair=0x18efae0 00:28:04.420 [2024-07-13 08:14:56.028509] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:28:04.421 [2024-07-13 08:14:56.028525] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:04.421 [2024-07-13 08:14:56.028534] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:04.421 [2024-07-13 08:14:56.028541] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18efae0) 00:28:04.421 [2024-07-13 08:14:56.028552] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.421 [2024-07-13 08:14:56.028577] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1946240, cid 0, qid 0 00:28:04.421 [2024-07-13 08:14:56.028746] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:04.421 [2024-07-13 08:14:56.028761] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:04.421 [2024-07-13 08:14:56.028768] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:04.421 [2024-07-13 08:14:56.028775] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1946240) on tqpair=0x18efae0 00:28:04.421 [2024-07-13 08:14:56.028782] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:28:04.421 [2024-07-13 08:14:56.028791] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:28:04.421 [2024-07-13 08:14:56.028805] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:28:04.421 [2024-07-13 08:14:56.028825] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:28:04.421 [2024-07-13 08:14:56.028841] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:04.421 [2024-07-13 08:14:56.028849] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18efae0) 00:28:04.421 [2024-07-13 08:14:56.028860] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.421 [2024-07-13 08:14:56.028917] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1946240, cid 0, qid 0 00:28:04.421 [2024-07-13 08:14:56.029110] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:04.421 [2024-07-13 08:14:56.029126] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:04.421 [2024-07-13 08:14:56.029133] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:04.421 [2024-07-13 08:14:56.029140] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x18efae0): datao=0, datal=4096, cccid=0 00:28:04.421 [2024-07-13 08:14:56.029148] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1946240) on tqpair(0x18efae0): expected_datao=0, payload_size=4096 00:28:04.421 [2024-07-13 08:14:56.029156] nvme_tcp.c: 790:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:28:04.421 [2024-07-13 08:14:56.029199] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:04.421 [2024-07-13 08:14:56.029209] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:04.421 [2024-07-13 08:14:56.029316] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:04.421 [2024-07-13 08:14:56.029327] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:04.421 [2024-07-13 08:14:56.029333] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:04.421 [2024-07-13 08:14:56.029340] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1946240) on tqpair=0x18efae0 00:28:04.421 [2024-07-13 08:14:56.029353] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:28:04.421 [2024-07-13 08:14:56.029366] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:28:04.421 [2024-07-13 08:14:56.029374] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:28:04.421 [2024-07-13 08:14:56.029383] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:28:04.421 [2024-07-13 08:14:56.029391] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:28:04.421 [2024-07-13 08:14:56.029399] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:28:04.421 [2024-07-13 08:14:56.029414] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:28:04.421 [2024-07-13 08:14:56.029430] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:04.421 [2024-07-13 08:14:56.029439] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:04.421 [2024-07-13 08:14:56.029445] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18efae0) 00:28:04.421 [2024-07-13 08:14:56.029456] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:04.421 [2024-07-13 08:14:56.029491] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1946240, cid 0, qid 0 00:28:04.421 [2024-07-13 08:14:56.029699] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:04.421 [2024-07-13 08:14:56.029715] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:04.421 [2024-07-13 08:14:56.029722] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:04.421 [2024-07-13 08:14:56.029729] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1946240) on tqpair=0x18efae0 00:28:04.421 [2024-07-13 08:14:56.029741] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:04.421 [2024-07-13 08:14:56.029749] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:04.421 [2024-07-13 08:14:56.029755] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x18efae0) 00:28:04.421 [2024-07-13 08:14:56.029765] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:04.421 [2024-07-13 08:14:56.029776] 
nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:04.421 [2024-07-13 08:14:56.029783] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:04.421 [2024-07-13 08:14:56.029789] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x18efae0) 00:28:04.421 [2024-07-13 08:14:56.029798] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:04.421 [2024-07-13 08:14:56.029808] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:04.421 [2024-07-13 08:14:56.029814] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:04.421 [2024-07-13 08:14:56.029821] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x18efae0) 00:28:04.421 [2024-07-13 08:14:56.029830] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:04.421 [2024-07-13 08:14:56.029839] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:04.421 [2024-07-13 08:14:56.029861] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:04.421 [2024-07-13 08:14:56.029874] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18efae0) 00:28:04.421 [2024-07-13 08:14:56.029884] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:04.421 [2024-07-13 08:14:56.029893] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:28:04.421 [2024-07-13 08:14:56.029913] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:28:04.421 [2024-07-13 08:14:56.029925] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:04.421 [2024-07-13 08:14:56.029932] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x18efae0) 00:28:04.421 [2024-07-13 08:14:56.029942] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.421 [2024-07-13 08:14:56.029964] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1946240, cid 0, qid 0 00:28:04.421 [2024-07-13 08:14:56.029991] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19463c0, cid 1, qid 0 00:28:04.421 [2024-07-13 08:14:56.029999] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1946540, cid 2, qid 0 00:28:04.421 [2024-07-13 08:14:56.030010] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19466c0, cid 3, qid 0 00:28:04.421 [2024-07-13 08:14:56.030019] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1946840, cid 4, qid 0 00:28:04.421 [2024-07-13 08:14:56.030184] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:04.421 [2024-07-13 08:14:56.030199] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:04.421 [2024-07-13 08:14:56.030206] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:04.421 [2024-07-13 08:14:56.030213] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1946840) on tqpair=0x18efae0 00:28:04.421 [2024-07-13 08:14:56.030223] 
nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:28:04.421 [2024-07-13 08:14:56.030232] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:28:04.421 [2024-07-13 08:14:56.030249] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:04.421 [2024-07-13 08:14:56.030258] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x18efae0) 00:28:04.421 [2024-07-13 08:14:56.030269] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.421 [2024-07-13 08:14:56.030306] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1946840, cid 4, qid 0 00:28:04.421 [2024-07-13 08:14:56.030497] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:04.421 [2024-07-13 08:14:56.030510] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:04.421 [2024-07-13 08:14:56.030517] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:04.421 [2024-07-13 08:14:56.030524] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x18efae0): datao=0, datal=4096, cccid=4 00:28:04.421 [2024-07-13 08:14:56.030531] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1946840) on tqpair(0x18efae0): expected_datao=0, payload_size=4096 00:28:04.421 [2024-07-13 08:14:56.030539] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:04.421 [2024-07-13 08:14:56.030549] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:04.421 [2024-07-13 08:14:56.030556] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:04.421 [2024-07-13 08:14:56.030575] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:04.421 [2024-07-13 08:14:56.030585] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:04.421 [2024-07-13 08:14:56.030607] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:04.421 [2024-07-13 08:14:56.030614] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1946840) on tqpair=0x18efae0 00:28:04.421 [2024-07-13 08:14:56.030632] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:28:04.421 [2024-07-13 08:14:56.030669] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:04.421 [2024-07-13 08:14:56.030679] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x18efae0) 00:28:04.421 [2024-07-13 08:14:56.030689] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.421 [2024-07-13 08:14:56.030701] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:04.421 [2024-07-13 08:14:56.030708] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:04.421 [2024-07-13 08:14:56.030714] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x18efae0) 00:28:04.421 [2024-07-13 08:14:56.030723] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:28:04.421 [2024-07-13 08:14:56.030749] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0x1946840, cid 4, qid 0 00:28:04.421 [2024-07-13 08:14:56.030776] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19469c0, cid 5, qid 0 00:28:04.421 [2024-07-13 08:14:56.034883] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:04.421 [2024-07-13 08:14:56.034899] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:04.421 [2024-07-13 08:14:56.034906] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:04.422 [2024-07-13 08:14:56.034913] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x18efae0): datao=0, datal=1024, cccid=4 00:28:04.422 [2024-07-13 08:14:56.034921] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1946840) on tqpair(0x18efae0): expected_datao=0, payload_size=1024 00:28:04.422 [2024-07-13 08:14:56.034928] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:04.422 [2024-07-13 08:14:56.034937] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:04.422 [2024-07-13 08:14:56.034944] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:04.422 [2024-07-13 08:14:56.034952] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:04.422 [2024-07-13 08:14:56.034961] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:04.422 [2024-07-13 08:14:56.034967] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:04.422 [2024-07-13 08:14:56.034974] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19469c0) on tqpair=0x18efae0 00:28:04.422 [2024-07-13 08:14:56.074875] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:04.422 [2024-07-13 08:14:56.074894] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:04.422 [2024-07-13 08:14:56.074901] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:04.422 [2024-07-13 08:14:56.074908] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1946840) on tqpair=0x18efae0 00:28:04.422 [2024-07-13 08:14:56.074926] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:04.422 [2024-07-13 08:14:56.074934] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x18efae0) 00:28:04.422 [2024-07-13 08:14:56.074945] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.422 [2024-07-13 08:14:56.074975] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1946840, cid 4, qid 0 00:28:04.422 [2024-07-13 08:14:56.075155] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:04.422 [2024-07-13 08:14:56.075168] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:04.422 [2024-07-13 08:14:56.075175] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:04.422 [2024-07-13 08:14:56.075181] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x18efae0): datao=0, datal=3072, cccid=4 00:28:04.422 [2024-07-13 08:14:56.075189] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1946840) on tqpair(0x18efae0): expected_datao=0, payload_size=3072 00:28:04.422 [2024-07-13 08:14:56.075197] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:04.422 [2024-07-13 08:14:56.075207] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:04.422 [2024-07-13 08:14:56.075214] 
nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:04.422 [2024-07-13 08:14:56.075233] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:04.422 [2024-07-13 08:14:56.075244] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:04.422 [2024-07-13 08:14:56.075250] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:04.422 [2024-07-13 08:14:56.075257] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1946840) on tqpair=0x18efae0 00:28:04.422 [2024-07-13 08:14:56.075271] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:04.422 [2024-07-13 08:14:56.075294] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x18efae0) 00:28:04.422 [2024-07-13 08:14:56.075305] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.422 [2024-07-13 08:14:56.075332] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1946840, cid 4, qid 0 00:28:04.422 [2024-07-13 08:14:56.075484] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:04.422 [2024-07-13 08:14:56.075504] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:04.422 [2024-07-13 08:14:56.075512] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:04.422 [2024-07-13 08:14:56.075519] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x18efae0): datao=0, datal=8, cccid=4 00:28:04.422 [2024-07-13 08:14:56.075526] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1946840) on tqpair(0x18efae0): expected_datao=0, payload_size=8 00:28:04.422 [2024-07-13 08:14:56.075534] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:04.422 [2024-07-13 08:14:56.075544] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:04.422 [2024-07-13 08:14:56.075550] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:04.422 [2024-07-13 08:14:56.116031] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:04.422 [2024-07-13 08:14:56.116050] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:04.422 [2024-07-13 08:14:56.116057] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:04.422 [2024-07-13 08:14:56.116064] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1946840) on tqpair=0x18efae0 00:28:04.422 ===================================================== 00:28:04.422 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:28:04.422 ===================================================== 00:28:04.422 Controller Capabilities/Features 00:28:04.422 ================================ 00:28:04.422 Vendor ID: 0000 00:28:04.422 Subsystem Vendor ID: 0000 00:28:04.422 Serial Number: .................... 00:28:04.422 Model Number: ........................................ 
00:28:04.422 Firmware Version: 24.09 00:28:04.422 Recommended Arb Burst: 0 00:28:04.422 IEEE OUI Identifier: 00 00 00 00:28:04.422 Multi-path I/O 00:28:04.422 May have multiple subsystem ports: No 00:28:04.422 May have multiple controllers: No 00:28:04.422 Associated with SR-IOV VF: No 00:28:04.422 Max Data Transfer Size: 131072 00:28:04.422 Max Number of Namespaces: 0 00:28:04.422 Max Number of I/O Queues: 1024 00:28:04.422 NVMe Specification Version (VS): 1.3 00:28:04.422 NVMe Specification Version (Identify): 1.3 00:28:04.422 Maximum Queue Entries: 128 00:28:04.422 Contiguous Queues Required: Yes 00:28:04.422 Arbitration Mechanisms Supported 00:28:04.422 Weighted Round Robin: Not Supported 00:28:04.422 Vendor Specific: Not Supported 00:28:04.422 Reset Timeout: 15000 ms 00:28:04.422 Doorbell Stride: 4 bytes 00:28:04.422 NVM Subsystem Reset: Not Supported 00:28:04.422 Command Sets Supported 00:28:04.422 NVM Command Set: Supported 00:28:04.422 Boot Partition: Not Supported 00:28:04.422 Memory Page Size Minimum: 4096 bytes 00:28:04.422 Memory Page Size Maximum: 4096 bytes 00:28:04.422 Persistent Memory Region: Not Supported 00:28:04.422 Optional Asynchronous Events Supported 00:28:04.422 Namespace Attribute Notices: Not Supported 00:28:04.422 Firmware Activation Notices: Not Supported 00:28:04.422 ANA Change Notices: Not Supported 00:28:04.422 PLE Aggregate Log Change Notices: Not Supported 00:28:04.422 LBA Status Info Alert Notices: Not Supported 00:28:04.422 EGE Aggregate Log Change Notices: Not Supported 00:28:04.422 Normal NVM Subsystem Shutdown event: Not Supported 00:28:04.422 Zone Descriptor Change Notices: Not Supported 00:28:04.422 Discovery Log Change Notices: Supported 00:28:04.422 Controller Attributes 00:28:04.422 128-bit Host Identifier: Not Supported 00:28:04.422 Non-Operational Permissive Mode: Not Supported 00:28:04.422 NVM Sets: Not Supported 00:28:04.422 Read Recovery Levels: Not Supported 00:28:04.422 Endurance Groups: Not Supported 00:28:04.422 Predictable Latency Mode: Not Supported 00:28:04.422 Traffic Based Keep Alive: Not Supported 00:28:04.422 Namespace Granularity: Not Supported 00:28:04.422 SQ Associations: Not Supported 00:28:04.422 UUID List: Not Supported 00:28:04.422 Multi-Domain Subsystem: Not Supported 00:28:04.422 Fixed Capacity Management: Not Supported 00:28:04.422 Variable Capacity Management: Not Supported 00:28:04.422 Delete Endurance Group: Not Supported 00:28:04.422 Delete NVM Set: Not Supported 00:28:04.422 Extended LBA Formats Supported: Not Supported 00:28:04.422 Flexible Data Placement Supported: Not Supported 00:28:04.422 00:28:04.422 Controller Memory Buffer Support 00:28:04.422 ================================ 00:28:04.422 Supported: No 00:28:04.422 00:28:04.422 Persistent Memory Region Support 00:28:04.422 ================================ 00:28:04.422 Supported: No 00:28:04.422 00:28:04.422 Admin Command Set Attributes 00:28:04.422 ============================ 00:28:04.422 Security Send/Receive: Not Supported 00:28:04.422 Format NVM: Not Supported 00:28:04.422 Firmware Activate/Download: Not Supported 00:28:04.422 Namespace Management: Not Supported 00:28:04.422 Device Self-Test: Not Supported 00:28:04.422 Directives: Not Supported 00:28:04.422 NVMe-MI: Not Supported 00:28:04.422 Virtualization Management: Not Supported 00:28:04.422 Doorbell Buffer Config: Not Supported 00:28:04.422 Get LBA Status Capability: Not Supported 00:28:04.422 Command & Feature Lockdown Capability: Not Supported 00:28:04.422 Abort Command Limit: 1 00:28:04.422 Async
Event Request Limit: 4 00:28:04.422 Number of Firmware Slots: N/A 00:28:04.422 Firmware Slot 1 Read-Only: N/A 00:28:04.422 Firmware Activation Without Reset: N/A 00:28:04.422 Multiple Update Detection Support: N/A 00:28:04.422 Firmware Update Granularity: No Information Provided 00:28:04.422 Per-Namespace SMART Log: No 00:28:04.422 Asymmetric Namespace Access Log Page: Not Supported 00:28:04.422 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:28:04.422 Command Effects Log Page: Not Supported 00:28:04.422 Get Log Page Extended Data: Supported 00:28:04.422 Telemetry Log Pages: Not Supported 00:28:04.422 Persistent Event Log Pages: Not Supported 00:28:04.422 Supported Log Pages Log Page: May Support 00:28:04.422 Commands Supported & Effects Log Page: Not Supported 00:28:04.422 Feature Identifiers & Effects Log Page: May Support 00:28:04.422 NVMe-MI Commands & Effects Log Page: May Support 00:28:04.422 Data Area 4 for Telemetry Log: Not Supported 00:28:04.422 Error Log Page Entries Supported: 128 00:28:04.422 Keep Alive: Not Supported 00:28:04.422 00:28:04.422 NVM Command Set Attributes 00:28:04.422 ========================== 00:28:04.422 Submission Queue Entry Size 00:28:04.422 Max: 1 00:28:04.422 Min: 1 00:28:04.422 Completion Queue Entry Size 00:28:04.422 Max: 1 00:28:04.422 Min: 1 00:28:04.422 Number of Namespaces: 0 00:28:04.422 Compare Command: Not Supported 00:28:04.422 Write Uncorrectable Command: Not Supported 00:28:04.422 Dataset Management Command: Not Supported 00:28:04.422 Write Zeroes Command: Not Supported 00:28:04.422 Set Features Save Field: Not Supported 00:28:04.422 Reservations: Not Supported 00:28:04.423 Timestamp: Not Supported 00:28:04.423 Copy: Not Supported 00:28:04.423 Volatile Write Cache: Not Present 00:28:04.423 Atomic Write Unit (Normal): 1 00:28:04.423 Atomic Write Unit (PFail): 1 00:28:04.423 Atomic Compare & Write Unit: 1 00:28:04.423 Fused Compare & Write: Supported 00:28:04.423 Scatter-Gather List 00:28:04.423 SGL Command Set: Supported 00:28:04.423 SGL Keyed: Supported 00:28:04.423 SGL Bit Bucket Descriptor: Not Supported 00:28:04.423 SGL Metadata Pointer: Not Supported 00:28:04.423 Oversized SGL: Not Supported 00:28:04.423 SGL Metadata Address: Not Supported 00:28:04.423 SGL Offset: Supported 00:28:04.423 Transport SGL Data Block: Not Supported 00:28:04.423 Replay Protected Memory Block: Not Supported 00:28:04.423 00:28:04.423 Firmware Slot Information 00:28:04.423 ========================= 00:28:04.423 Active slot: 0 00:28:04.423 00:28:04.423 00:28:04.423 Error Log 00:28:04.423 ========= 00:28:04.423 00:28:04.423 Active Namespaces 00:28:04.423 ================= 00:28:04.423 Discovery Log Page 00:28:04.423 ================== 00:28:04.423 Generation Counter: 2 00:28:04.423 Number of Records: 2 00:28:04.423 Record Format: 0 00:28:04.423 00:28:04.423 Discovery Log Entry 0 00:28:04.423 ---------------------- 00:28:04.423 Transport Type: 3 (TCP) 00:28:04.423 Address Family: 1 (IPv4) 00:28:04.423 Subsystem Type: 3 (Current Discovery Subsystem) 00:28:04.423 Entry Flags: 00:28:04.423 Duplicate Returned Information: 1 00:28:04.423 Explicit Persistent Connection Support for Discovery: 1 00:28:04.423 Transport Requirements: 00:28:04.423 Secure Channel: Not Required 00:28:04.423 Port ID: 0 (0x0000) 00:28:04.423 Controller ID: 65535 (0xffff) 00:28:04.423 Admin Max SQ Size: 128 00:28:04.423 Transport Service Identifier: 4420 00:28:04.423 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:28:04.423 Transport Address: 10.0.0.2 00:28:04.423 
Discovery Log Entry 1 00:28:04.423 ---------------------- 00:28:04.423 Transport Type: 3 (TCP) 00:28:04.423 Address Family: 1 (IPv4) 00:28:04.423 Subsystem Type: 2 (NVM Subsystem) 00:28:04.423 Entry Flags: 00:28:04.423 Duplicate Returned Information: 0 00:28:04.423 Explicit Persistent Connection Support for Discovery: 0 00:28:04.423 Transport Requirements: 00:28:04.423 Secure Channel: Not Required 00:28:04.423 Port ID: 0 (0x0000) 00:28:04.423 Controller ID: 65535 (0xffff) 00:28:04.423 Admin Max SQ Size: 128 00:28:04.423 Transport Service Identifier: 4420 00:28:04.423 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:28:04.423 Transport Address: 10.0.0.2 [2024-07-13 08:14:56.116184] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:28:04.423 [2024-07-13 08:14:56.116211] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1946240) on tqpair=0x18efae0 00:28:04.423 [2024-07-13 08:14:56.116224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.423 [2024-07-13 08:14:56.116233] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19463c0) on tqpair=0x18efae0 00:28:04.423 [2024-07-13 08:14:56.116241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.423 [2024-07-13 08:14:56.116249] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1946540) on tqpair=0x18efae0 00:28:04.423 [2024-07-13 08:14:56.116257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.423 [2024-07-13 08:14:56.116265] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19466c0) on tqpair=0x18efae0 00:28:04.423 [2024-07-13 08:14:56.116273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:04.423 [2024-07-13 08:14:56.116291] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:04.423 [2024-07-13 08:14:56.116300] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:04.423 [2024-07-13 08:14:56.116323] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18efae0) 00:28:04.423 [2024-07-13 08:14:56.116334] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.423 [2024-07-13 08:14:56.116358] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19466c0, cid 3, qid 0 00:28:04.423 [2024-07-13 08:14:56.116516] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:04.423 [2024-07-13 08:14:56.116533] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:04.423 [2024-07-13 08:14:56.116539] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:04.423 [2024-07-13 08:14:56.116547] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19466c0) on tqpair=0x18efae0 00:28:04.423 [2024-07-13 08:14:56.116559] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:04.423 [2024-07-13 08:14:56.116567] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:04.423 [2024-07-13 08:14:56.116573] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18efae0) 00:28:04.423 [2024-07-13 
08:14:56.116584] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.423 [2024-07-13 08:14:56.116610] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19466c0, cid 3, qid 0 00:28:04.423 [2024-07-13 08:14:56.116741] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:04.423 [2024-07-13 08:14:56.116757] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:04.423 [2024-07-13 08:14:56.116765] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:04.423 [2024-07-13 08:14:56.116772] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19466c0) on tqpair=0x18efae0 00:28:04.423 [2024-07-13 08:14:56.116781] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:28:04.423 [2024-07-13 08:14:56.116791] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:28:04.423 [2024-07-13 08:14:56.116806] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:04.423 [2024-07-13 08:14:56.116815] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:04.423 [2024-07-13 08:14:56.116822] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18efae0) 00:28:04.423 [2024-07-13 08:14:56.116832] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.423 [2024-07-13 08:14:56.116853] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19466c0, cid 3, qid 0 00:28:04.423 [2024-07-13 08:14:56.116985] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:04.423 [2024-07-13 08:14:56.117000] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:04.423 [2024-07-13 08:14:56.117007] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:04.423 [2024-07-13 08:14:56.117014] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19466c0) on tqpair=0x18efae0 00:28:04.423 [2024-07-13 08:14:56.117031] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:04.423 [2024-07-13 08:14:56.117040] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:04.423 [2024-07-13 08:14:56.117047] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18efae0) 00:28:04.423 [2024-07-13 08:14:56.117057] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.423 [2024-07-13 08:14:56.117078] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19466c0, cid 3, qid 0 00:28:04.423 [2024-07-13 08:14:56.117185] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:04.423 [2024-07-13 08:14:56.117197] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:04.423 [2024-07-13 08:14:56.117204] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:04.423 [2024-07-13 08:14:56.117211] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19466c0) on tqpair=0x18efae0 00:28:04.423 [2024-07-13 08:14:56.117226] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:04.423 [2024-07-13 08:14:56.117235] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:04.423 [2024-07-13 08:14:56.117242] 
nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18efae0) 00:28:04.423 [2024-07-13 08:14:56.117252] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.423 [2024-07-13 08:14:56.117273] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19466c0, cid 3, qid 0 00:28:04.423 [2024-07-13 08:14:56.117386] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:04.423 [2024-07-13 08:14:56.117401] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:04.423 [2024-07-13 08:14:56.117408] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:04.423 [2024-07-13 08:14:56.117415] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19466c0) on tqpair=0x18efae0 00:28:04.423 [2024-07-13 08:14:56.117431] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:04.423 [2024-07-13 08:14:56.117440] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:04.423 [2024-07-13 08:14:56.117447] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18efae0) 00:28:04.423 [2024-07-13 08:14:56.117457] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.423 [2024-07-13 08:14:56.117482] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19466c0, cid 3, qid 0 00:28:04.423 [2024-07-13 08:14:56.117595] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:04.423 [2024-07-13 08:14:56.117607] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:04.423 [2024-07-13 08:14:56.117614] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:04.423 [2024-07-13 08:14:56.117621] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19466c0) on tqpair=0x18efae0 00:28:04.423 [2024-07-13 08:14:56.117636] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:04.423 [2024-07-13 08:14:56.117645] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:04.423 [2024-07-13 08:14:56.117652] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18efae0) 00:28:04.423 [2024-07-13 08:14:56.117662] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.423 [2024-07-13 08:14:56.117683] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19466c0, cid 3, qid 0 00:28:04.423 [2024-07-13 08:14:56.117793] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:04.423 [2024-07-13 08:14:56.117808] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:04.423 [2024-07-13 08:14:56.117815] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:04.423 [2024-07-13 08:14:56.117822] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19466c0) on tqpair=0x18efae0 00:28:04.423 [2024-07-13 08:14:56.117838] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:04.423 [2024-07-13 08:14:56.117847] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:04.423 [2024-07-13 08:14:56.117854] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18efae0) 00:28:04.424 [2024-07-13 08:14:56.117870] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.424 [2024-07-13 08:14:56.117893] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19466c0, cid 3, qid 0 00:28:04.424 [2024-07-13 08:14:56.118022] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:04.424 [2024-07-13 08:14:56.118035] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:04.424 [2024-07-13 08:14:56.118041] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:04.424 [2024-07-13 08:14:56.118048] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19466c0) on tqpair=0x18efae0 00:28:04.424 [2024-07-13 08:14:56.118064] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:04.424 [2024-07-13 08:14:56.118073] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:04.424 [2024-07-13 08:14:56.118080] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18efae0) 00:28:04.424 [2024-07-13 08:14:56.118090] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.424 [2024-07-13 08:14:56.118111] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19466c0, cid 3, qid 0 00:28:04.424 [2024-07-13 08:14:56.118243] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:04.424 [2024-07-13 08:14:56.118258] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:04.424 [2024-07-13 08:14:56.118265] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:04.424 [2024-07-13 08:14:56.118272] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19466c0) on tqpair=0x18efae0 00:28:04.424 [2024-07-13 08:14:56.118288] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:04.424 [2024-07-13 08:14:56.118297] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:04.424 [2024-07-13 08:14:56.118304] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18efae0) 00:28:04.424 [2024-07-13 08:14:56.118314] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.424 [2024-07-13 08:14:56.118335] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19466c0, cid 3, qid 0 00:28:04.424 [2024-07-13 08:14:56.118445] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:04.424 [2024-07-13 08:14:56.118458] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:04.424 [2024-07-13 08:14:56.118465] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:04.424 [2024-07-13 08:14:56.118472] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19466c0) on tqpair=0x18efae0 00:28:04.424 [2024-07-13 08:14:56.118487] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:04.424 [2024-07-13 08:14:56.118497] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:04.424 [2024-07-13 08:14:56.118503] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18efae0) 00:28:04.424 [2024-07-13 08:14:56.118514] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.424 [2024-07-13 08:14:56.118534] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19466c0, cid 3, qid 0 00:28:04.424 
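The repeated FABRIC PROPERTY GET records above are the host polling CSTS while the discovery controller shuts down; the shutdown was initiated earlier in this trace (RTD3E = 0 us, shutdown timeout = 10000 ms) and completes just below ("shutdown complete in 6 milliseconds"). A minimal sketch of that handshake per the NVMe spec, using hypothetical read_reg32()/write_reg32() property accessors rather than any SPDK API:

#include <stdint.h>
#include <stdbool.h>

#define NVME_CC_SHN_NORMAL  (1u << 14)   /* CC.SHN = 01b: normal shutdown  */
#define NVME_CSTS_SHST_MASK (3u << 2)    /* CSTS.SHST field                */
#define NVME_CSTS_SHST_DONE (2u << 2)    /* 10b = shutdown complete        */

extern uint32_t read_reg32(uint32_t off);              /* hypothetical property-get  */
extern void     write_reg32(uint32_t off, uint32_t v); /* hypothetical property-set  */
extern bool     timeout_expired(uint64_t deadline);    /* hypothetical timer check   */

static int shutdown_ctrlr(uint64_t deadline_ticks)
{
    uint32_t cc = read_reg32(0x14);                    /* CC register (offset 14h)   */
    write_reg32(0x14, cc | NVME_CC_SHN_NORMAL);        /* request a normal shutdown  */

    /* Each iteration corresponds to one FABRIC PROPERTY GET of CSTS above. */
    while ((read_reg32(0x1c) & NVME_CSTS_SHST_MASK) != NVME_CSTS_SHST_DONE) {
        if (timeout_expired(deadline_ticks))
            return -1;                                 /* would log a shutdown timeout */
    }
    return 0;       /* "shutdown complete in N milliseconds" */
}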
[2024-07-13 08:14:56.118642] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:04.424 [2024-07-13 08:14:56.118654] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:04.424 [2024-07-13 08:14:56.118661] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:04.424 [2024-07-13 08:14:56.118668] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19466c0) on tqpair=0x18efae0 00:28:04.424 [2024-07-13 08:14:56.118683] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:04.424 [2024-07-13 08:14:56.118692] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:04.424 [2024-07-13 08:14:56.118699] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18efae0) 00:28:04.424 [2024-07-13 08:14:56.118709] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.424 [2024-07-13 08:14:56.118729] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19466c0, cid 3, qid 0 00:28:04.424 [2024-07-13 08:14:56.118837] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:04.424 [2024-07-13 08:14:56.118849] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:04.424 [2024-07-13 08:14:56.118856] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:04.424 [2024-07-13 08:14:56.118863] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19466c0) on tqpair=0x18efae0 00:28:04.424 [2024-07-13 08:14:56.122893] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:04.424 [2024-07-13 08:14:56.122904] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:04.424 [2024-07-13 08:14:56.122926] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x18efae0) 00:28:04.424 [2024-07-13 08:14:56.122937] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.424 [2024-07-13 08:14:56.122960] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19466c0, cid 3, qid 0 00:28:04.424 [2024-07-13 08:14:56.123117] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:04.424 [2024-07-13 08:14:56.123129] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:04.424 [2024-07-13 08:14:56.123136] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:04.424 [2024-07-13 08:14:56.123143] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19466c0) on tqpair=0x18efae0 00:28:04.424 [2024-07-13 08:14:56.123156] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 6 milliseconds 00:28:04.424 00:28:04.424 08:14:56 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:28:04.736 [2024-07-13 08:14:56.159033] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
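The identify run against the discovery subsystem ends above, and host/identify.sh immediately re-runs spdk_nvme_identify against nqn.2016-06.io.spdk:cnode1. As a rough sketch of how such an '-r' transport string maps onto SPDK's public API (this is not the identify tool's actual source, and option handling may differ across SPDK versions):

#include <string.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
    struct spdk_env_opts env_opts;
    struct spdk_nvme_transport_id trid;
    struct spdk_nvme_ctrlr *ctrlr;

    spdk_env_opts_init(&env_opts);
    if (spdk_env_init(&env_opts) != 0)      /* the DPDK EAL init logged above */
        return 1;

    /* Same key:value grammar as the -r argument in the log line above. */
    memset(&trid, 0, sizeof(trid));
    if (spdk_nvme_transport_id_parse(&trid,
            "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 "
            "trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1") != 0)
        return 1;

    /* Drives the connect/init state machine traced below. */
    ctrlr = spdk_nvme_connect(&trid, NULL, 0);
    if (ctrlr == NULL)
        return 1;

    spdk_nvme_detach(ctrlr);                /* triggers the shutdown handshake */
    return 0;
}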
00:28:04.736 [2024-07-13 08:14:56.159079] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2051488 ] 00:28:04.736 EAL: No free 2048 kB hugepages reported on node 1 00:28:04.736 [2024-07-13 08:14:56.193636] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:28:04.736 [2024-07-13 08:14:56.193691] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:28:04.736 [2024-07-13 08:14:56.193701] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:28:04.736 [2024-07-13 08:14:56.193718] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:28:04.736 [2024-07-13 08:14:56.193727] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:28:04.736 [2024-07-13 08:14:56.193951] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:28:04.736 [2024-07-13 08:14:56.193991] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xd25ae0 0 00:28:04.736 [2024-07-13 08:14:56.200881] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:28:04.736 [2024-07-13 08:14:56.200900] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:28:04.736 [2024-07-13 08:14:56.200908] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:28:04.736 [2024-07-13 08:14:56.200914] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:28:04.736 [2024-07-13 08:14:56.200952] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:04.736 [2024-07-13 08:14:56.200964] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:04.736 [2024-07-13 08:14:56.200971] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd25ae0) 00:28:04.736 [2024-07-13 08:14:56.200985] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:28:04.736 [2024-07-13 08:14:56.201012] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd7c240, cid 0, qid 0 00:28:04.736 [2024-07-13 08:14:56.208879] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:04.736 [2024-07-13 08:14:56.208897] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:04.736 [2024-07-13 08:14:56.208912] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:04.736 [2024-07-13 08:14:56.208920] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd7c240) on tqpair=0xd25ae0 00:28:04.736 [2024-07-13 08:14:56.208934] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:28:04.736 [2024-07-13 08:14:56.208944] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:28:04.736 [2024-07-13 08:14:56.208953] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:28:04.736 [2024-07-13 08:14:56.208972] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:04.736 [2024-07-13 08:14:56.208981] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:04.736 
[2024-07-13 08:14:56.208987] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd25ae0) 00:28:04.736 [2024-07-13 08:14:56.208998] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.736 [2024-07-13 08:14:56.209022] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd7c240, cid 0, qid 0 00:28:04.736 [2024-07-13 08:14:56.209171] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:04.736 [2024-07-13 08:14:56.209187] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:04.736 [2024-07-13 08:14:56.209193] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:04.736 [2024-07-13 08:14:56.209200] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd7c240) on tqpair=0xd25ae0 00:28:04.736 [2024-07-13 08:14:56.209212] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:28:04.736 [2024-07-13 08:14:56.209227] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:28:04.736 [2024-07-13 08:14:56.209239] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:04.736 [2024-07-13 08:14:56.209247] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:04.736 [2024-07-13 08:14:56.209254] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd25ae0) 00:28:04.736 [2024-07-13 08:14:56.209264] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.736 [2024-07-13 08:14:56.209286] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd7c240, cid 0, qid 0 00:28:04.736 [2024-07-13 08:14:56.209413] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:04.736 [2024-07-13 08:14:56.209426] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:04.736 [2024-07-13 08:14:56.209432] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:04.736 [2024-07-13 08:14:56.209439] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd7c240) on tqpair=0xd25ae0 00:28:04.736 [2024-07-13 08:14:56.209447] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:28:04.736 [2024-07-13 08:14:56.209460] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:28:04.736 [2024-07-13 08:14:56.209472] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:04.736 [2024-07-13 08:14:56.209480] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:04.736 [2024-07-13 08:14:56.209486] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd25ae0) 00:28:04.736 [2024-07-13 08:14:56.209497] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.736 [2024-07-13 08:14:56.209518] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd7c240, cid 0, qid 0 00:28:04.736 [2024-07-13 08:14:56.209628] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:04.736 [2024-07-13 08:14:56.209643] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:04.736 
[2024-07-13 08:14:56.209650] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:04.736 [2024-07-13 08:14:56.209657] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd7c240) on tqpair=0xd25ae0 00:28:04.736 [2024-07-13 08:14:56.209665] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:28:04.736 [2024-07-13 08:14:56.209682] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:04.736 [2024-07-13 08:14:56.209691] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:04.736 [2024-07-13 08:14:56.209698] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd25ae0) 00:28:04.736 [2024-07-13 08:14:56.209708] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.736 [2024-07-13 08:14:56.209729] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd7c240, cid 0, qid 0 00:28:04.736 [2024-07-13 08:14:56.209842] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:04.736 [2024-07-13 08:14:56.209857] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:04.736 [2024-07-13 08:14:56.209864] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:04.736 [2024-07-13 08:14:56.209881] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd7c240) on tqpair=0xd25ae0 00:28:04.736 [2024-07-13 08:14:56.209889] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:28:04.736 [2024-07-13 08:14:56.209897] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:28:04.736 [2024-07-13 08:14:56.209915] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:28:04.736 [2024-07-13 08:14:56.210026] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:28:04.736 [2024-07-13 08:14:56.210033] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:28:04.736 [2024-07-13 08:14:56.210045] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:04.737 [2024-07-13 08:14:56.210052] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:04.737 [2024-07-13 08:14:56.210059] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd25ae0) 00:28:04.737 [2024-07-13 08:14:56.210069] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.737 [2024-07-13 08:14:56.210091] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd7c240, cid 0, qid 0 00:28:04.737 [2024-07-13 08:14:56.210250] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:04.737 [2024-07-13 08:14:56.210263] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:04.737 [2024-07-13 08:14:56.210270] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:04.737 [2024-07-13 08:14:56.210277] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd7c240) on tqpair=0xd25ae0 00:28:04.737 [2024-07-13 
08:14:56.210285] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:28:04.737 [2024-07-13 08:14:56.210301] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:04.737 [2024-07-13 08:14:56.210310] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:04.737 [2024-07-13 08:14:56.210316] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd25ae0) 00:28:04.737 [2024-07-13 08:14:56.210327] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.737 [2024-07-13 08:14:56.210347] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd7c240, cid 0, qid 0 00:28:04.737 [2024-07-13 08:14:56.210475] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:04.737 [2024-07-13 08:14:56.210487] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:04.737 [2024-07-13 08:14:56.210494] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:04.737 [2024-07-13 08:14:56.210501] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd7c240) on tqpair=0xd25ae0 00:28:04.737 [2024-07-13 08:14:56.210508] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:28:04.737 [2024-07-13 08:14:56.210516] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:28:04.737 [2024-07-13 08:14:56.210529] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:28:04.737 [2024-07-13 08:14:56.210543] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:28:04.737 [2024-07-13 08:14:56.210556] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:04.737 [2024-07-13 08:14:56.210564] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd25ae0) 00:28:04.737 [2024-07-13 08:14:56.210574] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.737 [2024-07-13 08:14:56.210595] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd7c240, cid 0, qid 0 00:28:04.737 [2024-07-13 08:14:56.210759] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:04.737 [2024-07-13 08:14:56.210771] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:04.737 [2024-07-13 08:14:56.210778] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:04.737 [2024-07-13 08:14:56.210788] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd25ae0): datao=0, datal=4096, cccid=0 00:28:04.737 [2024-07-13 08:14:56.210797] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd7c240) on tqpair(0xd25ae0): expected_datao=0, payload_size=4096 00:28:04.737 [2024-07-13 08:14:56.210804] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:04.737 [2024-07-13 08:14:56.210821] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:04.737 [2024-07-13 08:14:56.210830] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:04.737 
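By this point cnode1 has repeated the same enable handshake the discovery controller went through: disable and wait for CSTS.RDY = 0, write CC.EN = 1, then wait for CSTS.RDY = 1 before moving on to IDENTIFY. Condensed as a sketch, reusing the hypothetical register accessors from the earlier sketch (a real host would also bound each wait by the CAP.TO-derived timeout the log shows as 15000 ms, and poll rather than spin):

#include <stdint.h>

#define NVME_CC_EN    (1u << 0)
#define NVME_CSTS_RDY (1u << 0)

extern uint32_t read_reg32(uint32_t off);              /* hypothetical, as above */
extern void     write_reg32(uint32_t off, uint32_t v);

static int enable_ctrlr(void)
{
    uint32_t cc = read_reg32(0x14) & ~NVME_CC_EN;
    write_reg32(0x14, cc);                       /* "disable and wait for CSTS.RDY = 0" */
    while (read_reg32(0x1c) & NVME_CSTS_RDY)
        ;
    write_reg32(0x14, cc | NVME_CC_EN);          /* "Setting CC.EN = 1"                 */
    while (!(read_reg32(0x1c) & NVME_CSTS_RDY))  /* "wait for CSTS.RDY = 1"             */
        ;
    return 0;                                    /* "controller is ready"               */
}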
[2024-07-13 08:14:56.252875] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:04.737 [2024-07-13 08:14:56.252894] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:04.737 [2024-07-13 08:14:56.252902] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:04.737 [2024-07-13 08:14:56.252908] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd7c240) on tqpair=0xd25ae0 00:28:04.737 [2024-07-13 08:14:56.252920] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:28:04.737 [2024-07-13 08:14:56.252932] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:28:04.737 [2024-07-13 08:14:56.252941] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:28:04.737 [2024-07-13 08:14:56.252948] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:28:04.737 [2024-07-13 08:14:56.252955] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:28:04.737 [2024-07-13 08:14:56.252963] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:28:04.737 [2024-07-13 08:14:56.252978] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:28:04.737 [2024-07-13 08:14:56.252990] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:04.737 [2024-07-13 08:14:56.252998] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:04.737 [2024-07-13 08:14:56.253004] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd25ae0) 00:28:04.737 [2024-07-13 08:14:56.253015] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:04.737 [2024-07-13 08:14:56.253038] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd7c240, cid 0, qid 0 00:28:04.737 [2024-07-13 08:14:56.253195] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:04.737 [2024-07-13 08:14:56.253208] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:04.737 [2024-07-13 08:14:56.253215] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:04.737 [2024-07-13 08:14:56.253221] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd7c240) on tqpair=0xd25ae0 00:28:04.737 [2024-07-13 08:14:56.253232] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:04.737 [2024-07-13 08:14:56.253239] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:04.737 [2024-07-13 08:14:56.253246] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd25ae0) 00:28:04.737 [2024-07-13 08:14:56.253256] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:04.737 [2024-07-13 08:14:56.253266] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:04.737 [2024-07-13 08:14:56.253273] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:04.737 [2024-07-13 08:14:56.253279] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xd25ae0) 
00:28:04.737 [2024-07-13 08:14:56.253288] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:04.737 [2024-07-13 08:14:56.253298] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:04.737 [2024-07-13 08:14:56.253309] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:04.737 [2024-07-13 08:14:56.253316] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xd25ae0) 00:28:04.737 [2024-07-13 08:14:56.253325] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:04.737 [2024-07-13 08:14:56.253335] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:04.737 [2024-07-13 08:14:56.253342] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:04.737 [2024-07-13 08:14:56.253363] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd25ae0) 00:28:04.737 [2024-07-13 08:14:56.253372] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:04.737 [2024-07-13 08:14:56.253380] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:28:04.737 [2024-07-13 08:14:56.253398] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:28:04.737 [2024-07-13 08:14:56.253410] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:04.737 [2024-07-13 08:14:56.253417] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd25ae0) 00:28:04.737 [2024-07-13 08:14:56.253427] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.737 [2024-07-13 08:14:56.253448] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd7c240, cid 0, qid 0 00:28:04.737 [2024-07-13 08:14:56.253474] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd7c3c0, cid 1, qid 0 00:28:04.737 [2024-07-13 08:14:56.253482] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd7c540, cid 2, qid 0 00:28:04.737 [2024-07-13 08:14:56.253490] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd7c6c0, cid 3, qid 0 00:28:04.737 [2024-07-13 08:14:56.253497] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd7c840, cid 4, qid 0 00:28:04.737 [2024-07-13 08:14:56.253671] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:04.737 [2024-07-13 08:14:56.253683] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:04.737 [2024-07-13 08:14:56.253690] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:04.737 [2024-07-13 08:14:56.253697] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd7c840) on tqpair=0xd25ae0 00:28:04.737 [2024-07-13 08:14:56.253705] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:28:04.737 [2024-07-13 08:14:56.253713] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:28:04.737 [2024-07-13 08:14:56.253727] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:28:04.737 [2024-07-13 08:14:56.253739] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:28:04.737 [2024-07-13 08:14:56.253750] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:04.737 [2024-07-13 08:14:56.253772] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:04.737 [2024-07-13 08:14:56.253778] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd25ae0) 00:28:04.737 [2024-07-13 08:14:56.253788] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:04.737 [2024-07-13 08:14:56.253809] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd7c840, cid 4, qid 0 00:28:04.737 [2024-07-13 08:14:56.253986] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:04.737 [2024-07-13 08:14:56.254003] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:04.737 [2024-07-13 08:14:56.254009] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:04.737 [2024-07-13 08:14:56.254022] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd7c840) on tqpair=0xd25ae0 00:28:04.737 [2024-07-13 08:14:56.254087] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:28:04.737 [2024-07-13 08:14:56.254106] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:28:04.737 [2024-07-13 08:14:56.254121] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:04.737 [2024-07-13 08:14:56.254128] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd25ae0) 00:28:04.737 [2024-07-13 08:14:56.254139] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.737 [2024-07-13 08:14:56.254160] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd7c840, cid 4, qid 0 00:28:04.738 [2024-07-13 08:14:56.254395] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:04.738 [2024-07-13 08:14:56.254411] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:04.738 [2024-07-13 08:14:56.254418] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:04.738 [2024-07-13 08:14:56.254425] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd25ae0): datao=0, datal=4096, cccid=4 00:28:04.738 [2024-07-13 08:14:56.254433] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd7c840) on tqpair(0xd25ae0): expected_datao=0, payload_size=4096 00:28:04.738 [2024-07-13 08:14:56.254440] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:04.738 [2024-07-13 08:14:56.254450] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:04.738 [2024-07-13 08:14:56.254458] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:04.738 [2024-07-13 08:14:56.254485] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:04.738 [2024-07-13 08:14:56.254496] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: 
enter: pdu type =5 00:28:04.738 [2024-07-13 08:14:56.254503] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:04.738 [2024-07-13 08:14:56.254509] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd7c840) on tqpair=0xd25ae0 00:28:04.738 [2024-07-13 08:14:56.254526] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:28:04.738 [2024-07-13 08:14:56.254548] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:28:04.738 [2024-07-13 08:14:56.254566] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:28:04.738 [2024-07-13 08:14:56.254579] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:04.738 [2024-07-13 08:14:56.254587] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd25ae0) 00:28:04.738 [2024-07-13 08:14:56.254597] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.738 [2024-07-13 08:14:56.254619] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd7c840, cid 4, qid 0 00:28:04.738 [2024-07-13 08:14:56.254778] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:04.738 [2024-07-13 08:14:56.254790] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:04.738 [2024-07-13 08:14:56.254797] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:04.738 [2024-07-13 08:14:56.254803] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd25ae0): datao=0, datal=4096, cccid=4 00:28:04.738 [2024-07-13 08:14:56.254810] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd7c840) on tqpair(0xd25ae0): expected_datao=0, payload_size=4096 00:28:04.738 [2024-07-13 08:14:56.254818] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:04.738 [2024-07-13 08:14:56.254828] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:04.738 [2024-07-13 08:14:56.254835] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:04.738 [2024-07-13 08:14:56.254873] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:04.738 [2024-07-13 08:14:56.254887] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:04.738 [2024-07-13 08:14:56.254893] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:04.738 [2024-07-13 08:14:56.254900] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd7c840) on tqpair=0xd25ae0 00:28:04.738 [2024-07-13 08:14:56.254923] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:28:04.738 [2024-07-13 08:14:56.254942] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:28:04.738 [2024-07-13 08:14:56.254956] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:04.738 [2024-07-13 08:14:56.254963] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd25ae0) 00:28:04.738 [2024-07-13 08:14:56.254974] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.738 [2024-07-13 08:14:56.254996] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd7c840, cid 4, qid 0 00:28:04.738 [2024-07-13 08:14:56.255126] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:04.738 [2024-07-13 08:14:56.255142] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:04.738 [2024-07-13 08:14:56.255148] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:04.738 [2024-07-13 08:14:56.255155] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd25ae0): datao=0, datal=4096, cccid=4 00:28:04.738 [2024-07-13 08:14:56.255162] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd7c840) on tqpair(0xd25ae0): expected_datao=0, payload_size=4096 00:28:04.738 [2024-07-13 08:14:56.255170] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:04.738 [2024-07-13 08:14:56.255180] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:04.738 [2024-07-13 08:14:56.255187] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:04.738 [2024-07-13 08:14:56.255211] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:04.738 [2024-07-13 08:14:56.255222] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:04.738 [2024-07-13 08:14:56.255229] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:04.738 [2024-07-13 08:14:56.255235] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd7c840) on tqpair=0xd25ae0 00:28:04.738 [2024-07-13 08:14:56.255248] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:28:04.738 [2024-07-13 08:14:56.255263] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:28:04.738 [2024-07-13 08:14:56.255278] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:28:04.738 [2024-07-13 08:14:56.255290] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:28:04.738 [2024-07-13 08:14:56.255298] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:28:04.738 [2024-07-13 08:14:56.255308] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:28:04.738 [2024-07-13 08:14:56.255316] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:28:04.738 [2024-07-13 08:14:56.255324] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:28:04.738 [2024-07-13 08:14:56.255332] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:28:04.738 [2024-07-13 08:14:56.255358] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:04.738 [2024-07-13 08:14:56.255367] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd25ae0) 00:28:04.738 [2024-07-13 08:14:56.255377] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES 
ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.738 [2024-07-13 08:14:56.255389] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:04.738 [2024-07-13 08:14:56.255411] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:04.738 [2024-07-13 08:14:56.255418] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xd25ae0) 00:28:04.738 [2024-07-13 08:14:56.255426] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:28:04.738 [2024-07-13 08:14:56.255450] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd7c840, cid 4, qid 0 00:28:04.738 [2024-07-13 08:14:56.255476] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd7c9c0, cid 5, qid 0 00:28:04.738 [2024-07-13 08:14:56.255642] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:04.738 [2024-07-13 08:14:56.255655] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:04.738 [2024-07-13 08:14:56.255661] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:04.738 [2024-07-13 08:14:56.255668] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd7c840) on tqpair=0xd25ae0 00:28:04.738 [2024-07-13 08:14:56.255679] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:04.738 [2024-07-13 08:14:56.255688] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:04.738 [2024-07-13 08:14:56.255695] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:04.738 [2024-07-13 08:14:56.255701] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd7c9c0) on tqpair=0xd25ae0 00:28:04.738 [2024-07-13 08:14:56.255717] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:04.738 [2024-07-13 08:14:56.255725] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xd25ae0) 00:28:04.738 [2024-07-13 08:14:56.255736] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.738 [2024-07-13 08:14:56.255756] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd7c9c0, cid 5, qid 0 00:28:04.738 [2024-07-13 08:14:56.255992] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:04.738 [2024-07-13 08:14:56.256009] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:04.738 [2024-07-13 08:14:56.256016] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:04.738 [2024-07-13 08:14:56.256023] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd7c9c0) on tqpair=0xd25ae0 00:28:04.738 [2024-07-13 08:14:56.256039] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:04.738 [2024-07-13 08:14:56.256048] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xd25ae0) 00:28:04.738 [2024-07-13 08:14:56.256059] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.738 [2024-07-13 08:14:56.256080] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd7c9c0, cid 5, qid 0 00:28:04.738 [2024-07-13 08:14:56.256207] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:04.738 [2024-07-13 08:14:56.256220] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:28:04.738 [2024-07-13 08:14:56.256226] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:04.738 [2024-07-13 08:14:56.256233] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd7c9c0) on tqpair=0xd25ae0 00:28:04.738 [2024-07-13 08:14:56.256248] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:04.738 [2024-07-13 08:14:56.256258] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xd25ae0) 00:28:04.738 [2024-07-13 08:14:56.256268] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.738 [2024-07-13 08:14:56.256292] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd7c9c0, cid 5, qid 0 00:28:04.738 [2024-07-13 08:14:56.256405] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:04.738 [2024-07-13 08:14:56.256420] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:04.738 [2024-07-13 08:14:56.256427] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:04.738 [2024-07-13 08:14:56.256434] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd7c9c0) on tqpair=0xd25ae0 00:28:04.738 [2024-07-13 08:14:56.256458] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:04.738 [2024-07-13 08:14:56.256468] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xd25ae0) 00:28:04.738 [2024-07-13 08:14:56.256479] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.738 [2024-07-13 08:14:56.256490] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:04.738 [2024-07-13 08:14:56.256498] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd25ae0) 00:28:04.738 [2024-07-13 08:14:56.256507] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.738 [2024-07-13 08:14:56.256518] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:04.738 [2024-07-13 08:14:56.256525] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0xd25ae0) 00:28:04.739 [2024-07-13 08:14:56.256535] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.739 [2024-07-13 08:14:56.256546] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:04.739 [2024-07-13 08:14:56.256553] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xd25ae0) 00:28:04.739 [2024-07-13 08:14:56.256563] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:04.739 [2024-07-13 08:14:56.256599] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd7c9c0, cid 5, qid 0 00:28:04.739 [2024-07-13 08:14:56.256610] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd7c840, cid 4, qid 0 00:28:04.739 [2024-07-13 08:14:56.256618] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd7cb40, cid 6, qid 0 00:28:04.739 [2024-07-13 
08:14:56.256625] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd7ccc0, cid 7, qid 0 00:28:04.739 [2024-07-13 08:14:56.256895] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:04.739 [2024-07-13 08:14:56.256911] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:04.739 [2024-07-13 08:14:56.256918] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:04.739 [2024-07-13 08:14:56.256924] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd25ae0): datao=0, datal=8192, cccid=5 00:28:04.739 [2024-07-13 08:14:56.256932] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd7c9c0) on tqpair(0xd25ae0): expected_datao=0, payload_size=8192 00:28:04.739 [2024-07-13 08:14:56.256940] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:04.739 [2024-07-13 08:14:56.257013] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:04.739 [2024-07-13 08:14:56.257024] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:04.739 [2024-07-13 08:14:56.257033] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:04.739 [2024-07-13 08:14:56.257042] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:04.739 [2024-07-13 08:14:56.257048] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:04.739 [2024-07-13 08:14:56.257055] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd25ae0): datao=0, datal=512, cccid=4 00:28:04.739 [2024-07-13 08:14:56.257062] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd7c840) on tqpair(0xd25ae0): expected_datao=0, payload_size=512 00:28:04.739 [2024-07-13 08:14:56.257073] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:04.739 [2024-07-13 08:14:56.257083] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:04.739 [2024-07-13 08:14:56.257090] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:04.739 [2024-07-13 08:14:56.257099] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:04.739 [2024-07-13 08:14:56.257108] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:04.739 [2024-07-13 08:14:56.257114] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:04.739 [2024-07-13 08:14:56.257120] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd25ae0): datao=0, datal=512, cccid=6 00:28:04.739 [2024-07-13 08:14:56.257128] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd7cb40) on tqpair(0xd25ae0): expected_datao=0, payload_size=512 00:28:04.739 [2024-07-13 08:14:56.257135] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:04.739 [2024-07-13 08:14:56.257144] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:04.739 [2024-07-13 08:14:56.257151] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:04.739 [2024-07-13 08:14:56.257160] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:04.739 [2024-07-13 08:14:56.257169] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:04.739 [2024-07-13 08:14:56.257175] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:04.739 [2024-07-13 08:14:56.257181] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd25ae0): datao=0, datal=4096, cccid=7 00:28:04.739 [2024-07-13 08:14:56.257189] 
nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd7ccc0) on tqpair(0xd25ae0): expected_datao=0, payload_size=4096
00:28:04.739 [2024-07-13 08:14:56.257196] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:28:04.739 [2024-07-13 08:14:56.257205] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:28:04.739 [2024-07-13 08:14:56.257213] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:28:04.739 [2024-07-13 08:14:56.257224] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:28:04.739 [2024-07-13 08:14:56.257234] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:28:04.739 [2024-07-13 08:14:56.257240] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:28:04.739 [2024-07-13 08:14:56.257247] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd7c9c0) on tqpair=0xd25ae0
00:28:04.739 [2024-07-13 08:14:56.257265] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:28:04.739 [2024-07-13 08:14:56.257277] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:28:04.739 [2024-07-13 08:14:56.257283] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:28:04.739 [2024-07-13 08:14:56.257304] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd7c840) on tqpair=0xd25ae0
00:28:04.739 [2024-07-13 08:14:56.257319] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:28:04.739 [2024-07-13 08:14:56.257329] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:28:04.739 [2024-07-13 08:14:56.257335] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:28:04.739 [2024-07-13 08:14:56.257341] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd7cb40) on tqpair=0xd25ae0
00:28:04.739 [2024-07-13 08:14:56.257351] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:28:04.739 [2024-07-13 08:14:56.257360] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:28:04.739 [2024-07-13 08:14:56.257366] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:28:04.739 [2024-07-13 08:14:56.257372] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd7ccc0) on tqpair=0xd25ae0
00:28:04.739 =====================================================
00:28:04.739 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:28:04.739 =====================================================
00:28:04.739 Controller Capabilities/Features
00:28:04.739 ================================
00:28:04.739 Vendor ID: 8086
00:28:04.739 Subsystem Vendor ID: 8086
00:28:04.739 Serial Number: SPDK00000000000001
00:28:04.739 Model Number: SPDK bdev Controller
00:28:04.739 Firmware Version: 24.09
00:28:04.739 Recommended Arb Burst: 6
00:28:04.739 IEEE OUI Identifier: e4 d2 5c
00:28:04.739 Multi-path I/O
00:28:04.739 May have multiple subsystem ports: Yes
00:28:04.739 May have multiple controllers: Yes
00:28:04.739 Associated with SR-IOV VF: No
00:28:04.739 Max Data Transfer Size: 131072
00:28:04.739 Max Number of Namespaces: 32
00:28:04.739 Max Number of I/O Queues: 127
00:28:04.739 NVMe Specification Version (VS): 1.3
00:28:04.739 NVMe Specification Version (Identify): 1.3
00:28:04.739 Maximum Queue Entries: 128
00:28:04.739 Contiguous Queues Required: Yes
00:28:04.739 Arbitration Mechanisms Supported
00:28:04.739 Weighted Round Robin: Not Supported
00:28:04.739 Vendor Specific: Not Supported
00:28:04.739 Reset Timeout: 15000 ms
00:28:04.739 Doorbell Stride: 4 bytes
00:28:04.739 NVM Subsystem Reset: Not Supported
00:28:04.739 Command Sets Supported
00:28:04.739 NVM Command Set: Supported
00:28:04.739 Boot Partition: Not Supported
00:28:04.739 Memory Page Size Minimum: 4096 bytes
00:28:04.739 Memory Page Size Maximum: 4096 bytes
00:28:04.739 Persistent Memory Region: Not Supported
00:28:04.739 Optional Asynchronous Events Supported
00:28:04.739 Namespace Attribute Notices: Supported
00:28:04.739 Firmware Activation Notices: Not Supported
00:28:04.739 ANA Change Notices: Not Supported
00:28:04.739 PLE Aggregate Log Change Notices: Not Supported
00:28:04.739 LBA Status Info Alert Notices: Not Supported
00:28:04.739 EGE Aggregate Log Change Notices: Not Supported
00:28:04.739 Normal NVM Subsystem Shutdown event: Not Supported
00:28:04.739 Zone Descriptor Change Notices: Not Supported
00:28:04.739 Discovery Log Change Notices: Not Supported
00:28:04.739 Controller Attributes
00:28:04.739 128-bit Host Identifier: Supported
00:28:04.739 Non-Operational Permissive Mode: Not Supported
00:28:04.739 NVM Sets: Not Supported
00:28:04.739 Read Recovery Levels: Not Supported
00:28:04.739 Endurance Groups: Not Supported
00:28:04.739 Predictable Latency Mode: Not Supported
00:28:04.739 Traffic Based Keep ALive: Not Supported
00:28:04.739 Namespace Granularity: Not Supported
00:28:04.739 SQ Associations: Not Supported
00:28:04.739 UUID List: Not Supported
00:28:04.739 Multi-Domain Subsystem: Not Supported
00:28:04.739 Fixed Capacity Management: Not Supported
00:28:04.739 Variable Capacity Management: Not Supported
00:28:04.739 Delete Endurance Group: Not Supported
00:28:04.739 Delete NVM Set: Not Supported
00:28:04.739 Extended LBA Formats Supported: Not Supported
00:28:04.739 Flexible Data Placement Supported: Not Supported
00:28:04.739 
00:28:04.739 Controller Memory Buffer Support
00:28:04.739 ================================
00:28:04.739 Supported: No
00:28:04.739 
00:28:04.739 Persistent Memory Region Support
00:28:04.739 ================================
00:28:04.739 Supported: No
00:28:04.739 
00:28:04.739 Admin Command Set Attributes
00:28:04.739 ============================
00:28:04.739 Security Send/Receive: Not Supported
00:28:04.739 Format NVM: Not Supported
00:28:04.739 Firmware Activate/Download: Not Supported
00:28:04.739 Namespace Management: Not Supported
00:28:04.739 Device Self-Test: Not Supported
00:28:04.739 Directives: Not Supported
00:28:04.739 NVMe-MI: Not Supported
00:28:04.739 Virtualization Management: Not Supported
00:28:04.739 Doorbell Buffer Config: Not Supported
00:28:04.739 Get LBA Status Capability: Not Supported
00:28:04.739 Command & Feature Lockdown Capability: Not Supported
00:28:04.739 Abort Command Limit: 4
00:28:04.739 Async Event Request Limit: 4
00:28:04.739 Number of Firmware Slots: N/A
00:28:04.739 Firmware Slot 1 Read-Only: N/A
00:28:04.739 Firmware Activation Without Reset: N/A
00:28:04.739 Multiple Update Detection Support: N/A
00:28:04.739 Firmware Update Granularity: No Information Provided
00:28:04.739 Per-Namespace SMART Log: No
00:28:04.739 Asymmetric Namespace Access Log Page: Not Supported
00:28:04.739 Subsystem NQN: nqn.2016-06.io.spdk:cnode1
00:28:04.739 Command Effects Log Page: Supported
00:28:04.739 Get Log Page Extended Data: Supported
00:28:04.739 Telemetry Log Pages: Not Supported
00:28:04.739 Persistent Event Log Pages: Not Supported
00:28:04.739 Supported Log Pages Log Page: May Support
00:28:04.739 Commands Supported & Effects Log Page: Not Supported
00:28:04.739 Feature Identifiers & Effects Log Page:May Support
00:28:04.739 NVMe-MI Commands & Effects Log Page: May Support
00:28:04.739 Data Area 4 for Telemetry Log: Not Supported
00:28:04.739 Error Log Page Entries Supported: 128
00:28:04.740 Keep Alive: Supported
00:28:04.740 Keep Alive Granularity: 10000 ms
00:28:04.740 
00:28:04.740 NVM Command Set Attributes
00:28:04.740 ==========================
00:28:04.740 Submission Queue Entry Size
00:28:04.740 Max: 64
00:28:04.740 Min: 64
00:28:04.740 Completion Queue Entry Size
00:28:04.740 Max: 16
00:28:04.740 Min: 16
00:28:04.740 Number of Namespaces: 32
00:28:04.740 Compare Command: Supported
00:28:04.740 Write Uncorrectable Command: Not Supported
00:28:04.740 Dataset Management Command: Supported
00:28:04.740 Write Zeroes Command: Supported
00:28:04.740 Set Features Save Field: Not Supported
00:28:04.740 Reservations: Supported
00:28:04.740 Timestamp: Not Supported
00:28:04.740 Copy: Supported
00:28:04.740 Volatile Write Cache: Present
00:28:04.740 Atomic Write Unit (Normal): 1
00:28:04.740 Atomic Write Unit (PFail): 1
00:28:04.740 Atomic Compare & Write Unit: 1
00:28:04.740 Fused Compare & Write: Supported
00:28:04.740 Scatter-Gather List
00:28:04.740 SGL Command Set: Supported
00:28:04.740 SGL Keyed: Supported
00:28:04.740 SGL Bit Bucket Descriptor: Not Supported
00:28:04.740 SGL Metadata Pointer: Not Supported
00:28:04.740 Oversized SGL: Not Supported
00:28:04.740 SGL Metadata Address: Not Supported
00:28:04.740 SGL Offset: Supported
00:28:04.740 Transport SGL Data Block: Not Supported
00:28:04.740 Replay Protected Memory Block: Not Supported
00:28:04.740 
00:28:04.740 Firmware Slot Information
00:28:04.740 =========================
00:28:04.740 Active slot: 1
00:28:04.740 Slot 1 Firmware Revision: 24.09
00:28:04.740 
00:28:04.740 
00:28:04.740 Commands Supported and Effects
00:28:04.740 ==============================
00:28:04.740 Admin Commands
00:28:04.740 --------------
00:28:04.740 Get Log Page (02h): Supported
00:28:04.740 Identify (06h): Supported
00:28:04.740 Abort (08h): Supported
00:28:04.740 Set Features (09h): Supported
00:28:04.740 Get Features (0Ah): Supported
00:28:04.740 Asynchronous Event Request (0Ch): Supported
00:28:04.740 Keep Alive (18h): Supported
00:28:04.740 I/O Commands
00:28:04.740 ------------
00:28:04.740 Flush (00h): Supported LBA-Change
00:28:04.740 Write (01h): Supported LBA-Change
00:28:04.740 Read (02h): Supported
00:28:04.740 Compare (05h): Supported
00:28:04.740 Write Zeroes (08h): Supported LBA-Change
00:28:04.740 Dataset Management (09h): Supported LBA-Change
00:28:04.740 Copy (19h): Supported LBA-Change
00:28:04.740 
00:28:04.740 Error Log
00:28:04.740 =========
00:28:04.740 
00:28:04.740 Arbitration
00:28:04.740 ===========
00:28:04.740 Arbitration Burst: 1
00:28:04.740 
00:28:04.740 Power Management
00:28:04.740 ================
00:28:04.740 Number of Power States: 1
00:28:04.740 Current Power State: Power State #0
00:28:04.740 Power State #0:
00:28:04.740 Max Power: 0.00 W
00:28:04.740 Non-Operational State: Operational
00:28:04.740 Entry Latency: Not Reported
00:28:04.740 Exit Latency: Not Reported
00:28:04.740 Relative Read Throughput: 0
00:28:04.740 Relative Read Latency: 0
00:28:04.740 Relative Write Throughput: 0
00:28:04.740 Relative Write Latency: 0
00:28:04.740 Idle Power: Not Reported
00:28:04.740 Active Power: Not Reported
00:28:04.740 Non-Operational Permissive Mode: Not Supported
00:28:04.740 
00:28:04.740 Health Information
00:28:04.740 ==================
00:28:04.740 Critical Warnings:
00:28:04.740 Available Spare Space: OK
00:28:04.740 Temperature: OK
00:28:04.740 Device Reliability: OK
00:28:04.740 Read Only: No
00:28:04.740 Volatile Memory Backup: OK
00:28:04.740 Current Temperature: 0 Kelvin (-273 Celsius)
00:28:04.740 Temperature Threshold: 0 Kelvin (-273 Celsius)
00:28:04.740 Available Spare: 0%
00:28:04.740 Available Spare Threshold: 0%
[2024-07-13 08:14:56.257483] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:28:04.740 [2024-07-13 08:14:56.257494] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xd25ae0)
00:28:04.740 [2024-07-13 08:14:56.257504] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:04.740 [2024-07-13 08:14:56.257528] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd7ccc0, cid 7, qid 0
00:28:04.740 [2024-07-13 08:14:56.257704] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:28:04.740 [2024-07-13 08:14:56.257717] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:28:04.740 [2024-07-13 08:14:56.257724] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:28:04.740 [2024-07-13 08:14:56.257731] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd7ccc0) on tqpair=0xd25ae0
00:28:04.740 [2024-07-13 08:14:56.257779] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD
00:28:04.740 [2024-07-13 08:14:56.257799] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd7c240) on tqpair=0xd25ae0
00:28:04.740 [2024-07-13 08:14:56.257809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:04.740 [2024-07-13 08:14:56.257818] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd7c3c0) on tqpair=0xd25ae0
00:28:04.740 [2024-07-13 08:14:56.257826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:04.740 [2024-07-13 08:14:56.257848] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd7c540) on tqpair=0xd25ae0
00:28:04.740 [2024-07-13 08:14:56.257856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:04.740 [2024-07-13 08:14:56.257864] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd7c6c0) on tqpair=0xd25ae0
00:28:04.740 [2024-07-13 08:14:56.257879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:04.740 [2024-07-13 08:14:56.257891] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:28:04.740 [2024-07-13 08:14:56.257913] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:28:04.740 [2024-07-13 08:14:56.257920] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd25ae0)
00:28:04.740 [2024-07-13 08:14:56.257931] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:04.740 [2024-07-13 08:14:56.257954] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd7c6c0, cid 3, qid 0
00:28:04.740 [2024-07-13 08:14:56.258107] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:28:04.740 [2024-07-13 08:14:56.258122] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:28:04.740 [2024-07-13 08:14:56.258129] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:28:04.740 [2024-07-13 08:14:56.258136] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd7c6c0) on tqpair=0xd25ae0
00:28:04.740 [2024-07-13 08:14:56.258147] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:28:04.740 [2024-07-13 08:14:56.258154] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:28:04.740 [2024-07-13 08:14:56.258161] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd25ae0)
00:28:04.740 [2024-07-13 08:14:56.258171] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:04.740 [2024-07-13 08:14:56.258197] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd7c6c0, cid 3, qid 0
00:28:04.740 [2024-07-13 08:14:56.258334] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:28:04.740 [2024-07-13 08:14:56.258347] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:28:04.740 [2024-07-13 08:14:56.258354] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:28:04.740 [2024-07-13 08:14:56.258361] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd7c6c0) on tqpair=0xd25ae0
00:28:04.740 [2024-07-13 08:14:56.258368] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us
00:28:04.740 [2024-07-13 08:14:56.258376] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms
00:28:04.740 [2024-07-13 08:14:56.258391] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:28:04.740 [2024-07-13 08:14:56.258404] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:28:04.740 [2024-07-13 08:14:56.258411] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd25ae0)
00:28:04.740 [2024-07-13 08:14:56.258422] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:04.740 [2024-07-13 08:14:56.258442] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd7c6c0, cid 3, qid 0
00:28:04.741 [2024-07-13 08:14:56.258555] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:28:04.741 [2024-07-13 08:14:56.258570] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:28:04.741 [2024-07-13 08:14:56.258577] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:28:04.741 [2024-07-13 08:14:56.258584] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd7c6c0) on tqpair=0xd25ae0
00:28:04.741 [2024-07-13 08:14:56.258600] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:28:04.741 [2024-07-13 08:14:56.258609] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:28:04.741 [2024-07-13 08:14:56.258616] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd25ae0)
00:28:04.741 [2024-07-13 08:14:56.258626] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:04.741 [2024-07-13 08:14:56.258647] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd7c6c0, cid 3, qid 0
00:28:04.741 [2024-07-13 08:14:56.258772] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:28:04.741 [2024-07-13 08:14:56.258784] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:28:04.741 [2024-07-13 08:14:56.258791] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:28:04.741 [2024-07-13 08:14:56.258798] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd7c6c0) on tqpair=0xd25ae0
00:28:04.741 [2024-07-13 08:14:56.258813] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:28:04.741 [2024-07-13 08:14:56.258822] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:28:04.741 [2024-07-13 08:14:56.258829] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd25ae0)
00:28:04.741 [2024-07-13 08:14:56.258839] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:04.741 [2024-07-13 08:14:56.258859] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd7c6c0, cid 3, qid 0
00:28:04.741 [2024-07-13 08:14:56.262887] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:28:04.741 [2024-07-13 08:14:56.262902] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:28:04.741 [2024-07-13 08:14:56.262909] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:28:04.741 [2024-07-13 08:14:56.262915] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd7c6c0) on tqpair=0xd25ae0
00:28:04.741 [2024-07-13 08:14:56.262933] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:28:04.741 [2024-07-13 08:14:56.262942] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:28:04.741 [2024-07-13 08:14:56.262949] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd25ae0)
00:28:04.741 [2024-07-13 08:14:56.262959] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:04.741 [2024-07-13 08:14:56.262981] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd7c6c0, cid 3, qid 0
00:28:04.741 [2024-07-13 08:14:56.263137] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:28:04.741 [2024-07-13 08:14:56.263153] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:28:04.741 [2024-07-13 08:14:56.263159] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:28:04.741 [2024-07-13 08:14:56.263166] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xd7c6c0) on tqpair=0xd25ae0
00:28:04.741 [2024-07-13 08:14:56.263180] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 4 milliseconds
00:28:04.741 Life Percentage Used: 0%
00:28:04.741 Data Units Read: 0
00:28:04.741 Data Units Written: 0
00:28:04.741 Host Read Commands: 0
00:28:04.741 Host Write Commands: 0
00:28:04.741 Controller Busy Time: 0 minutes
00:28:04.741 Power Cycles: 0
00:28:04.741 Power On Hours: 0 hours
00:28:04.741 Unsafe Shutdowns: 0
00:28:04.741 Unrecoverable Media Errors: 0
00:28:04.741 Lifetime Error Log Entries: 0
00:28:04.741 Warning Temperature Time: 0 minutes
00:28:04.741 Critical Temperature Time: 0 minutes
00:28:04.741 
00:28:04.741 Number of Queues
00:28:04.741 ================
00:28:04.741 Number of I/O Submission Queues: 127
00:28:04.741 Number of I/O Completion Queues: 127
00:28:04.741 
00:28:04.741 Active Namespaces
00:28:04.741 =================
00:28:04.741 Namespace ID:1
00:28:04.741 Error Recovery Timeout: Unlimited
00:28:04.741 Command Set Identifier: NVM (00h)
00:28:04.741 Deallocate: Supported
00:28:04.741 Deallocated/Unwritten Error: Not Supported
00:28:04.741 Deallocated Read Value: Unknown
00:28:04.741 Deallocate in Write Zeroes: Not Supported
00:28:04.741 Deallocated Guard Field: 0xFFFF
00:28:04.741 Flush: Supported
00:28:04.741 Reservation: Supported
00:28:04.741 Namespace Sharing Capabilities: Multiple Controllers
00:28:04.741 Size (in LBAs): 131072 (0GiB)
00:28:04.741 Capacity (in LBAs): 131072 (0GiB)
00:28:04.741 Utilization (in LBAs): 131072 (0GiB)
00:28:04.741 NGUID: ABCDEF0123456789ABCDEF0123456789
00:28:04.741 EUI64: ABCDEF0123456789
00:28:04.741 UUID: bca4e04a-0257-4388-be37-ea86a0caedba
00:28:04.741 Thin Provisioning: Not Supported
00:28:04.741 Per-NS Atomic Units: Yes
00:28:04.741 Atomic Boundary Size (Normal): 0
00:28:04.741 Atomic Boundary Size (PFail): 0
00:28:04.741 Atomic Boundary Offset: 0
00:28:04.741 Maximum Single Source Range Length: 65535
00:28:04.741 Maximum Copy Length: 65535
00:28:04.741 Maximum Source Range Count: 1
00:28:04.741 NGUID/EUI64 Never Reused: No
00:28:04.741 Namespace Write Protected: No
00:28:04.741 Number of LBA Formats: 1
00:28:04.741 Current LBA Format: LBA Format #00
00:28:04.741 LBA Format #00: Data Size: 512 Metadata Size: 0
00:28:04.741 
00:28:04.741 08:14:56 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync
00:28:04.741 08:14:56 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:28:04.741 08:14:56 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable
00:28:04.741 08:14:56 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:28:04.741 08:14:56 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:28:04.741 08:14:56 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT
00:28:04.741 08:14:56 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini
00:28:04.741 08:14:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup
00:28:04.741 08:14:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync
00:28:04.741 08:14:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:28:04.741 08:14:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e
00:28:04.741 08:14:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20}
00:28:04.741 08:14:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:28:04.741 rmmod nvme_tcp
00:28:04.741 rmmod nvme_fabrics
00:28:04.741 rmmod nvme_keyring
00:28:04.741 08:14:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:28:04.741 08:14:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e
00:28:04.741 08:14:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0
00:28:04.741 08:14:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 2051412 ']'
00:28:04.741 08:14:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 2051412
00:28:04.741 08:14:56 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 2051412 ']'
00:28:04.741 08:14:56 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 2051412
00:28:04.741 08:14:56 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # uname
00:28:04.741 08:14:56 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:28:04.741 08:14:56 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2051412
00:28:04.741 08:14:56 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:28:04.741 08:14:56 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:28:04.741 08:14:56 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2051412'
00:28:04.741 killing process with pid 2051412
00:28:04.741 08:14:56 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # kill 2051412
00:28:04.741 08:14:56 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@972 -- # wait 2051412
00:28:05.000 08:14:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:28:05.000 08:14:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:28:05.000 08:14:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:28:05.000 08:14:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:28:05.000 08:14:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns
00:28:05.000 08:14:56 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:28:05.000 08:14:56 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:28:05.000 08:14:56 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:28:07.563 08:14:58 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:28:07.563 
00:28:07.563 real 0m5.175s
00:28:07.563 user 0m4.075s
00:28:07.563 sys 0m1.766s
00:28:07.563 08:14:58 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable
00:28:07.563 08:14:58 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:28:07.563 ************************************
00:28:07.563 END TEST nvmf_identify
00:28:07.563 ************************************
00:28:07.563 08:14:58 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:28:07.563 08:14:58 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp
00:28:07.563 08:14:58 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:28:07.563 08:14:58 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:28:07.563 08:14:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:28:07.563 ************************************
00:28:07.563 START TEST nvmf_perf
00:28:07.563 ************************************
00:28:07.563 08:14:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp
00:28:07.563 * Looking for test storage...
00:28:07.563 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:07.563 08:14:58 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:07.563 08:14:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:28:07.563 08:14:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:07.563 08:14:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:07.563 08:14:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:07.563 08:14:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:07.563 08:14:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:07.563 08:14:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:07.563 08:14:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:07.563 08:14:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:07.563 08:14:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:07.563 08:14:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:07.563 08:14:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:07.563 08:14:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:07.563 08:14:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:07.563 08:14:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:07.563 08:14:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:07.563 08:14:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:07.563 08:14:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:07.563 08:14:58 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:07.563 08:14:58 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:07.563 08:14:58 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:07.563 08:14:58 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:07.564 08:14:58 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:07.564 08:14:58 
nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:07.564 08:14:58 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:28:07.564 08:14:58 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:07.564 08:14:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:28:07.564 08:14:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:07.564 08:14:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:07.564 08:14:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:07.564 08:14:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:07.564 08:14:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:07.564 08:14:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:07.564 08:14:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:07.564 08:14:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:07.564 08:14:58 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:28:07.564 08:14:58 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:28:07.564 08:14:58 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:07.564 08:14:58 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:28:07.564 08:14:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:07.564 08:14:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:07.564 08:14:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:07.564 08:14:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:07.564 08:14:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:07.564 08:14:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:07.564 08:14:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:07.564 08:14:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:07.564 08:14:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:07.564 08:14:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:07.564 08:14:58 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:28:07.564 08:14:58 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set 
+x 00:28:08.944 08:15:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:08.944 08:15:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:28:08.944 08:15:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:08.944 08:15:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:08.944 08:15:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:08.944 08:15:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:08.944 08:15:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:08.944 08:15:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:28:08.944 08:15:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:08.944 08:15:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:28:08.944 08:15:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:28:08.944 08:15:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:28:08.944 08:15:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:28:08.944 08:15:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:28:08.944 08:15:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:28:08.944 08:15:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:08.944 08:15:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:08.944 08:15:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:08.944 08:15:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:08.944 08:15:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:08.944 08:15:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:08.944 08:15:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:08.944 08:15:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:08.944 08:15:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:08.944 08:15:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:08.945 08:15:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:08.945 08:15:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:08.945 08:15:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:08.945 08:15:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:08.945 08:15:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:08.945 08:15:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:08.945 08:15:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:08.945 08:15:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:08.945 08:15:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:08.945 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:08.945 08:15:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:08.945 08:15:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:08.945 08:15:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 
]] 00:28:08.945 08:15:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:08.945 08:15:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:08.945 08:15:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:08.945 08:15:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:08.945 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:08.945 08:15:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:08.945 08:15:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:08.945 08:15:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:08.945 08:15:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:08.945 08:15:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:08.945 08:15:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:08.945 08:15:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:08.945 08:15:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:08.945 08:15:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:08.945 08:15:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:08.945 08:15:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:08.945 08:15:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:08.945 08:15:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:08.945 08:15:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:08.945 08:15:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:08.945 08:15:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:08.945 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:08.945 08:15:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:08.945 08:15:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:08.945 08:15:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:08.945 08:15:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:08.945 08:15:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:08.945 08:15:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:08.945 08:15:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:08.945 08:15:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:08.945 08:15:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:08.945 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:08.945 08:15:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:08.945 08:15:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:08.945 08:15:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:28:08.945 08:15:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:08.945 08:15:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:08.945 08:15:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:08.945 08:15:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:28:08.945 08:15:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:08.945 08:15:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:08.945 08:15:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:08.945 08:15:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:08.945 08:15:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:08.945 08:15:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:08.945 08:15:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:08.945 08:15:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:08.945 08:15:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:08.945 08:15:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:08.945 08:15:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:08.945 08:15:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:08.945 08:15:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:08.945 08:15:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:09.203 08:15:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:09.203 08:15:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:09.203 08:15:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:09.203 08:15:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:09.203 08:15:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:09.203 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:09.203 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.232 ms 00:28:09.203 00:28:09.203 --- 10.0.0.2 ping statistics --- 00:28:09.203 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:09.203 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:28:09.203 08:15:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:09.203 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
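The target/initiator split above needs no virtual links: one port of the dual-port NIC is moved into a private namespace so the two ports talk over the physical wire. A minimal standalone sketch of the same plumbing, assuming the ice-driver interface names cvl_0_0/cvl_0_1 from this run:

# Run as root; names and addresses match the log above, not a general recipe.
ip netns add cvl_0_0_ns_spdk                  # private namespace for the target side
ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # move the target port out of the default ns
ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator keeps 10.0.0.1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
ping -c 1 10.0.0.2                            # sanity check: initiator -> target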
00:28:09.203 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:28:09.203 00:28:09.203 --- 10.0.0.1 ping statistics --- 00:28:09.203 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:09.203 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:28:09.203 08:15:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:09.203 08:15:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:28:09.203 08:15:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:09.203 08:15:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:09.203 08:15:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:09.203 08:15:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:09.203 08:15:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:09.203 08:15:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:09.203 08:15:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:09.203 08:15:00 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:28:09.203 08:15:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:09.203 08:15:00 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:09.203 08:15:00 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:09.203 08:15:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=2053460 00:28:09.203 08:15:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:09.203 08:15:00 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 2053460 00:28:09.203 08:15:00 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 2053460 ']' 00:28:09.203 08:15:00 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:09.203 08:15:00 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:09.203 08:15:00 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:09.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:09.203 08:15:00 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:09.203 08:15:00 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:09.203 [2024-07-13 08:15:00.817376] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:28:09.203 [2024-07-13 08:15:00.817451] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:09.204 EAL: No free 2048 kB hugepages reported on node 1 00:28:09.204 [2024-07-13 08:15:00.890237] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:09.461 [2024-07-13 08:15:00.986453] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:09.461 [2024-07-13 08:15:00.986512] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
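nvmfappstart then reduces to launching nvmf_tgt inside that namespace and waiting for its RPC socket; a rough equivalent, with the poll loop only approximating waitforlisten and paths assumed relative to an spdk checkout:

NS=(ip netns exec cvl_0_0_ns_spdk)
"${NS[@]}" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &   # -m 0xF: run reactors on cores 0-3
nvmfpid=$!
# Poll /var/tmp/spdk.sock until the app answers RPCs (sketch of waitforlisten).
until ./scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done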
00:28:09.461 [2024-07-13 08:15:00.986537] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:09.461 [2024-07-13 08:15:00.986551] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:09.461 [2024-07-13 08:15:00.986563] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:09.461 [2024-07-13 08:15:00.986676] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:09.461 [2024-07-13 08:15:00.986731] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:09.461 [2024-07-13 08:15:00.986803] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:09.461 [2024-07-13 08:15:00.986805] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:09.461 08:15:01 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:09.461 08:15:01 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:28:09.461 08:15:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:09.461 08:15:01 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:09.461 08:15:01 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:09.461 08:15:01 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:09.461 08:15:01 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:28:09.461 08:15:01 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:28:12.743 08:15:04 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:28:12.743 08:15:04 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:28:13.002 08:15:04 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:88:00.0 00:28:13.002 08:15:04 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:28:13.259 08:15:04 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:28:13.259 08:15:04 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:88:00.0 ']' 00:28:13.259 08:15:04 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:28:13.259 08:15:04 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:28:13.259 08:15:04 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:28:13.518 [2024-07-13 08:15:05.051958] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:13.518 08:15:05 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:13.776 08:15:05 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:28:13.776 08:15:05 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:14.033 08:15:05 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:28:14.033 08:15:05 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:28:14.290 08:15:05 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:14.547 [2024-07-13 08:15:06.127861] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:14.547 08:15:06 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:14.806 08:15:06 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:88:00.0 ']' 00:28:14.806 08:15:06 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:28:14.806 08:15:06 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:28:14.806 08:15:06 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:28:16.181 Initializing NVMe Controllers 00:28:16.181 Attached to NVMe Controller at 0000:88:00.0 [8086:0a54] 00:28:16.181 Associating PCIE (0000:88:00.0) NSID 1 with lcore 0 00:28:16.181 Initialization complete. Launching workers. 00:28:16.181 ======================================================== 00:28:16.181 Latency(us) 00:28:16.181 Device Information : IOPS MiB/s Average min max 00:28:16.181 PCIE (0000:88:00.0) NSID 1 from core 0: 84124.05 328.61 380.01 43.17 4342.72 00:28:16.181 ======================================================== 00:28:16.181 Total : 84124.05 328.61 380.01 43.17 4342.72 00:28:16.181 00:28:16.181 08:15:07 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:16.181 EAL: No free 2048 kB hugepages reported on node 1 00:28:17.615 Initializing NVMe Controllers 00:28:17.615 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:17.615 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:17.615 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:17.615 Initialization complete. Launching workers. 
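Condensed, the target bring-up in this test is a handful of RPCs against that socket, with the same arguments as logged above (rpc.py path shortened for readability):

rpc=./scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o
$rpc bdev_malloc_create 64 512                 # 64 MiB bdev, 512 B blocks -> Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a: allow any host
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420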
00:28:17.615 ======================================================== 00:28:17.615 Latency(us) 00:28:17.615 Device Information : IOPS MiB/s Average min max 00:28:17.615 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 105.84 0.41 9673.90 211.88 45911.56 00:28:17.615 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 57.91 0.23 17542.08 6559.31 47901.49 00:28:17.616 ======================================================== 00:28:17.616 Total : 163.75 0.64 12456.55 211.88 47901.49 00:28:17.616 00:28:17.616 08:15:08 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:17.616 EAL: No free 2048 kB hugepages reported on node 1 00:28:18.988 Initializing NVMe Controllers 00:28:18.988 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:18.988 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:18.988 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:18.988 Initialization complete. Launching workers. 00:28:18.988 ======================================================== 00:28:18.988 Latency(us) 00:28:18.988 Device Information : IOPS MiB/s Average min max 00:28:18.988 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8489.94 33.16 3787.67 562.10 7593.21 00:28:18.988 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3877.97 15.15 8288.79 5788.78 16174.04 00:28:18.988 ======================================================== 00:28:18.988 Total : 12367.91 48.31 5199.00 562.10 16174.04 00:28:18.988 00:28:18.988 08:15:10 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:28:18.988 08:15:10 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:28:18.988 08:15:10 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:18.988 EAL: No free 2048 kB hugepages reported on node 1 00:28:21.520 Initializing NVMe Controllers 00:28:21.520 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:21.520 Controller IO queue size 128, less than required. 00:28:21.520 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:21.520 Controller IO queue size 128, less than required. 00:28:21.520 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:21.520 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:21.520 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:21.520 Initialization complete. Launching workers. 
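Each run in this series varies only the workload knobs of spdk_nvme_perf; annotated, the invocation pattern reads:

./build/bin/spdk_nvme_perf \
    -q 32     `# queue depth per namespace`  \
    -o 4096   `# I/O size in bytes`          \
    -w randrw `# random mixed read/write`    \
    -M 50     `# read percentage of the mix` \
    -t 1      `# run time in seconds`        \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'   # target to attach to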
00:28:21.520 ======================================================== 00:28:21.520 Latency(us) 00:28:21.520 Device Information : IOPS MiB/s Average min max 00:28:21.520 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1176.73 294.18 111683.39 64717.30 172634.68 00:28:21.520 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 569.88 142.47 232092.00 109514.96 350502.03 00:28:21.520 ======================================================== 00:28:21.520 Total : 1746.61 436.65 150970.27 64717.30 350502.03 00:28:21.520 00:28:21.520 08:15:12 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:28:21.520 EAL: No free 2048 kB hugepages reported on node 1 00:28:21.785 No valid NVMe controllers or AIO or URING devices found 00:28:21.785 Initializing NVMe Controllers 00:28:21.785 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:21.785 Controller IO queue size 128, less than required. 00:28:21.785 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:21.785 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:28:21.785 Controller IO queue size 128, less than required. 00:28:21.785 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:21.785 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:28:21.785 WARNING: Some requested NVMe devices were skipped 00:28:21.785 08:15:13 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:28:21.785 EAL: No free 2048 kB hugepages reported on node 1 00:28:24.359 Initializing NVMe Controllers 00:28:24.359 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:24.360 Controller IO queue size 128, less than required. 00:28:24.360 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:24.360 Controller IO queue size 128, less than required. 00:28:24.360 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:24.360 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:24.360 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:24.360 Initialization complete. Launching workers. 
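The skipped-namespace warnings just below are plain arithmetic: an I/O must span a whole number of logical blocks, and 36964 bytes does not with 512-byte sectors:

echo $(( 36964 / 512 )) $(( 36964 % 512 ))   # 72 blocks plus 100 stray bytes -> ns removed from test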
00:28:24.360 00:28:24.360 ==================== 00:28:24.360 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:28:24.360 TCP transport: 00:28:24.360 polls: 22533 00:28:24.360 idle_polls: 10497 00:28:24.360 sock_completions: 12036 00:28:24.360 nvme_completions: 4757 00:28:24.360 submitted_requests: 7168 00:28:24.360 queued_requests: 1 00:28:24.360 00:28:24.360 ==================== 00:28:24.360 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:28:24.360 TCP transport: 00:28:24.360 polls: 19107 00:28:24.360 idle_polls: 7028 00:28:24.360 sock_completions: 12079 00:28:24.360 nvme_completions: 4905 00:28:24.360 submitted_requests: 7324 00:28:24.360 queued_requests: 1 00:28:24.360 ======================================================== 00:28:24.360 Latency(us) 00:28:24.360 Device Information : IOPS MiB/s Average min max 00:28:24.360 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1188.82 297.21 111139.91 56254.34 194223.65 00:28:24.360 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1225.82 306.45 104963.47 45892.30 159934.37 00:28:24.360 ======================================================== 00:28:24.360 Total : 2414.64 603.66 108004.38 45892.30 194223.65 00:28:24.360 00:28:24.360 08:15:15 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:28:24.360 08:15:15 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:24.360 08:15:16 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:28:24.360 08:15:16 nvmf_tcp.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:88:00.0 ']' 00:28:24.360 08:15:16 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:28:28.547 08:15:19 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # ls_guid=215cc797-ca44-458f-a521-bf6a0e50f522 00:28:28.547 08:15:19 nvmf_tcp.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 215cc797-ca44-458f-a521-bf6a0e50f522 00:28:28.547 08:15:19 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=215cc797-ca44-458f-a521-bf6a0e50f522 00:28:28.547 08:15:19 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:28:28.547 08:15:19 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:28:28.548 08:15:19 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:28:28.548 08:15:19 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:28.548 08:15:19 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:28:28.548 { 00:28:28.548 "uuid": "215cc797-ca44-458f-a521-bf6a0e50f522", 00:28:28.548 "name": "lvs_0", 00:28:28.548 "base_bdev": "Nvme0n1", 00:28:28.548 "total_data_clusters": 238234, 00:28:28.548 "free_clusters": 238234, 00:28:28.548 "block_size": 512, 00:28:28.548 "cluster_size": 4194304 00:28:28.548 } 00:28:28.548 ]' 00:28:28.548 08:15:19 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="215cc797-ca44-458f-a521-bf6a0e50f522") .free_clusters' 00:28:28.548 08:15:19 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=238234 00:28:28.548 08:15:19 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="215cc797-ca44-458f-a521-bf6a0e50f522") .cluster_size' 00:28:28.548 08:15:19 
nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:28:28.548 08:15:19 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=952936 00:28:28.548 08:15:19 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 952936 00:28:28.548 952936 00:28:28.548 08:15:19 nvmf_tcp.nvmf_perf -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']' 00:28:28.548 08:15:19 nvmf_tcp.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:28:28.548 08:15:19 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 215cc797-ca44-458f-a521-bf6a0e50f522 lbd_0 20480 00:28:28.548 08:15:20 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # lb_guid=bd8b0dd2-f3aa-469d-8f8d-bc0e18094ff2 00:28:28.548 08:15:20 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore bd8b0dd2-f3aa-469d-8f8d-bc0e18094ff2 lvs_n_0 00:28:29.481 08:15:20 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=d0f93087-0ab7-4795-9940-88362c8d01ae 00:28:29.481 08:15:20 nvmf_tcp.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb d0f93087-0ab7-4795-9940-88362c8d01ae 00:28:29.481 08:15:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=d0f93087-0ab7-4795-9940-88362c8d01ae 00:28:29.481 08:15:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:28:29.481 08:15:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:28:29.481 08:15:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:28:29.481 08:15:20 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:29.481 08:15:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:28:29.481 { 00:28:29.481 "uuid": "215cc797-ca44-458f-a521-bf6a0e50f522", 00:28:29.481 "name": "lvs_0", 00:28:29.481 "base_bdev": "Nvme0n1", 00:28:29.481 "total_data_clusters": 238234, 00:28:29.481 "free_clusters": 233114, 00:28:29.481 "block_size": 512, 00:28:29.481 "cluster_size": 4194304 00:28:29.481 }, 00:28:29.481 { 00:28:29.481 "uuid": "d0f93087-0ab7-4795-9940-88362c8d01ae", 00:28:29.481 "name": "lvs_n_0", 00:28:29.481 "base_bdev": "bd8b0dd2-f3aa-469d-8f8d-bc0e18094ff2", 00:28:29.481 "total_data_clusters": 5114, 00:28:29.481 "free_clusters": 5114, 00:28:29.481 "block_size": 512, 00:28:29.481 "cluster_size": 4194304 00:28:29.481 } 00:28:29.481 ]' 00:28:29.481 08:15:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="d0f93087-0ab7-4795-9940-88362c8d01ae") .free_clusters' 00:28:29.739 08:15:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=5114 00:28:29.739 08:15:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="d0f93087-0ab7-4795-9940-88362c8d01ae") .cluster_size' 00:28:29.739 08:15:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:28:29.739 08:15:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=20456 00:28:29.739 08:15:21 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 20456 00:28:29.739 20456 00:28:29.739 08:15:21 nvmf_tcp.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:28:29.739 08:15:21 nvmf_tcp.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u d0f93087-0ab7-4795-9940-88362c8d01ae lbd_nest_0 20456 00:28:29.996 08:15:21 nvmf_tcp.nvmf_perf -- 
host/perf.sh@88 -- # lb_nested_guid=829c5587-26b5-4e87-bc7a-b73936735bee 00:28:29.996 08:15:21 nvmf_tcp.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:30.254 08:15:21 nvmf_tcp.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:28:30.254 08:15:21 nvmf_tcp.nvmf_perf -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 829c5587-26b5-4e87-bc7a-b73936735bee 00:28:30.511 08:15:22 nvmf_tcp.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:30.770 08:15:22 nvmf_tcp.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:28:30.770 08:15:22 nvmf_tcp.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:28:30.770 08:15:22 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:28:30.770 08:15:22 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:30.770 08:15:22 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:30.770 EAL: No free 2048 kB hugepages reported on node 1 00:28:42.973 Initializing NVMe Controllers 00:28:42.973 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:42.973 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:42.973 Initialization complete. Launching workers. 00:28:42.973 ======================================================== 00:28:42.973 Latency(us) 00:28:42.973 Device Information : IOPS MiB/s Average min max 00:28:42.973 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 42.40 0.02 23656.61 211.86 44781.47 00:28:42.973 ======================================================== 00:28:42.973 Total : 42.40 0.02 23656.61 211.86 44781.47 00:28:42.973 00:28:42.973 08:15:32 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:42.973 08:15:32 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:42.973 EAL: No free 2048 kB hugepages reported on node 1 00:28:52.956 Initializing NVMe Controllers 00:28:52.956 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:52.956 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:52.956 Initialization complete. Launching workers. 
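The 952936 and 20456 figures above are straight cluster accounting, free_mb = free_clusters * cluster_size / 1 MiB:

echo $(( 238234 * 4194304 / 1048576 ))   # 952936 MiB free in lvs_0, then capped to 20480 for lbd_0
echo $(( 5114 * 4194304 / 1048576 ))     # 20456 MiB free in lvs_n_0 -> size of lbd_nest_0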
00:28:52.956 ======================================================== 00:28:52.956 Latency(us) 00:28:52.956 Device Information : IOPS MiB/s Average min max 00:28:52.956 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 71.40 8.92 14023.89 5062.80 47900.76 00:28:52.956 ======================================================== 00:28:52.956 Total : 71.40 8.92 14023.89 5062.80 47900.76 00:28:52.956 00:28:52.956 08:15:42 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:28:52.956 08:15:42 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:52.956 08:15:42 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:52.956 EAL: No free 2048 kB hugepages reported on node 1 00:29:02.925 Initializing NVMe Controllers 00:29:02.925 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:02.925 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:02.925 Initialization complete. Launching workers. 00:29:02.925 ======================================================== 00:29:02.925 Latency(us) 00:29:02.925 Device Information : IOPS MiB/s Average min max 00:29:02.925 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7107.81 3.47 4502.88 305.28 12059.74 00:29:02.925 ======================================================== 00:29:02.925 Total : 7107.81 3.47 4502.88 305.28 12059.74 00:29:02.925 00:29:02.925 08:15:53 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:02.925 08:15:53 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:02.925 EAL: No free 2048 kB hugepages reported on node 1 00:29:12.890 Initializing NVMe Controllers 00:29:12.890 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:12.890 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:12.890 Initialization complete. Launching workers. 00:29:12.890 ======================================================== 00:29:12.890 Latency(us) 00:29:12.890 Device Information : IOPS MiB/s Average min max 00:29:12.890 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1919.80 239.98 16683.15 1793.66 36539.74 00:29:12.890 ======================================================== 00:29:12.890 Total : 1919.80 239.98 16683.15 1793.66 36539.74 00:29:12.890 00:29:12.890 08:16:03 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:29:12.890 08:16:03 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:12.890 08:16:03 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:12.890 EAL: No free 2048 kB hugepages reported on node 1 00:29:22.851 Initializing NVMe Controllers 00:29:22.851 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:22.851 Controller IO queue size 128, less than required. 00:29:22.851 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
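The runs in this sweep come from two nested loops over queue depth and I/O size, as host/perf.sh defines them:

qd_depth=("1" "32" "128")
io_size=("512" "131072")
for qd in "${qd_depth[@]}"; do
    for o in "${io_size[@]}"; do
        ./build/bin/spdk_nvme_perf -q "$qd" -o "$o" -w randrw -M 50 -t 10 \
            -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
    done
done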
00:29:22.851 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:22.851 Initialization complete. Launching workers. 00:29:22.851 ======================================================== 00:29:22.851 Latency(us) 00:29:22.851 Device Information : IOPS MiB/s Average min max 00:29:22.851 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11879.93 5.80 10776.86 1795.30 54207.97 00:29:22.851 ======================================================== 00:29:22.851 Total : 11879.93 5.80 10776.86 1795.30 54207.97 00:29:22.851 00:29:22.851 08:16:14 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:22.851 08:16:14 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:22.851 EAL: No free 2048 kB hugepages reported on node 1 00:29:32.836 Initializing NVMe Controllers 00:29:32.836 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:32.836 Controller IO queue size 128, less than required. 00:29:32.836 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:32.836 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:32.836 Initialization complete. Launching workers. 00:29:32.836 ======================================================== 00:29:32.836 Latency(us) 00:29:32.836 Device Information : IOPS MiB/s Average min max 00:29:32.836 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1208.78 151.10 106847.47 24368.42 214769.13 00:29:32.836 ======================================================== 00:29:32.836 Total : 1208.78 151.10 106847.47 24368.42 214769.13 00:29:32.836 00:29:33.093 08:16:24 nvmf_tcp.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:33.350 08:16:24 nvmf_tcp.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 829c5587-26b5-4e87-bc7a-b73936735bee 00:29:33.916 08:16:25 nvmf_tcp.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:29:34.173 08:16:25 nvmf_tcp.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete bd8b0dd2-f3aa-469d-8f8d-bc0e18094ff2 00:29:34.739 08:16:26 nvmf_tcp.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:29:34.739 08:16:26 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:29:34.739 08:16:26 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:29:34.739 08:16:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:34.739 08:16:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:29:34.739 08:16:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:34.739 08:16:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:29:34.739 08:16:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:34.739 08:16:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:34.739 rmmod nvme_tcp 00:29:34.739 rmmod nvme_fabrics 00:29:34.739 rmmod nvme_keyring 00:29:34.997 08:16:26 
nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:34.997 08:16:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:29:34.997 08:16:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:29:34.997 08:16:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 2053460 ']' 00:29:34.997 08:16:26 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 2053460 00:29:34.997 08:16:26 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 2053460 ']' 00:29:34.997 08:16:26 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 2053460 00:29:34.997 08:16:26 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:29:34.997 08:16:26 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:34.997 08:16:26 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2053460 00:29:34.997 08:16:26 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:34.997 08:16:26 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:34.997 08:16:26 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2053460' 00:29:34.997 killing process with pid 2053460 00:29:34.997 08:16:26 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # kill 2053460 00:29:34.997 08:16:26 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@972 -- # wait 2053460 00:29:36.897 08:16:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:36.897 08:16:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:36.897 08:16:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:36.897 08:16:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:36.897 08:16:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:36.897 08:16:28 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:36.897 08:16:28 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:36.897 08:16:28 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:38.802 08:16:30 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:38.802 00:29:38.802 real 1m31.429s 00:29:38.802 user 5m38.280s 00:29:38.802 sys 0m16.016s 00:29:38.802 08:16:30 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:38.802 08:16:30 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:29:38.802 ************************************ 00:29:38.802 END TEST nvmf_perf 00:29:38.802 ************************************ 00:29:38.802 08:16:30 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:29:38.802 08:16:30 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:29:38.802 08:16:30 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:29:38.802 08:16:30 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:38.802 08:16:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:38.802 ************************************ 00:29:38.802 START TEST nvmf_fio_host 00:29:38.802 ************************************ 00:29:38.802 08:16:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:29:38.802 * Looking for test 
storage... 00:29:38.802 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:38.802 08:16:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:38.802 08:16:30 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:38.802 08:16:30 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:38.802 08:16:30 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:38.803 08:16:30 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:38.803 08:16:30 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:38.803 08:16:30 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:38.803 08:16:30 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:29:38.803 08:16:30 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:38.803 08:16:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:38.803 08:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:29:38.803 08:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:38.803 08:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:38.803 08:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:29:38.803 08:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:38.803 08:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:38.803 08:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:38.803 08:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:38.803 08:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:38.803 08:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:38.803 08:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:38.803 08:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:38.803 08:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:38.803 08:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:38.803 08:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:38.803 08:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:38.803 08:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:38.803 08:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:38.803 08:16:30 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:38.803 08:16:30 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:38.803 08:16:30 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:38.803 08:16:30 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:38.803 08:16:30 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:38.803 08:16:30 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:38.803 08:16:30 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:29:38.803 08:16:30 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:38.803 08:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:29:38.803 08:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:38.803 08:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:38.803 08:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:38.803 08:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:38.803 08:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:38.803 08:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:38.803 08:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:38.803 08:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:38.803 08:16:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:38.803 08:16:30 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:29:38.803 08:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:38.803 08:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:38.803 08:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:38.803 08:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:38.803 08:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:38.803 08:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:38.803 08:16:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:38.803 08:16:30 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:38.803 08:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:38.803 08:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:38.803 08:16:30 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:29:38.803 08:16:30 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@10 -- # set +x 00:29:40.704 08:16:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:40.704 08:16:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:29:40.704 08:16:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:40.704 08:16:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:40.704 08:16:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:40.704 08:16:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:40.704 08:16:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:40.704 08:16:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:29:40.704 08:16:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:40.704 08:16:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:29:40.704 08:16:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:29:40.704 08:16:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:29:40.704 08:16:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:29:40.704 08:16:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:29:40.704 08:16:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:29:40.704 08:16:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:40.704 08:16:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:40.704 08:16:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:40.704 08:16:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:40.704 08:16:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:40.704 08:16:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:40.704 08:16:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:40.704 08:16:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:40.704 08:16:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:40.704 08:16:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:40.704 08:16:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:40.704 08:16:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:40.704 08:16:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:40.704 08:16:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:40.704 08:16:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:40.704 08:16:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:40.704 08:16:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:40.704 08:16:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:40.704 08:16:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:40.704 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:40.704 08:16:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
00:29:40.704 08:16:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:40.704 08:16:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:40.704 08:16:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:40.704 08:16:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:40.704 08:16:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:40.704 08:16:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:40.704 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:40.704 08:16:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:40.704 08:16:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:40.704 08:16:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:40.704 08:16:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:40.704 08:16:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:40.704 08:16:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:40.704 08:16:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:40.704 08:16:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:40.704 08:16:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:40.704 08:16:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:40.704 08:16:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:40.704 08:16:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:40.704 08:16:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:40.704 08:16:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:40.704 08:16:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:40.704 08:16:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:40.704 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:40.704 08:16:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:40.704 08:16:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:40.704 08:16:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:40.704 08:16:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:40.704 08:16:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:40.704 08:16:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:40.704 08:16:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:40.704 08:16:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:40.704 08:16:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:40.704 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:40.704 08:16:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:40.704 08:16:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:40.704 08:16:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 
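Note: in the gather_supported_nvmf_pci_devs trace above, ports are recognized purely by their PCI vendor/device pair (0x8086 - 0x159b is an Intel E810 port bound to the ice driver), and the backing kernel netdev is then read out of sysfs. A minimal standalone sketch of that lookup, assuming only the standard /sys/bus/pci layout (the pci_bus_cache plumbing in nvmf/common.sh is more involved):

# Sketch: list net devices backing Intel E810 (0x8086:0x159b) PCI functions.
for pci in /sys/bus/pci/devices/*; do
    vendor=$(<"$pci/vendor")                 # e.g. 0x8086
    device=$(<"$pci/device")                 # e.g. 0x159b
    if [[ $vendor == 0x8086 && $device == 0x159b ]]; then
        echo "Found ${pci##*/} ($vendor - $device)"
        ls "$pci/net" 2>/dev/null            # kernel name(s), e.g. cvl_0_0
    fi
done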
00:29:40.704 08:16:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:40.704 08:16:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:40.704 08:16:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:40.704 08:16:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:40.704 08:16:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:40.704 08:16:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:40.704 08:16:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:40.704 08:16:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:40.704 08:16:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:40.704 08:16:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:40.704 08:16:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:40.704 08:16:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:40.704 08:16:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:40.704 08:16:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:40.704 08:16:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:40.704 08:16:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:40.704 08:16:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:40.704 08:16:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:40.704 08:16:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:40.704 08:16:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:40.704 08:16:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:40.704 08:16:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:40.704 08:16:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:40.704 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:40.704 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.186 ms 00:29:40.704 00:29:40.704 --- 10.0.0.2 ping statistics --- 00:29:40.704 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:40.704 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:29:40.704 08:16:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:40.704 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:40.704 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.164 ms 00:29:40.704 00:29:40.704 --- 10.0.0.1 ping statistics --- 00:29:40.704 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:40.704 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:29:40.704 08:16:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:40.704 08:16:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:29:40.705 08:16:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:40.705 08:16:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:40.705 08:16:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:40.705 08:16:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:40.705 08:16:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:40.705 08:16:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:40.705 08:16:32 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:40.705 08:16:32 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:29:40.705 08:16:32 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:29:40.705 08:16:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:40.705 08:16:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:40.705 08:16:32 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=2066137 00:29:40.705 08:16:32 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:40.705 08:16:32 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:40.705 08:16:32 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 2066137 00:29:40.705 08:16:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 2066137 ']' 00:29:40.705 08:16:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:40.705 08:16:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:40.705 08:16:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:40.705 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:40.705 08:16:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:40.705 08:16:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:40.705 [2024-07-13 08:16:32.407218] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:29:40.705 [2024-07-13 08:16:32.407290] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:40.963 EAL: No free 2048 kB hugepages reported on node 1 00:29:40.963 [2024-07-13 08:16:32.470818] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:40.963 [2024-07-13 08:16:32.556178] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
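Note: the nvmf_tcp_init sequence above is what lets a single host play both NVMe-oF roles: port cvl_0_0 (target side, 10.0.0.2) is moved into the cvl_0_0_ns_spdk namespace while cvl_0_1 (initiator side, 10.0.0.1) stays in the root namespace, so traffic between the two addresses crosses the physical back-to-back link rather than loopback. Condensed from the trace:

ip netns add cvl_0_0_ns_spdk                       # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move the target port in
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator stays in root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP port
ping -c 1 10.0.0.2                                 # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator

The target app itself is then launched wrapped in ip netns exec cvl_0_0_ns_spdk via NVMF_TARGET_NS_CMD, which is why the nvmf_tgt start just below carries the netns prefix.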
00:29:40.963 [2024-07-13 08:16:32.556233] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:40.963 [2024-07-13 08:16:32.556261] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:40.963 [2024-07-13 08:16:32.556273] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:40.963 [2024-07-13 08:16:32.556282] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:40.963 [2024-07-13 08:16:32.556338] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:40.963 [2024-07-13 08:16:32.556400] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:40.963 [2024-07-13 08:16:32.556464] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:40.963 [2024-07-13 08:16:32.556467] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:40.963 08:16:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:40.963 08:16:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:29:40.963 08:16:32 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:41.221 [2024-07-13 08:16:32.910246] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:41.221 08:16:32 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:29:41.221 08:16:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:41.221 08:16:32 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.477 08:16:32 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:29:41.477 Malloc1 00:29:41.734 08:16:33 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:41.991 08:16:33 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:29:41.991 08:16:33 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:42.248 [2024-07-13 08:16:33.945555] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:42.248 08:16:33 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:42.506 08:16:34 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:29:42.506 08:16:34 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:42.506 08:16:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 
trsvcid=4420 ns=1' --bs=4096 00:29:42.506 08:16:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:29:42.506 08:16:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:42.506 08:16:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:29:42.506 08:16:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:42.506 08:16:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:29:42.506 08:16:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:29:42.506 08:16:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:42.506 08:16:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:42.506 08:16:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:29:42.506 08:16:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:42.506 08:16:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:42.506 08:16:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:42.506 08:16:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:42.506 08:16:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:42.506 08:16:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:29:42.506 08:16:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:42.764 08:16:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:42.764 08:16:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:42.764 08:16:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:29:42.764 08:16:34 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:42.764 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:29:42.764 fio-3.35 00:29:42.764 Starting 1 thread 00:29:42.764 EAL: No free 2048 kB hugepages reported on node 1 00:29:45.291 00:29:45.291 test: (groupid=0, jobs=1): err= 0: pid=2066492: Sat Jul 13 08:16:36 2024 00:29:45.291 read: IOPS=9037, BW=35.3MiB/s (37.0MB/s)(70.8MiB/2006msec) 00:29:45.291 slat (usec): min=2, max=112, avg= 2.60, stdev= 1.43 00:29:45.291 clat (usec): min=2098, max=14101, avg=7824.37, stdev=588.08 00:29:45.291 lat (usec): min=2118, max=14103, avg=7826.98, stdev=587.99 00:29:45.291 clat percentiles (usec): 00:29:45.291 | 1.00th=[ 6456], 5.00th=[ 6915], 10.00th=[ 7111], 20.00th=[ 7373], 00:29:45.291 | 30.00th=[ 7570], 40.00th=[ 7701], 50.00th=[ 7832], 60.00th=[ 7963], 00:29:45.291 | 70.00th=[ 8094], 80.00th=[ 8291], 90.00th=[ 8455], 95.00th=[ 8717], 00:29:45.291 | 99.00th=[ 9110], 99.50th=[ 9241], 99.90th=[11994], 99.95th=[12911], 00:29:45.291 | 99.99th=[14091] 00:29:45.291 bw ( KiB/s): min=35328, 
max=36736, per=99.91%, avg=36116.00, stdev=583.85, samples=4 00:29:45.291 iops : min= 8832, max= 9184, avg=9029.00, stdev=145.96, samples=4 00:29:45.291 write: IOPS=9056, BW=35.4MiB/s (37.1MB/s)(71.0MiB/2006msec); 0 zone resets 00:29:45.291 slat (usec): min=2, max=103, avg= 2.70, stdev= 1.28 00:29:45.291 clat (usec): min=1647, max=12109, avg=6276.55, stdev=508.81 00:29:45.291 lat (usec): min=1654, max=12111, avg=6279.25, stdev=508.79 00:29:45.291 clat percentiles (usec): 00:29:45.291 | 1.00th=[ 5145], 5.00th=[ 5538], 10.00th=[ 5669], 20.00th=[ 5932], 00:29:45.291 | 30.00th=[ 6063], 40.00th=[ 6194], 50.00th=[ 6259], 60.00th=[ 6390], 00:29:45.291 | 70.00th=[ 6521], 80.00th=[ 6652], 90.00th=[ 6849], 95.00th=[ 7046], 00:29:45.291 | 99.00th=[ 7373], 99.50th=[ 7504], 99.90th=[10028], 99.95th=[11076], 00:29:45.291 | 99.99th=[11994] 00:29:45.291 bw ( KiB/s): min=35968, max=36472, per=99.99%, avg=36222.00, stdev=230.79, samples=4 00:29:45.291 iops : min= 8992, max= 9118, avg=9055.50, stdev=57.70, samples=4 00:29:45.291 lat (msec) : 2=0.02%, 4=0.10%, 10=99.72%, 20=0.15% 00:29:45.291 cpu : usr=60.25%, sys=34.86%, ctx=74, majf=0, minf=32 00:29:45.291 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:29:45.291 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:45.291 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:45.291 issued rwts: total=18129,18167,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:45.291 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:45.291 00:29:45.291 Run status group 0 (all jobs): 00:29:45.291 READ: bw=35.3MiB/s (37.0MB/s), 35.3MiB/s-35.3MiB/s (37.0MB/s-37.0MB/s), io=70.8MiB (74.3MB), run=2006-2006msec 00:29:45.291 WRITE: bw=35.4MiB/s (37.1MB/s), 35.4MiB/s-35.4MiB/s (37.1MB/s-37.1MB/s), io=71.0MiB (74.4MB), run=2006-2006msec 00:29:45.291 08:16:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:29:45.291 08:16:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:29:45.291 08:16:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:29:45.291 08:16:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:45.291 08:16:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:29:45.291 08:16:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:45.291 08:16:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:29:45.291 08:16:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:29:45.291 08:16:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:45.291 08:16:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:45.291 08:16:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:29:45.291 08:16:36 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:45.291 08:16:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:45.291 08:16:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:45.291 08:16:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:45.291 08:16:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:45.291 08:16:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:29:45.291 08:16:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:45.291 08:16:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:45.291 08:16:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:45.291 08:16:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:29:45.291 08:16:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:29:45.291 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:29:45.291 fio-3.35 00:29:45.291 Starting 1 thread 00:29:45.291 EAL: No free 2048 kB hugepages reported on node 1 00:29:47.857 00:29:47.857 test: (groupid=0, jobs=1): err= 0: pid=2066828: Sat Jul 13 08:16:39 2024 00:29:47.857 read: IOPS=8359, BW=131MiB/s (137MB/s)(263MiB/2010msec) 00:29:47.857 slat (nsec): min=2905, max=94110, avg=3728.41, stdev=1638.25 00:29:47.857 clat (usec): min=2184, max=17992, avg=8968.52, stdev=2024.10 00:29:47.857 lat (usec): min=2188, max=17995, avg=8972.25, stdev=2024.14 00:29:47.857 clat percentiles (usec): 00:29:47.857 | 1.00th=[ 4883], 5.00th=[ 5800], 10.00th=[ 6390], 20.00th=[ 7177], 00:29:47.857 | 30.00th=[ 7767], 40.00th=[ 8356], 50.00th=[ 8979], 60.00th=[ 9372], 00:29:47.857 | 70.00th=[ 9896], 80.00th=[10683], 90.00th=[11731], 95.00th=[12387], 00:29:47.857 | 99.00th=[13829], 99.50th=[14615], 99.90th=[15139], 99.95th=[15270], 00:29:47.857 | 99.99th=[16712] 00:29:47.857 bw ( KiB/s): min=61184, max=75328, per=51.11%, avg=68360.00, stdev=8011.44, samples=4 00:29:47.857 iops : min= 3824, max= 4708, avg=4272.50, stdev=500.72, samples=4 00:29:47.857 write: IOPS=4882, BW=76.3MiB/s (80.0MB/s)(140MiB/1837msec); 0 zone resets 00:29:47.857 slat (usec): min=30, max=158, avg=33.88, stdev= 4.97 00:29:47.857 clat (usec): min=6404, max=18875, avg=11258.11, stdev=1959.70 00:29:47.857 lat (usec): min=6436, max=18908, avg=11292.00, stdev=1959.75 00:29:47.857 clat percentiles (usec): 00:29:47.857 | 1.00th=[ 7308], 5.00th=[ 8455], 10.00th=[ 8979], 20.00th=[ 9634], 00:29:47.857 | 30.00th=[10159], 40.00th=[10552], 50.00th=[11076], 60.00th=[11469], 00:29:47.857 | 70.00th=[12125], 80.00th=[12780], 90.00th=[13960], 95.00th=[14877], 00:29:47.857 | 99.00th=[16712], 99.50th=[17171], 99.90th=[18220], 99.95th=[18482], 00:29:47.857 | 99.99th=[19006] 00:29:47.857 bw ( KiB/s): min=62336, max=78336, per=91.11%, avg=71184.00, stdev=8003.56, samples=4 00:29:47.857 iops : min= 3896, max= 4896, avg=4449.00, stdev=500.22, samples=4 00:29:47.857 lat (msec) : 4=0.10%, 10=55.85%, 20=44.06% 00:29:47.857 cpu : usr=72.42%, sys=23.89%, ctx=41, majf=0, minf=54 
00:29:47.857 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:29:47.857 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:47.857 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:47.857 issued rwts: total=16803,8970,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:47.857 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:47.857 00:29:47.857 Run status group 0 (all jobs): 00:29:47.857 READ: bw=131MiB/s (137MB/s), 131MiB/s-131MiB/s (137MB/s-137MB/s), io=263MiB (275MB), run=2010-2010msec 00:29:47.857 WRITE: bw=76.3MiB/s (80.0MB/s), 76.3MiB/s-76.3MiB/s (80.0MB/s-80.0MB/s), io=140MiB (147MB), run=1837-1837msec 00:29:47.857 08:16:39 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:47.857 08:16:39 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:29:47.857 08:16:39 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:29:47.857 08:16:39 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:29:47.857 08:16:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1513 -- # bdfs=() 00:29:47.857 08:16:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1513 -- # local bdfs 00:29:47.857 08:16:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:29:47.857 08:16:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:29:47.857 08:16:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:29:48.115 08:16:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:29:48.115 08:16:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:29:48.115 08:16:39 nvmf_tcp.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 -i 10.0.0.2 00:29:51.392 Nvme0n1 00:29:51.392 08:16:42 nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:29:53.916 08:16:45 nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=32d5a54a-e586-485e-9c22-87725cbd36c5 00:29:53.916 08:16:45 nvmf_tcp.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb 32d5a54a-e586-485e-9c22-87725cbd36c5 00:29:53.916 08:16:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=32d5a54a-e586-485e-9c22-87725cbd36c5 00:29:53.916 08:16:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:29:53.916 08:16:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:29:53.916 08:16:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:29:53.916 08:16:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:54.172 08:16:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:29:54.172 { 00:29:54.172 "uuid": "32d5a54a-e586-485e-9c22-87725cbd36c5", 00:29:54.172 "name": "lvs_0", 00:29:54.172 "base_bdev": "Nvme0n1", 00:29:54.172 "total_data_clusters": 930, 00:29:54.172 "free_clusters": 930, 00:29:54.172 "block_size": 512, 
00:29:54.172 "cluster_size": 1073741824 00:29:54.172 } 00:29:54.172 ]' 00:29:54.172 08:16:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="32d5a54a-e586-485e-9c22-87725cbd36c5") .free_clusters' 00:29:54.172 08:16:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=930 00:29:54.172 08:16:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="32d5a54a-e586-485e-9c22-87725cbd36c5") .cluster_size' 00:29:54.429 08:16:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=1073741824 00:29:54.429 08:16:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=952320 00:29:54.429 08:16:45 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 952320 00:29:54.429 952320 00:29:54.429 08:16:45 nvmf_tcp.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 952320 00:29:54.686 25abec39-a0b9-45df-a347-89a5c0ac9715 00:29:54.686 08:16:46 nvmf_tcp.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:29:54.943 08:16:46 nvmf_tcp.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:29:55.201 08:16:46 nvmf_tcp.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:29:55.459 08:16:47 nvmf_tcp.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:55.459 08:16:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:55.459 08:16:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:29:55.459 08:16:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:55.459 08:16:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:29:55.459 08:16:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:55.459 08:16:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:29:55.459 08:16:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:29:55.459 08:16:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:55.459 08:16:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:55.459 08:16:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:29:55.459 08:16:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:55.459 08:16:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:55.459 08:16:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # 
[[ -n '' ]] 00:29:55.459 08:16:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:55.459 08:16:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:29:55.459 08:16:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:29:55.459 08:16:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:55.459 08:16:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:55.459 08:16:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:55.459 08:16:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:29:55.459 08:16:47 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:29:55.717 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:29:55.717 fio-3.35 00:29:55.717 Starting 1 thread 00:29:55.717 EAL: No free 2048 kB hugepages reported on node 1 00:29:58.243 00:29:58.243 test: (groupid=0, jobs=1): err= 0: pid=2068108: Sat Jul 13 08:16:49 2024 00:29:58.243 read: IOPS=5517, BW=21.6MiB/s (22.6MB/s)(43.3MiB/2007msec) 00:29:58.243 slat (nsec): min=1959, max=161179, avg=2681.79, stdev=2167.61 00:29:58.243 clat (usec): min=1083, max=172100, avg=12814.87, stdev=12062.98 00:29:58.243 lat (usec): min=1086, max=172139, avg=12817.55, stdev=12063.26 00:29:58.243 clat percentiles (msec): 00:29:58.243 | 1.00th=[ 10], 5.00th=[ 11], 10.00th=[ 11], 20.00th=[ 12], 00:29:58.243 | 30.00th=[ 12], 40.00th=[ 12], 50.00th=[ 12], 60.00th=[ 13], 00:29:58.243 | 70.00th=[ 13], 80.00th=[ 13], 90.00th=[ 14], 95.00th=[ 14], 00:29:58.243 | 99.00th=[ 15], 99.50th=[ 157], 99.90th=[ 171], 99.95th=[ 174], 00:29:58.243 | 99.99th=[ 174] 00:29:58.243 bw ( KiB/s): min=15504, max=24248, per=99.66%, avg=21994.00, stdev=4328.11, samples=4 00:29:58.243 iops : min= 3876, max= 6062, avg=5498.50, stdev=1082.03, samples=4 00:29:58.243 write: IOPS=5480, BW=21.4MiB/s (22.4MB/s)(43.0MiB/2007msec); 0 zone resets 00:29:58.243 slat (usec): min=2, max=106, avg= 2.80, stdev= 1.57 00:29:58.243 clat (usec): min=396, max=169743, avg=10292.12, stdev=11353.06 00:29:58.243 lat (usec): min=398, max=169749, avg=10294.92, stdev=11353.30 00:29:58.243 clat percentiles (msec): 00:29:58.243 | 1.00th=[ 8], 5.00th=[ 9], 10.00th=[ 9], 20.00th=[ 9], 00:29:58.243 | 30.00th=[ 10], 40.00th=[ 10], 50.00th=[ 10], 60.00th=[ 10], 00:29:58.243 | 70.00th=[ 10], 80.00th=[ 11], 90.00th=[ 11], 95.00th=[ 11], 00:29:58.243 | 99.00th=[ 12], 99.50th=[ 155], 99.90th=[ 169], 99.95th=[ 169], 00:29:58.243 | 99.99th=[ 169] 00:29:58.243 bw ( KiB/s): min=16424, max=23936, per=99.88%, avg=21898.00, stdev=3653.94, samples=4 00:29:58.243 iops : min= 4106, max= 5984, avg=5474.50, stdev=913.49, samples=4 00:29:58.243 lat (usec) : 500=0.01%, 750=0.01% 00:29:58.243 lat (msec) : 2=0.03%, 4=0.09%, 10=38.49%, 20=60.77%, 50=0.03% 00:29:58.243 lat (msec) : 250=0.58% 00:29:58.243 cpu : usr=55.58%, sys=40.23%, ctx=85, majf=0, minf=32 00:29:58.243 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:29:58.243 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:58.243 
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:29:58.243 issued rwts: total=11073,11000,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:58.243 latency : target=0, window=0, percentile=100.00%, depth=128 00:29:58.243 00:29:58.243 Run status group 0 (all jobs): 00:29:58.243 READ: bw=21.6MiB/s (22.6MB/s), 21.6MiB/s-21.6MiB/s (22.6MB/s-22.6MB/s), io=43.3MiB (45.4MB), run=2007-2007msec 00:29:58.243 WRITE: bw=21.4MiB/s (22.4MB/s), 21.4MiB/s-21.4MiB/s (22.4MB/s-22.4MB/s), io=43.0MiB (45.1MB), run=2007-2007msec 00:29:58.243 08:16:49 nvmf_tcp.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:29:58.243 08:16:49 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:29:59.617 08:16:51 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=0eacc475-54f6-45a2-a2d5-2b6f50172a02 00:29:59.617 08:16:51 nvmf_tcp.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb 0eacc475-54f6-45a2-a2d5-2b6f50172a02 00:29:59.617 08:16:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=0eacc475-54f6-45a2-a2d5-2b6f50172a02 00:29:59.617 08:16:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:29:59.617 08:16:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:29:59.617 08:16:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:29:59.617 08:16:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:59.617 08:16:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:29:59.617 { 00:29:59.617 "uuid": "32d5a54a-e586-485e-9c22-87725cbd36c5", 00:29:59.617 "name": "lvs_0", 00:29:59.617 "base_bdev": "Nvme0n1", 00:29:59.617 "total_data_clusters": 930, 00:29:59.617 "free_clusters": 0, 00:29:59.617 "block_size": 512, 00:29:59.617 "cluster_size": 1073741824 00:29:59.617 }, 00:29:59.617 { 00:29:59.617 "uuid": "0eacc475-54f6-45a2-a2d5-2b6f50172a02", 00:29:59.617 "name": "lvs_n_0", 00:29:59.617 "base_bdev": "25abec39-a0b9-45df-a347-89a5c0ac9715", 00:29:59.617 "total_data_clusters": 237847, 00:29:59.617 "free_clusters": 237847, 00:29:59.617 "block_size": 512, 00:29:59.617 "cluster_size": 4194304 00:29:59.617 } 00:29:59.617 ]' 00:29:59.617 08:16:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="0eacc475-54f6-45a2-a2d5-2b6f50172a02") .free_clusters' 00:29:59.617 08:16:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=237847 00:29:59.617 08:16:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="0eacc475-54f6-45a2-a2d5-2b6f50172a02") .cluster_size' 00:29:59.874 08:16:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=4194304 00:29:59.874 08:16:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=951388 00:29:59.874 08:16:51 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 951388 00:29:59.874 951388 00:29:59.874 08:16:51 nvmf_tcp.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 00:30:00.440 0fa491f6-1634-4250-a80d-239230deba56 00:30:00.440 08:16:52 nvmf_tcp.nvmf_fio_host -- host/fio.sh@67 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:30:00.697 08:16:52 nvmf_tcp.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:30:00.955 08:16:52 nvmf_tcp.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:30:01.213 08:16:52 nvmf_tcp.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:01.213 08:16:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:01.213 08:16:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:01.213 08:16:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:01.213 08:16:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:01.213 08:16:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:01.213 08:16:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:30:01.213 08:16:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:01.213 08:16:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:01.213 08:16:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:01.213 08:16:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:30:01.213 08:16:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:01.213 08:16:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:01.213 08:16:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:01.213 08:16:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:01.213 08:16:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:01.213 08:16:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:01.213 08:16:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:01.213 08:16:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:01.213 08:16:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:01.213 08:16:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:30:01.213 08:16:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 
traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:01.469 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:30:01.469 fio-3.35 00:30:01.469 Starting 1 thread 00:30:01.469 EAL: No free 2048 kB hugepages reported on node 1 00:30:04.003 00:30:04.003 test: (groupid=0, jobs=1): err= 0: pid=2068835: Sat Jul 13 08:16:55 2024 00:30:04.003 read: IOPS=5840, BW=22.8MiB/s (23.9MB/s)(45.8MiB/2009msec) 00:30:04.003 slat (nsec): min=1965, max=141837, avg=2686.27, stdev=1897.04 00:30:04.003 clat (usec): min=4287, max=19298, avg=12124.62, stdev=1039.43 00:30:04.003 lat (usec): min=4292, max=19300, avg=12127.31, stdev=1039.35 00:30:04.003 clat percentiles (usec): 00:30:04.003 | 1.00th=[ 9634], 5.00th=[10552], 10.00th=[10945], 20.00th=[11338], 00:30:04.003 | 30.00th=[11600], 40.00th=[11863], 50.00th=[12125], 60.00th=[12387], 00:30:04.003 | 70.00th=[12649], 80.00th=[12911], 90.00th=[13435], 95.00th=[13698], 00:30:04.003 | 99.00th=[14484], 99.50th=[14746], 99.90th=[17695], 99.95th=[19006], 00:30:04.003 | 99.99th=[19268] 00:30:04.003 bw ( KiB/s): min=22096, max=23904, per=99.86%, avg=23330.00, stdev=830.83, samples=4 00:30:04.003 iops : min= 5524, max= 5976, avg=5832.50, stdev=207.71, samples=4 00:30:04.003 write: IOPS=5827, BW=22.8MiB/s (23.9MB/s)(45.7MiB/2009msec); 0 zone resets 00:30:04.003 slat (usec): min=2, max=127, avg= 2.76, stdev= 1.54 00:30:04.003 clat (usec): min=2060, max=17633, avg=9676.82, stdev=916.47 00:30:04.003 lat (usec): min=2065, max=17636, avg=9679.58, stdev=916.47 00:30:04.003 clat percentiles (usec): 00:30:04.003 | 1.00th=[ 7570], 5.00th=[ 8291], 10.00th=[ 8586], 20.00th=[ 8979], 00:30:04.003 | 30.00th=[ 9241], 40.00th=[ 9503], 50.00th=[ 9634], 60.00th=[ 9896], 00:30:04.003 | 70.00th=[10159], 80.00th=[10290], 90.00th=[10683], 95.00th=[11076], 00:30:04.003 | 99.00th=[11600], 99.50th=[11994], 99.90th=[16188], 99.95th=[16450], 00:30:04.003 | 99.99th=[17433] 00:30:04.003 bw ( KiB/s): min=23120, max=23416, per=99.95%, avg=23298.00, stdev=128.40, samples=4 00:30:04.003 iops : min= 5780, max= 5854, avg=5824.50, stdev=32.10, samples=4 00:30:04.003 lat (msec) : 4=0.04%, 10=33.34%, 20=66.62% 00:30:04.003 cpu : usr=57.97%, sys=38.20%, ctx=85, majf=0, minf=32 00:30:04.003 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:30:04.003 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:04.003 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:04.003 issued rwts: total=11734,11707,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:04.003 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:04.003 00:30:04.003 Run status group 0 (all jobs): 00:30:04.003 READ: bw=22.8MiB/s (23.9MB/s), 22.8MiB/s-22.8MiB/s (23.9MB/s-23.9MB/s), io=45.8MiB (48.1MB), run=2009-2009msec 00:30:04.003 WRITE: bw=22.8MiB/s (23.9MB/s), 22.8MiB/s-22.8MiB/s (23.9MB/s-23.9MB/s), io=45.7MiB (48.0MB), run=2009-2009msec 00:30:04.003 08:16:55 nvmf_tcp.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:30:04.003 08:16:55 nvmf_tcp.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:30:04.003 08:16:55 nvmf_tcp.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0 00:30:08.184 08:16:59 nvmf_tcp.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 
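Note: the get_lvs_free_mb values seen earlier follow directly from the bdev_lvol_get_lvstores JSON: free space in MiB is free_clusters x cluster_size / 1048576, so lvs_0 (930 clusters of 1 GiB) yields 952320 and the nested lvs_n_0 (237847 clusters of 4 MiB) yields 951388 — exactly the sizes passed to bdev_lvol_create. A one-line check of that math:

echo $(( 930 * 1073741824 / 1048576 ))       # lvs_0:   952320
echo $(( 237847 * 4194304 / 1048576 ))       # lvs_n_0: 951388

The teardown in progress here then unwinds the same stack in reverse — nested lvol, nested lvstore, base lvol, base lvstore, and finally the Nvme0 controller — as the next trace lines show.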
00:30:08.184 08:16:59 nvmf_tcp.nvmf_fio_host -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:30:11.460 08:17:02 nvmf_tcp.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:30:11.460 08:17:02 nvmf_tcp.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:30:13.355 08:17:04 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:30:13.355 08:17:04 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:30:13.355 08:17:04 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:30:13.355 08:17:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:13.355 08:17:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:30:13.355 08:17:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:13.355 08:17:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:30:13.355 08:17:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:13.355 08:17:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:13.355 rmmod nvme_tcp 00:30:13.355 rmmod nvme_fabrics 00:30:13.355 rmmod nvme_keyring 00:30:13.355 08:17:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:13.355 08:17:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:30:13.355 08:17:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:30:13.355 08:17:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 2066137 ']' 00:30:13.355 08:17:04 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 2066137 00:30:13.355 08:17:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 2066137 ']' 00:30:13.355 08:17:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 2066137 00:30:13.355 08:17:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:30:13.355 08:17:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:13.355 08:17:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2066137 00:30:13.355 08:17:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:13.355 08:17:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:13.355 08:17:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2066137' 00:30:13.355 killing process with pid 2066137 00:30:13.355 08:17:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 2066137 00:30:13.355 08:17:04 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 2066137 00:30:13.355 08:17:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:13.355 08:17:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:13.355 08:17:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:13.355 08:17:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:13.355 08:17:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:13.355 08:17:05 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:13.355 08:17:05 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:13.355 08:17:05 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:15.883 08:17:07 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:15.883 00:30:15.883 real 0m36.889s 00:30:15.883 user 2m21.122s 00:30:15.883 sys 0m7.079s 00:30:15.883 08:17:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:15.883 08:17:07 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:15.883 ************************************ 00:30:15.883 END TEST nvmf_fio_host 00:30:15.883 ************************************ 00:30:15.883 08:17:07 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:30:15.883 08:17:07 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:30:15.883 08:17:07 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:30:15.883 08:17:07 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:15.883 08:17:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:15.883 ************************************ 00:30:15.883 START TEST nvmf_failover 00:30:15.883 ************************************ 00:30:15.883 08:17:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:30:15.883 * Looking for test storage... 00:30:15.883 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:15.883 08:17:07 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:15.883 08:17:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:30:15.883 08:17:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:15.883 08:17:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:15.883 08:17:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:15.883 08:17:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:15.883 08:17:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:15.883 08:17:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:15.883 08:17:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:15.883 08:17:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:15.883 08:17:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:15.883 08:17:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:15.883 08:17:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:15.883 08:17:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:15.883 08:17:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:15.883 08:17:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:15.883 08:17:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:15.883 08:17:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:15.883 08:17:07 nvmf_tcp.nvmf_failover 
-- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:15.883 08:17:07 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:15.883 08:17:07 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:15.883 08:17:07 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:15.883 08:17:07 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:15.883 08:17:07 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:15.883 08:17:07 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:15.883 08:17:07 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:30:15.883 08:17:07 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:15.883 08:17:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:30:15.883 08:17:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:15.883 08:17:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:15.883 08:17:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:15.883 08:17:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:15.883 08:17:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:15.883 08:17:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' 
-n '' ']' 00:30:15.883 08:17:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:15.883 08:17:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:15.883 08:17:07 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:15.883 08:17:07 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:15.883 08:17:07 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:15.883 08:17:07 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:15.883 08:17:07 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:30:15.883 08:17:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:15.883 08:17:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:15.883 08:17:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:15.883 08:17:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:15.883 08:17:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:15.883 08:17:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:15.883 08:17:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:15.883 08:17:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:15.883 08:17:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:15.883 08:17:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:15.883 08:17:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:30:15.883 08:17:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:17.788 08:17:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:17.788 08:17:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:30:17.788 08:17:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:17.788 08:17:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:17.788 08:17:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:17.788 08:17:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:17.788 08:17:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:17.788 08:17:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:30:17.788 08:17:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:17.788 08:17:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:30:17.788 08:17:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:30:17.788 08:17:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:30:17.788 08:17:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:30:17.788 08:17:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:30:17.788 08:17:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:30:17.788 08:17:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:17.788 08:17:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:17.788 08:17:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
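Note: the failover test re-runs the same nvmftestinit, so the e810/x722/mlx arrays are rebuilt here exactly as in the fio trace. They amount to a table of known NIC device IDs (the mlx entries follow just below); roughly, and leaving the pci_bus_cache lookup details aside:

# Sketch: ID -> family map implied by the appends in nvmf/common.sh.
declare -A nic_family=(
    [0x8086:0x1592]=e810 [0x8086:0x159b]=e810    # Intel E810 (ice)
    [0x8086:0x37d2]=x722                         # Intel X722
    [0x15b3:0x1017]=mlx  [0x15b3:0x1019]=mlx     # Mellanox ConnectX-5
    [0x15b3:0x1013]=mlx  [0x15b3:0x1015]=mlx     # Mellanox ConnectX-4
    # plus 0xa2dc, 0x1021, 0xa2d6, 0x101d, also mlx
)

Only the family matters to the later branches, e.g. the 0x1017/0x1019 comparisons and the e810-specific pci_devs reset visible in this same enumeration.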
00:30:17.788 08:17:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:17.788 08:17:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:17.788 08:17:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:17.788 08:17:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:17.788 08:17:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:17.788 08:17:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:17.788 08:17:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:17.788 08:17:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:17.788 08:17:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:17.788 08:17:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:17.788 08:17:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:17.788 08:17:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:17.788 08:17:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:17.788 08:17:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:17.788 08:17:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:17.788 08:17:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:17.788 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:17.788 08:17:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:17.788 08:17:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:17.788 08:17:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:17.788 08:17:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:17.788 08:17:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:17.788 08:17:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:17.788 08:17:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:17.788 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:17.788 08:17:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:17.788 08:17:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:17.788 08:17:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:17.788 08:17:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:17.788 08:17:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:17.788 08:17:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:17.788 08:17:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:17.788 08:17:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:17.788 08:17:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:17.788 08:17:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:17.788 08:17:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:17.788 
08:17:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:17.788 08:17:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:17.788 08:17:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:17.788 08:17:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:17.788 08:17:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:17.788 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:17.788 08:17:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:17.788 08:17:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:17.788 08:17:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:17.788 08:17:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:17.788 08:17:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:17.788 08:17:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:17.788 08:17:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:17.788 08:17:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:17.788 08:17:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:17.788 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:17.788 08:17:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:17.788 08:17:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:17.788 08:17:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:30:17.788 08:17:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:17.788 08:17:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:17.788 08:17:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:17.788 08:17:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:17.788 08:17:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:17.788 08:17:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:17.788 08:17:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:17.788 08:17:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:17.788 08:17:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:17.788 08:17:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:17.788 08:17:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:17.788 08:17:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:17.788 08:17:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:17.788 08:17:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:17.788 08:17:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:17.788 08:17:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:17.788 08:17:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 
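
nvmf_tcp_init splits the two E810 ports between network namespaces: the target port (cvl_0_0) moves into a fresh netns while the initiator port (cvl_0_1) stays in the root namespace, so NVMe/TCP traffic really crosses between the two interfaces. The sequence begun here finishes in the log just below; consolidated, it amounts to this sketch (interface names, addresses, and port taken from this run; run as root):

# Consolidated netns split performed by nvmf/common.sh in this log.
ip netns add cvl_0_0_ns_spdk                  # fresh namespace for the target side
ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator address, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP in
ping -c 1 10.0.0.2                            # root ns -> target ns sanity check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The nvmf_tgt app is then launched through "ip netns exec cvl_0_0_ns_spdk", which is why NVMF_APP is prefixed with NVMF_TARGET_NS_CMD below.
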
00:30:17.788 08:17:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:17.788 08:17:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:17.788 08:17:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:17.788 08:17:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:17.788 08:17:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:17.788 08:17:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:17.788 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:17.788 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.119 ms 00:30:17.788 00:30:17.788 --- 10.0.0.2 ping statistics --- 00:30:17.788 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:17.788 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:30:17.788 08:17:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:17.788 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:17.789 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:30:17.789 00:30:17.789 --- 10.0.0.1 ping statistics --- 00:30:17.789 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:17.789 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:30:17.789 08:17:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:17.789 08:17:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:30:17.789 08:17:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:17.789 08:17:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:17.789 08:17:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:17.789 08:17:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:17.789 08:17:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:17.789 08:17:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:17.789 08:17:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:17.789 08:17:09 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:30:17.789 08:17:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:17.789 08:17:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:17.789 08:17:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:17.789 08:17:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=2072145 00:30:17.789 08:17:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:30:17.789 08:17:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 2072145 00:30:17.789 08:17:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 2072145 ']' 00:30:17.789 08:17:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:17.789 08:17:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:17.789 08:17:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock...' 00:30:17.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:17.789 08:17:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:17.789 08:17:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:17.789 [2024-07-13 08:17:09.341691] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:30:17.789 [2024-07-13 08:17:09.341786] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:17.789 EAL: No free 2048 kB hugepages reported on node 1 00:30:17.789 [2024-07-13 08:17:09.409261] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:17.789 [2024-07-13 08:17:09.502573] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:17.789 [2024-07-13 08:17:09.502620] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:17.789 [2024-07-13 08:17:09.502636] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:17.789 [2024-07-13 08:17:09.502650] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:17.789 [2024-07-13 08:17:09.502663] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:17.789 [2024-07-13 08:17:09.502758] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:17.789 [2024-07-13 08:17:09.502890] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:30:17.789 [2024-07-13 08:17:09.502892] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:18.045 08:17:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:18.045 08:17:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:30:18.045 08:17:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:18.045 08:17:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:18.045 08:17:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:18.045 08:17:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:18.045 08:17:09 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:18.302 [2024-07-13 08:17:09.850144] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:18.302 08:17:09 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:30:18.558 Malloc0 00:30:18.558 08:17:10 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:18.815 08:17:10 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:19.072 08:17:10 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:19.329 [2024-07-13 08:17:10.889563] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:19.329 08:17:10 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:19.587 [2024-07-13 08:17:11.138350] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:19.587 08:17:11 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:30:19.845 [2024-07-13 08:17:11.383313] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:30:19.845 08:17:11 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=2072371 00:30:19.845 08:17:11 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:30:19.845 08:17:11 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:19.845 08:17:11 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 2072371 /var/tmp/bdevperf.sock 00:30:19.845 08:17:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 2072371 ']' 00:30:19.845 08:17:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:19.845 08:17:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:19.845 08:17:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:19.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
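
At this point the target is fully provisioned over JSON-RPC: a TCP transport, a 64 MiB malloc bdev with 512-byte blocks, subsystem cnode1 exposing it as a namespace, and three listeners so the host has spare ports to fail over to. Condensed from the rpc.py calls above (rpc_py as defined at the top of host/failover.sh):

# The target-side provisioning just performed, in one place.
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc_py nvmf_create_transport -t tcp -o -u 8192
$rpc_py bdev_malloc_create 64 512 -b Malloc0            # 64 MiB, 512 B blocks
$rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
for port in 4420 4421 4422; do                          # three candidate paths
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s "$port"
done

bdevperf, started with its own RPC socket (/var/tmp/bdevperf.sock), plays the initiator from here on.
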
00:30:19.845 08:17:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:19.845 08:17:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:20.106 08:17:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:20.106 08:17:11 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:30:20.106 08:17:11 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:20.690 NVMe0n1 00:30:20.690 08:17:12 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:20.948 00 00:30:20.948 08:17:12 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=2072519 00:30:20.948 08:17:12 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:20.948 08:17:12 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:30:21.885 08:17:13 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:22.143 [2024-07-13 08:17:13.860108] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127c270 is same with the state(5) to be set [... the same recv-state error for tqpair=0x127c270 repeats roughly thirty more times (timestamps 08:17:13.860108 through .860601) while the connection on port 4420 is torn down ...] 00:30:22.402 08:17:13 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:30:25.695 08:17:16 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:25.695 00 00:30:25.695 08:17:17 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:25.954 08:17:17 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:30:29.242 08:17:20 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:29.242 [2024-07-13 08:17:20.844478] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:29.242 08:17:20 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:30:30.178 08:17:21 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:30:30.436 08:17:22 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 2072519 00:30:37.006 0 00:30:37.006 08:17:27 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 2072371 00:30:37.006 08:17:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 2072371 ']' 00:30:37.006 08:17:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 2072371 00:30:37.006 08:17:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:30:37.006 08:17:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:37.006 08:17:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2072371 00:30:37.006 08:17:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:37.006 08:17:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:37.006 08:17:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2072371' 00:30:37.006 killing process with pid 2072371 00:30:37.006 08:17:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 2072371 00:30:37.006 08:17:27 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 2072371 00:30:37.006 08:17:28 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:37.006 [2024-07-13 08:17:11.445946] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:30:37.006 [2024-07-13 08:17:11.446026] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2072371 ] 00:30:37.006 EAL: No free 2048 kB hugepages reported on node 1 00:30:37.006 [2024-07-13 08:17:11.507381] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:37.006 [2024-07-13 08:17:11.598643] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:37.006 Running I/O for 15 seconds... 
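
The aborted-I/O dump that follows in try.txt is the direct fallout of the listener juggling above. For orientation, the whole exercise against the live bdevperf run (128 outstanding 4 KiB verify I/Os for 15 s) condenses to this outline, every call taken from the log:

# The failover choreography host/failover.sh just ran, in outline.
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1
$rpc_py nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.2 -s 4420  # drop the active path
sleep 3                                                                  # host fails over to 4421
$rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n $nqn                  # register a third path
$rpc_py nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.2 -s 4421  # drop the second path
sleep 3                                                                  # host moves to 4422
$rpc_py nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4420     # bring 4420 back
sleep 1
$rpc_py nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.2 -s 4422  # force failback to 4420

The run passes when perform_tests exits 0 despite the path changes, which is what the "0" after "wait 2072519" above records.
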
00:30:37.006 [2024-07-13 08:17:13.861458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:78976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:37.006 [2024-07-13 08:17:13.861501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [... the same command/completion pair repeats for hundreds of in-flight WRITE and READ commands on qid:1 (lba 78576 through 79592), each completed ABORTED - SQ DELETION (00/08); the 08:17:13.86xxxx timestamps match the removal of the 4420 listener above, so this is the queued I/O being aborted as that qpair is deleted; dump continues ...]
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.009 [2024-07-13 08:17:13.864962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.009 [2024-07-13 08:17:13.864985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:78848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.009 [2024-07-13 08:17:13.865000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.009 [2024-07-13 08:17:13.865016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:78856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.009 [2024-07-13 08:17:13.865029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.009 [2024-07-13 08:17:13.865045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:78864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.009 [2024-07-13 08:17:13.865058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.009 [2024-07-13 08:17:13.865074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:78872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.009 [2024-07-13 08:17:13.865087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.009 [2024-07-13 08:17:13.865103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:78880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.009 [2024-07-13 08:17:13.865116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.009 [2024-07-13 08:17:13.865132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:78888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.009 [2024-07-13 08:17:13.865145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.009 [2024-07-13 08:17:13.865161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:78896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.009 [2024-07-13 08:17:13.865174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.009 [2024-07-13 08:17:13.865189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:78904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.009 [2024-07-13 08:17:13.865204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.009 [2024-07-13 08:17:13.865219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:78912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.009 [2024-07-13 08:17:13.865232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.009 [2024-07-13 08:17:13.865247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:78920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:37.009 [2024-07-13 08:17:13.865261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.009 [2024-07-13 08:17:13.865276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:78928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.009 [2024-07-13 08:17:13.865290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.009 [2024-07-13 08:17:13.865305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:78936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.009 [2024-07-13 08:17:13.865319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.009 [2024-07-13 08:17:13.865334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:78944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.009 [2024-07-13 08:17:13.865352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.009 [2024-07-13 08:17:13.865367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:78952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.009 [2024-07-13 08:17:13.865382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.009 [2024-07-13 08:17:13.865397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:78960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:37.009 [2024-07-13 08:17:13.865411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.009 [2024-07-13 08:17:13.865441] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:37.009 [2024-07-13 08:17:13.865457] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:37.009 [2024-07-13 08:17:13.865469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78968 len:8 PRP1 0x0 PRP2 0x0 00:30:37.009 [2024-07-13 08:17:13.865482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.009 [2024-07-13 08:17:13.865543] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x24cf760 was disconnected and freed. reset controller. 
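The burst above is the qpair teardown path: nvme_qpair_abort_queued_reqs() drains every request still queued on the dying qpair and completes each one manually with ABORTED - SQ DELETION (00/08), i.e. NVMe generic status 08h (Command Aborted due to SQ Deletion). dnr:0 in every completion means the do-not-retry bit is clear, which is why the bdev layer can resubmit these I/Os after the reset. A minimal sketch of making the same retryable/fatal distinction in an application-side completion callback, assuming the public SPDK NVMe driver API (spdk_nvme_cpl_is_error() and the status.dnr field exist; requeue_io() is a hypothetical helper):

/* Sketch only: classify an aborted completion the way the log above implies.
 * spdk_nvme_cpl_is_error() and cpl->status.dnr are public SPDK API;
 * requeue_io() is a hypothetical resubmit helper. */
#include "spdk/nvme.h"

static void
requeue_io(void *io_ctx)
{
	(void)io_ctx; /* hypothetical: resubmit after the controller reset */
}

static void
io_complete_cb(void *io_ctx, const struct spdk_nvme_cpl *cpl)
{
	if (spdk_nvme_cpl_is_error(cpl)) {
		if (!cpl->status.dnr) {
			/* dnr:0 (e.g. ABORTED - SQ DELETION): safe to retry
			 * once the controller has reset or failed over. */
			requeue_io(io_ctx);
			return;
		}
		/* dnr:1 would be a hard failure; surface the error. */
	}
	/* normal completion handling */
}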
00:30:37.009 [2024-07-13 08:17:13.865562] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:30:37.009 [2024-07-13 08:17:13.865597] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:30:37.009 [2024-07-13 08:17:13.865615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:37.009 [2024-07-13 08:17:13.865635] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:30:37.009 [2024-07-13 08:17:13.865649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:37.009 [2024-07-13 08:17:13.865663] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:30:37.009 [2024-07-13 08:17:13.865676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:37.009 [2024-07-13 08:17:13.865690] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:30:37.009 [2024-07-13 08:17:13.865704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:37.009 [2024-07-13 08:17:13.865717] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:37.009 [2024-07-13 08:17:13.865776] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249b830 (9): Bad file descriptor
00:30:37.009 [2024-07-13 08:17:13.869049] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:37.009 [2024-07-13 08:17:13.902943] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
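The failover block reads as: the old qpair to 10.0.0.2:4420 is torn down, bdev_nvme_failover_trid switches the controller's transport ID to 10.0.0.2:4421, the pending admin ASYNC EVENT REQUESTs are aborted, the controller briefly sits in failed state (the stale TCP socket returns Bad file descriptor), and the subsequent reset/reconnect succeeds. Driven from application code rather than the bdev layer, the reset half of this sequence maps onto the public spdk_nvme_ctrlr_reset() call; a hedged sketch, with connect/probe setup omitted:

/* Sketch only: spdk_nvme_ctrlr_reset() is public SPDK API; obtaining 'ctrlr'
 * (via spdk_nvme_connect() or the probe path) is omitted here. */
#include "spdk/nvme.h"

static int
reset_controller(struct spdk_nvme_ctrlr *ctrlr)
{
	/* Disconnects, reconnects and re-initializes the controller; I/O
	 * aborted with dnr:0 during teardown can be resubmitted afterwards. */
	int rc = spdk_nvme_ctrlr_reset(ctrlr);
	if (rc != 0) {
		/* Controller remains in failed state, as in the log above,
		 * until a later reset attempt succeeds. */
	}
	return rc;
}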
00:30:37.009 [2024-07-13 08:17:17.552691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:78128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:30:37.009 [2024-07-13 08:17:17.552762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... matching command/ABORTED - SQ DELETION (00/08) pairs omitted: READ lba 78136-78368, then WRITE lba 78392-79024; completions identical apart from timestamps ...]
00:30:37.013 [2024-07-13 08:17:17.556119] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:37.013 [2024-07-13 08:17:17.556137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79032 len:8 PRP1 0x0 PRP2 0x0
00:30:37.013 [2024-07-13 08:17:17.556150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... matching aborting-queued-i/o + manual-completion sequences omitted for WRITE lba 79040-79120 ...]
00:30:37.013 [2024-07-13 08:17:17.556709] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:37.013 [2024-07-13 08:17:17.556720] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:37.013 [2024-07-13 08:17:17.556731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1
cid:0 nsid:1 lba:79128 len:8 PRP1 0x0 PRP2 0x0 00:30:37.013 [2024-07-13 08:17:17.556743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.013 [2024-07-13 08:17:17.556756] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:37.013 [2024-07-13 08:17:17.556767] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:37.013 [2024-07-13 08:17:17.556778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79136 len:8 PRP1 0x0 PRP2 0x0 00:30:37.013 [2024-07-13 08:17:17.556790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.013 [2024-07-13 08:17:17.556803] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:37.013 [2024-07-13 08:17:17.556814] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:37.013 [2024-07-13 08:17:17.556825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79144 len:8 PRP1 0x0 PRP2 0x0 00:30:37.013 [2024-07-13 08:17:17.556837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.013 [2024-07-13 08:17:17.556850] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:37.013 [2024-07-13 08:17:17.556861] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:37.013 [2024-07-13 08:17:17.556880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78376 len:8 PRP1 0x0 PRP2 0x0 00:30:37.013 [2024-07-13 08:17:17.556893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.013 [2024-07-13 08:17:17.556907] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:37.013 [2024-07-13 08:17:17.556918] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:37.013 [2024-07-13 08:17:17.556929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78384 len:8 PRP1 0x0 PRP2 0x0 00:30:37.013 [2024-07-13 08:17:17.556942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:37.013 [2024-07-13 08:17:17.557010] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x24cb300 was disconnected and freed. reset controller. 
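The (00/08) pair printed in each completion above is the NVMe status code type and status code: SCT 0x00 (generic command status) with SC 0x08, Command Aborted due to SQ Deletion, which is the expected status for I/O still queued when a qpair is torn down during failover. A quick way to tally these aborts from a captured run; this is only a sketch, assuming the bdevperf output was saved to test/nvmf/host/try.txt as host/failover.sh does:

#!/usr/bin/env bash
# Sketch: count commands aborted by SQ deletion in a captured failover run.
# The log path is an assumption taken from the cat at host/failover.sh@94.
log=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
grep -c 'ABORTED - SQ DELETION (00/08)' "$log"   # one hit per aborted command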
00:30:37.013 [2024-07-13 08:17:17.557029] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:30:37.013 [2024-07-13 08:17:17.557063] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:30:37.013 [2024-07-13 08:17:17.557081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:37.013 [2024-07-13 08:17:17.557096] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:30:37.013 [2024-07-13 08:17:17.557113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:37.013 [2024-07-13 08:17:17.557128] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:30:37.013 [2024-07-13 08:17:17.557142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:37.013 [2024-07-13 08:17:17.557156] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:30:37.013 [2024-07-13 08:17:17.557169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:37.013 [2024-07-13 08:17:17.557182] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:37.013 [2024-07-13 08:17:17.557222] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249b830 (9): Bad file descriptor
00:30:37.013 [2024-07-13 08:17:17.560444] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:37.013 [2024-07-13 08:17:17.708959] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:30:37.013 [2024-07-13 08:17:22.106488] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:30:37.014 [2024-07-13 08:17:22.106554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:37.014 [... the same ASYNC EVENT REQUEST abort pair repeats for admin cid:1, cid:2 and cid:3 ...]
00:30:37.014 [2024-07-13 08:17:22.106651] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249b830 is same with the state(5) to be set
00:30:37.014 [2024-07-13 08:17:22.107446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:128288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:30:37.014 [2024-07-13 08:17:22.107470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:37.014 [... the same ABORTED - SQ DELETION (00/08) completion repeats for every queued I/O on qid:1: WRITEs lba 128296 through 128664 (SGL DATA BLOCK) and READs lba 127648 through 128272 (SGL TRANSPORT DATA BLOCK), all in steps of 8 ...]
00:30:37.017 [2024-07-13 08:17:22.111274] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x24cb300 is same with the state(5) to be set
00:30:37.017 [2024-07-13 08:17:22.111291] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:30:37.017 [2024-07-13 08:17:22.111303] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:30:37.017 [2024-07-13 08:17:22.111314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:128280 len:8 PRP1 0x0 PRP2 0x0
00:30:37.017 [2024-07-13 08:17:22.111326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:30:37.017 [2024-07-13 08:17:22.111386] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x24cb300 was disconnected and freed. reset controller.
00:30:37.017 [2024-07-13 08:17:22.111404] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
00:30:37.017 [2024-07-13 08:17:22.111424] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:37.017 [2024-07-13 08:17:22.114683] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:37.017 [2024-07-13 08:17:22.114722] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249b830 (9): Bad file descriptor
00:30:37.017 [2024-07-13 08:17:22.232071] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:30:37.017 
00:30:37.017 Latency(us)
00:30:37.017 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:37.017 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:30:37.017 Verification LBA range: start 0x0 length 0x4000
00:30:37.017 NVMe0n1 : 15.01 8116.75 31.71 775.47 0.00 14367.17 843.47 16117.00
00:30:37.017 ===================================================================================================================
00:30:37.017 Total : 8116.75 31.71 775.47 0.00 14367.17 843.47 16117.00
00:30:37.017 Received shutdown signal, test time was about 15.000000 seconds
00:30:37.017 
00:30:37.017 Latency(us)
00:30:37.017 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:30:37.017 ===================================================================================================================
00:30:37.017 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:30:37.017 08:17:28 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:30:37.017 08:17:28 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3
00:30:37.017 08:17:28 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
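The three trace lines above are the test's core assertion: host/failover.sh expects exactly one "Resetting controller successful" message per failover hop (4420 to 4421, 4421 to 4422, 4422 to 4420), so the grep count over the captured bdevperf output must be 3. A minimal sketch of the same check, assuming the output was saved to try.txt:

#!/usr/bin/env bash
# Sketch of the assertion traced at host/failover.sh@65-67: three failover
# hops, each of which must end in a successful controller reset.
count=$(grep -c 'Resetting controller successful' try.txt)
if (( count != 3 )); then
    echo "expected 3 successful resets, saw $count" >&2
    exit 1
fi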
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:37.017 08:17:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:37.017 08:17:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:37.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:37.017 08:17:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:37.017 08:17:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:37.017 08:17:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:37.017 08:17:28 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:30:37.017 08:17:28 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:37.017 [2024-07-13 08:17:28.521579] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:37.017 08:17:28 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:30:37.275 [2024-07-13 08:17:28.770274] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:30:37.275 08:17:28 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:37.533 NVMe0n1 00:30:37.533 08:17:29 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:37.791 00:30:37.791 08:17:29 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:38.358 00:30:38.358 08:17:29 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:38.358 08:17:29 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:30:38.617 08:17:30 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:38.880 08:17:30 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:30:42.170 08:17:33 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:42.170 08:17:33 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:30:42.170 08:17:33 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=2075010 00:30:42.170 08:17:33 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s 
/var/tmp/bdevperf.sock perform_tests 00:30:42.170 08:17:33 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 2075010 00:30:43.105 0 00:30:43.105 08:17:34 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:43.105 [2024-07-13 08:17:28.047464] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:30:43.105 [2024-07-13 08:17:28.047555] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2074346 ] 00:30:43.105 EAL: No free 2048 kB hugepages reported on node 1 00:30:43.105 [2024-07-13 08:17:28.108265] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:43.105 [2024-07-13 08:17:28.191454] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:43.105 [2024-07-13 08:17:30.331643] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:30:43.105 [2024-07-13 08:17:30.331768] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:43.105 [2024-07-13 08:17:30.331793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.105 [2024-07-13 08:17:30.331814] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:43.105 [2024-07-13 08:17:30.331827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.105 [2024-07-13 08:17:30.331840] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:43.105 [2024-07-13 08:17:30.331878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.105 [2024-07-13 08:17:30.331894] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:43.105 [2024-07-13 08:17:30.331919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:43.105 [2024-07-13 08:17:30.331934] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:43.105 [2024-07-13 08:17:30.331987] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:43.105 [2024-07-13 08:17:30.332024] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x74f830 (9): Bad file descriptor 00:30:43.105 [2024-07-13 08:17:30.464024] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:30:43.105 Running I/O for 1 seconds... 
00:30:43.105 00:30:43.105 Latency(us) 00:30:43.105 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:43.105 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:43.105 Verification LBA range: start 0x0 length 0x4000 00:30:43.105 NVMe0n1 : 1.01 8555.48 33.42 0.00 0.00 14894.77 3082.62 15631.55 00:30:43.105 =================================================================================================================== 00:30:43.105 Total : 8555.48 33.42 0.00 0.00 14894.77 3082.62 15631.55 00:30:43.105 08:17:34 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:43.105 08:17:34 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:30:43.362 08:17:35 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:43.620 08:17:35 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:43.620 08:17:35 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:30:43.878 08:17:35 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:44.136 08:17:35 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:30:47.422 08:17:38 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:47.422 08:17:38 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:30:47.422 08:17:39 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 2074346 00:30:47.422 08:17:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 2074346 ']' 00:30:47.422 08:17:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 2074346 00:30:47.422 08:17:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:30:47.422 08:17:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:47.422 08:17:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2074346 00:30:47.422 08:17:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:47.422 08:17:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:47.422 08:17:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2074346' 00:30:47.422 killing process with pid 2074346 00:30:47.422 08:17:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 2074346 00:30:47.422 08:17:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 2074346 00:30:47.680 08:17:39 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync 00:30:47.680 08:17:39 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:47.939 08:17:39 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:30:47.939 
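Taken together, the host/failover.sh steps driving this stretch of the log reduce to a short rpc.py sequence: register extra listeners on the target, attach the same controller through several ports so bdev_nvme holds failover trids, then detach the active path and let the driver reset onto the next one. A minimal sketch, assuming an SPDK checkout as the working directory, a target already serving nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420, and a bdevperf instance listening on /var/tmp/bdevperf.sock (names and ports taken from the log; error handling omitted):

```bash
# Hedged sketch of the failover sequence seen above; run from an SPDK checkout.
RPC=scripts/rpc.py
SOCK=/var/tmp/bdevperf.sock
NQN=nqn.2016-06.io.spdk:cnode1

# Target side: expose the subsystem on the two extra failover ports.
$RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4421
$RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4422

# Initiator side: attach the same controller through all three ports so
# bdev_nvme records 4421/4422 as failover trids behind NVMe0.
for port in 4420 4421 4422; do
  $RPC -s $SOCK bdev_nvme_attach_controller -b NVMe0 -t tcp \
       -a 10.0.0.2 -s $port -f ipv4 -n $NQN
done
$RPC -s $SOCK bdev_nvme_get_controllers | grep -q NVMe0

# Drop the active path; the aborted queued I/O seen above then triggers
# a reset onto the next trid (4420 -> 4421).
$RPC -s $SOCK bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 \
     -s 4420 -f ipv4 -n $NQN
examples/bdev/bdevperf/bdevperf.py -s $SOCK perform_tests
```

The pass/fail check is then just a `grep -c 'Resetting controller successful'` over the captured bdevperf output, as in the `count=3` comparison earlier in the log.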
08:17:39 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:47.939 08:17:39 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:30:47.939 08:17:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:47.939 08:17:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:30:47.939 08:17:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:47.939 08:17:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:30:47.939 08:17:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:47.939 08:17:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:47.939 rmmod nvme_tcp 00:30:47.939 rmmod nvme_fabrics 00:30:47.939 rmmod nvme_keyring 00:30:48.197 08:17:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:48.197 08:17:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:30:48.197 08:17:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:30:48.197 08:17:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 2072145 ']' 00:30:48.197 08:17:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 2072145 00:30:48.197 08:17:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 2072145 ']' 00:30:48.197 08:17:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 2072145 00:30:48.197 08:17:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:30:48.197 08:17:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:48.197 08:17:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2072145 00:30:48.197 08:17:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:30:48.197 08:17:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:30:48.197 08:17:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2072145' 00:30:48.197 killing process with pid 2072145 00:30:48.197 08:17:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 2072145 00:30:48.197 08:17:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 2072145 00:30:48.456 08:17:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:48.456 08:17:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:48.456 08:17:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:48.456 08:17:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:48.456 08:17:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:48.456 08:17:39 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:48.456 08:17:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:48.456 08:17:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:50.365 08:17:41 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:50.365 00:30:50.365 real 0m34.824s 00:30:50.365 user 2m0.172s 00:30:50.365 sys 0m6.941s 00:30:50.365 08:17:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:50.365 08:17:41 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 
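The teardown running here mirrors a fixed cleanup recipe: delete the subsystem over RPC, unload the NVMe-oF kernel modules, and stop the long-running target. A minimal sketch of that phase, assuming the target pid is held in `$nvmfpid` as in nvmf/common.sh (the in-tree nvmftestfini also retries the modprobe in the `{1..20}` loop shown above and tears down the network namespace):

```bash
# Hedged sketch of the cleanup phase logged above.
scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

modprobe -v -r nvme-tcp        # also unloads nvme_fabrics/nvme_keyring,
                               # per the rmmod lines in the log
modprobe -v -r nvme-fabrics

kill "$nvmfpid"                # nvmf_tgt was started by this shell...
wait "$nvmfpid"                # ...so wait reaps it once the reactor exits
ip -4 addr flush cvl_0_1       # drop the test IP from the initiator port
```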
00:30:50.365 ************************************ 00:30:50.365 END TEST nvmf_failover 00:30:50.365 ************************************ 00:30:50.365 08:17:42 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:30:50.365 08:17:42 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:30:50.365 08:17:42 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:30:50.365 08:17:42 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:50.365 08:17:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:50.365 ************************************ 00:30:50.365 START TEST nvmf_host_discovery 00:30:50.365 ************************************ 00:30:50.365 08:17:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:30:50.365 * Looking for test storage... 00:30:50.365 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:50.365 08:17:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:50.365 08:17:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:30:50.365 08:17:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:50.365 08:17:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:50.365 08:17:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:50.365 08:17:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:50.365 08:17:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:50.365 08:17:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:50.365 08:17:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:50.365 08:17:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:50.365 08:17:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:50.365 08:17:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:50.365 08:17:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:50.365 08:17:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:50.365 08:17:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:50.365 08:17:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:50.365 08:17:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:50.365 08:17:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:50.365 08:17:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:50.365 08:17:42 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:50.365 08:17:42 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:50.365 08:17:42 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 
-- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:50.366 08:17:42 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:50.366 08:17:42 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:50.366 08:17:42 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:50.366 08:17:42 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:30:50.366 08:17:42 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:50.366 08:17:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:30:50.366 08:17:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:50.366 08:17:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:50.366 08:17:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:50.366 08:17:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:50.366 08:17:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:50.366 08:17:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:50.366 08:17:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:50.366 08:17:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:50.366 08:17:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:30:50.366 08:17:42 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:30:50.366 08:17:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:30:50.366 08:17:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:30:50.366 08:17:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:30:50.366 08:17:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:30:50.366 08:17:42 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:30:50.366 08:17:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:50.366 08:17:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:50.366 08:17:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:50.366 08:17:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:50.366 08:17:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:50.366 08:17:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:50.366 08:17:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:50.366 08:17:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:50.366 08:17:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:50.366 08:17:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:50.366 08:17:42 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:30:50.366 08:17:42 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:52.283 08:17:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:52.283 08:17:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:30:52.283 08:17:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:52.283 08:17:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:52.283 08:17:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:52.283 08:17:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:52.283 08:17:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:52.283 08:17:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:30:52.283 08:17:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:52.283 08:17:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:30:52.283 08:17:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:30:52.283 08:17:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:30:52.283 08:17:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:30:52.283 08:17:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:30:52.283 08:17:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:30:52.283 08:17:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:52.283 08:17:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:52.283 08:17:43 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:52.283 08:17:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:52.283 08:17:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:52.283 08:17:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:52.283 08:17:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:52.283 08:17:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:52.283 08:17:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:52.283 08:17:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:52.283 08:17:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:52.283 08:17:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:52.283 08:17:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:52.283 08:17:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:52.283 08:17:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:52.283 08:17:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:52.283 08:17:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:52.283 08:17:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:52.283 08:17:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:52.283 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:52.283 08:17:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:52.283 08:17:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:52.283 08:17:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:52.283 08:17:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:52.283 08:17:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:52.283 08:17:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:52.283 08:17:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:52.283 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:52.283 08:17:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:52.283 08:17:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:52.283 08:17:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:52.283 08:17:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:52.283 08:17:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:52.283 08:17:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:52.283 08:17:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:52.283 08:17:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:52.283 08:17:43 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:52.283 08:17:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:52.283 08:17:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:52.283 08:17:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:52.283 08:17:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:52.283 08:17:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:52.283 08:17:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:52.283 08:17:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:52.283 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:52.283 08:17:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:52.283 08:17:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:52.283 08:17:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:52.283 08:17:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:52.283 08:17:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:52.283 08:17:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:52.283 08:17:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:52.283 08:17:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:52.283 08:17:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:52.283 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:52.283 08:17:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:52.283 08:17:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:52.283 08:17:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:30:52.283 08:17:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:52.283 08:17:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:52.283 08:17:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:52.283 08:17:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:52.283 08:17:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:52.283 08:17:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:52.283 08:17:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:52.283 08:17:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:52.283 08:17:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:52.283 08:17:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:52.283 08:17:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:52.283 08:17:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:52.283 08:17:43 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:52.283 08:17:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:52.283 08:17:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:52.283 08:17:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:52.283 08:17:43 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:52.283 08:17:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:52.283 08:17:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:52.283 08:17:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:52.541 08:17:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:52.541 08:17:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:52.541 08:17:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:52.541 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:52.541 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.196 ms 00:30:52.541 00:30:52.541 --- 10.0.0.2 ping statistics --- 00:30:52.541 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:52.541 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:30:52.541 08:17:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:52.541 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:52.541 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:30:52.541 00:30:52.541 --- 10.0.0.1 ping statistics --- 00:30:52.541 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:52.541 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:30:52.541 08:17:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:52.541 08:17:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:30:52.541 08:17:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:52.541 08:17:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:52.541 08:17:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:52.541 08:17:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:52.541 08:17:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:52.541 08:17:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:52.541 08:17:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:52.541 08:17:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:30:52.541 08:17:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:52.541 08:17:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:52.541 08:17:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:52.541 08:17:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=2077612 00:30:52.541 08:17:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 
2077612 00:30:52.542 08:17:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:30:52.542 08:17:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 2077612 ']' 00:30:52.542 08:17:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:52.542 08:17:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:52.542 08:17:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:52.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:52.542 08:17:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:52.542 08:17:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:52.542 [2024-07-13 08:17:44.141408] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:30:52.542 [2024-07-13 08:17:44.141490] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:52.542 EAL: No free 2048 kB hugepages reported on node 1 00:30:52.542 [2024-07-13 08:17:44.213905] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:52.799 [2024-07-13 08:17:44.303456] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:52.800 [2024-07-13 08:17:44.303517] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:52.800 [2024-07-13 08:17:44.303532] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:52.800 [2024-07-13 08:17:44.303546] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:52.800 [2024-07-13 08:17:44.303565] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
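The namespace plumbing logged just before this target start (nvmf_tcp_init in nvmf/common.sh) is what lets target and initiator share one machine: the first physical port moves into a private netns as the target side, the second stays in the root namespace as the initiator. A minimal sketch of that wiring, assuming the two e810 netdevs are named cvl_0_0 and cvl_0_1 as detected above:

```bash
# Hedged sketch of the target/initiator split used by this test bed.
TGT_NS=cvl_0_0_ns_spdk

ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1   # start clean
ip netns add $TGT_NS
ip link set cvl_0_0 netns $TGT_NS                      # target-side port

ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator IP
ip netns exec $TGT_NS ip addr add 10.0.0.2/24 dev cvl_0_0

ip link set cvl_0_1 up
ip netns exec $TGT_NS ip link set cvl_0_0 up
ip netns exec $TGT_NS ip link set lo up

# Let NVMe/TCP traffic in, then sanity-check both directions.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2 && ip netns exec $TGT_NS ping -c 1 10.0.0.1
```

With this in place, the target app itself is simply launched under `ip netns exec cvl_0_0_ns_spdk`, which is why the nvmf_tgt invocation above carries that prefix.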
00:30:52.800 [2024-07-13 08:17:44.303596] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:52.800 08:17:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:52.800 08:17:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:30:52.800 08:17:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:52.800 08:17:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:52.800 08:17:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:52.800 08:17:44 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:52.800 08:17:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:52.800 08:17:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:52.800 08:17:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:52.800 [2024-07-13 08:17:44.446011] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:52.800 08:17:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:52.800 08:17:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:30:52.800 08:17:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:52.800 08:17:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:52.800 [2024-07-13 08:17:44.454149] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:30:52.800 08:17:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:52.800 08:17:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:30:52.800 08:17:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:52.800 08:17:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:52.800 null0 00:30:52.800 08:17:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:52.800 08:17:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:30:52.800 08:17:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:52.800 08:17:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:52.800 null1 00:30:52.800 08:17:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:52.800 08:17:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:30:52.800 08:17:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:52.800 08:17:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:52.800 08:17:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:52.800 08:17:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=2077748 00:30:52.800 08:17:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:30:52.800 08:17:44 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@46 -- # waitforlisten 2077748 /tmp/host.sock 00:30:52.800 08:17:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 2077748 ']' 00:30:52.800 08:17:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:30:52.800 08:17:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:52.800 08:17:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:30:52.800 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:30:52.800 08:17:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:52.800 08:17:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:52.800 [2024-07-13 08:17:44.524240] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:30:52.800 [2024-07-13 08:17:44.524315] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2077748 ] 00:30:53.059 EAL: No free 2048 kB hugepages reported on node 1 00:30:53.059 [2024-07-13 08:17:44.585825] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:53.059 [2024-07-13 08:17:44.676381] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:53.318 08:17:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:53.318 08:17:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:30:53.318 08:17:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:53.318 08:17:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:30:53.318 08:17:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:53.318 08:17:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:53.318 08:17:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:53.318 08:17:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:30:53.318 08:17:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:53.318 08:17:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:53.318 08:17:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:53.318 08:17:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:30:53.318 08:17:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:30:53.318 08:17:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:53.318 08:17:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:53.318 08:17:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:53.318 08:17:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:53.318 08:17:44 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@59 -- # sort 00:30:53.318 08:17:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:53.318 08:17:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:53.318 08:17:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:30:53.318 08:17:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:30:53.318 08:17:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:53.318 08:17:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:53.318 08:17:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:53.318 08:17:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:53.318 08:17:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:53.318 08:17:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:53.318 08:17:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:53.318 08:17:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:30:53.318 08:17:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:30:53.318 08:17:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:53.318 08:17:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:53.318 08:17:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:53.318 08:17:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:30:53.318 08:17:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:53.318 08:17:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:53.318 08:17:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:53.318 08:17:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:53.318 08:17:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:53.318 08:17:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:53.318 08:17:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:53.318 08:17:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:30:53.318 08:17:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:30:53.318 08:17:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:53.318 08:17:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:53.318 08:17:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:53.318 08:17:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:53.318 08:17:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:53.318 08:17:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:53.318 08:17:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:53.318 08:17:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:30:53.318 08:17:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 
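The repeated rpc_cmd/jq/sort/xargs pipelines in this stretch are the test's two list helpers: one flattens the attached controller names, the other the bdev names, each to a single sorted line that can be string-compared against an expected value. A plausible reconstruction from the statements visible in the log (rpc_cmd wraps scripts/rpc.py; the in-tree versions live in host/discovery.sh):

```bash
# Plausible reconstruction of the polling helpers used in this log.
get_subsystem_names() {
    # "" while nothing is attached; "nvme0" once discovery attaches it
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers \
        | jq -r '.[].name' | sort | xargs
}

get_bdev_list() {
    # "" at this point; "nvme0n1" once the namespace appears as a bdev
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs \
        | jq -r '.[].name' | sort | xargs
}

[[ "$(get_subsystem_names)" == "" ]]   # the empty-string checks above
```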
00:30:53.318 08:17:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:53.318 08:17:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:53.318 08:17:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:53.318 08:17:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:30:53.318 08:17:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:53.318 08:17:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:53.318 08:17:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:53.318 08:17:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:53.318 08:17:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:53.318 08:17:44 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:53.318 08:17:44 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:53.318 08:17:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:30:53.318 08:17:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:30:53.318 08:17:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:53.318 08:17:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:53.318 08:17:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:53.318 08:17:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:53.318 08:17:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:53.318 08:17:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:53.318 08:17:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:53.578 08:17:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:30:53.578 08:17:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:53.578 08:17:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:53.578 08:17:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:53.578 [2024-07-13 08:17:45.067779] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:53.578 08:17:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:53.578 08:17:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:30:53.578 08:17:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:53.578 08:17:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:53.578 08:17:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:53.578 08:17:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:53.578 08:17:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:53.578 08:17:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:53.578 08:17:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:53.578 08:17:45 nvmf_tcp.nvmf_host_discovery -- 
host/discovery.sh@97 -- # [[ '' == '' ]] 00:30:53.578 08:17:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:30:53.578 08:17:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:53.578 08:17:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:53.578 08:17:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:53.578 08:17:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:53.578 08:17:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:53.578 08:17:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:53.578 08:17:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:53.578 08:17:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:30:53.578 08:17:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:30:53.578 08:17:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:30:53.578 08:17:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:30:53.578 08:17:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:30:53.578 08:17:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:30:53.578 08:17:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:53.578 08:17:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:30:53.578 08:17:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:30:53.578 08:17:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:30:53.578 08:17:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:53.578 08:17:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:30:53.578 08:17:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:53.578 08:17:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:53.578 08:17:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:30:53.578 08:17:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:30:53.578 08:17:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:30:53.578 08:17:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:30:53.578 08:17:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:30:53.578 08:17:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:53.578 08:17:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:53.578 08:17:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:53.578 08:17:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:30:53.578 08:17:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:30:53.578 08:17:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:30:53.578 08:17:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:53.578 08:17:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:30:53.578 08:17:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:30:53.578 08:17:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:53.578 08:17:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:53.578 08:17:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:53.578 08:17:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:53.578 08:17:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:53.578 08:17:45 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:53.578 08:17:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:53.578 08:17:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:30:53.578 08:17:45 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:30:54.147 [2024-07-13 08:17:45.854718] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:30:54.147 [2024-07-13 08:17:45.854764] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:30:54.147 [2024-07-13 08:17:45.854793] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:54.406 [2024-07-13 08:17:45.982186] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:30:54.664 [2024-07-13 08:17:46.167125] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:30:54.664 [2024-07-13 08:17:46.167154] 
bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:30:54.664 08:17:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:54.664 08:17:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:30:54.664 08:17:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:30:54.664 08:17:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:54.664 08:17:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:54.664 08:17:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:54.664 08:17:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:54.664 08:17:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:54.664 08:17:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:54.664 08:17:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:54.664 08:17:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:54.664 08:17:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:30:54.664 08:17:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:30:54.664 08:17:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:30:54.664 08:17:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:30:54.664 08:17:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:54.664 08:17:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:30:54.664 08:17:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:30:54.664 08:17:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:54.664 08:17:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:54.664 08:17:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:54.664 08:17:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:54.664 08:17:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:54.665 08:17:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:54.665 08:17:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:54.665 08:17:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:30:54.665 08:17:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:30:54.665 08:17:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:30:54.665 08:17:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:30:54.665 08:17:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:30:54.665 08:17:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 
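The eval/sleep records above come from autotest's generic polling helper; the traced @912-@918 lines (local cond, local max=10, (( max-- )), eval, sleep 1, return 0) are consistent with roughly the following sketch — a reconstruction from this xtrace, not the verbatim common/autotest_common.sh source:

    # Re-evaluate an arbitrary bash condition up to 10 times, 1s apart.
    waitforcondition() {
        local cond=$1
        local max=10
        while (( max-- )); do
            # eval so $(...) substitutions inside the condition re-run each pass
            eval "$cond" && return 0
            sleep 1
        done
        return 1
    }

    # usage, as in this test:
    waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'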
00:30:54.665 08:17:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:30:54.665 08:17:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:30:54.665 08:17:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:30:54.665 08:17:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:54.665 08:17:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:30:54.665 08:17:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:54.665 08:17:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:30:54.665 08:17:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:30:54.665 08:17:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:54.665 08:17:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:30:54.665 08:17:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:30:54.665 08:17:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:30:54.665 08:17:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:30:54.665 08:17:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:30:54.665 08:17:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:30:54.665 08:17:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:30:54.665 08:17:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:54.665 08:17:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:30:54.665 08:17:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:30:54.665 08:17:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:30:54.665 08:17:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:30:54.665 08:17:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:54.665 08:17:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:54.925 08:17:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:54.925 08:17:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:30:54.925 08:17:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:30:54.925 08:17:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:30:54.925 08:17:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:30:54.925 08:17:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:30:54.925 08:17:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:54.925 08:17:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:54.925 08:17:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:54.925 08:17:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:30:54.925 08:17:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:30:54.925 08:17:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:30:54.925 08:17:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:54.925 08:17:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:30:54.925 08:17:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:30:54.925 08:17:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:54.925 08:17:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:54.925 08:17:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:54.925 08:17:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:54.925 08:17:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:54.925 08:17:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:54.925 08:17:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:54.925 08:17:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:54.925 08:17:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:30:54.925 08:17:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:30:54.925 08:17:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:30:54.925 08:17:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:30:54.925 08:17:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:30:54.925 08:17:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:30:54.925 08:17:46 nvmf_tcp.nvmf_host_discovery 
-- common/autotest_common.sh@914 -- # (( max-- )) 00:30:54.925 08:17:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:30:54.925 08:17:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:30:54.925 08:17:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:30:54.925 08:17:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:30:54.925 08:17:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:54.925 08:17:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:54.925 08:17:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:54.925 08:17:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:30:54.925 08:17:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:30:54.925 08:17:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:30:54.925 08:17:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:30:54.925 08:17:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:30:54.925 08:17:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:54.925 08:17:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:54.925 [2024-07-13 08:17:46.540234] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:54.925 [2024-07-13 08:17:46.540473] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:30:54.925 [2024-07-13 08:17:46.540510] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:54.925 08:17:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:54.925 08:17:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:30:54.925 08:17:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:30:54.925 08:17:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:30:54.925 08:17:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:54.925 08:17:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:30:54.925 08:17:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:30:54.925 08:17:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:54.925 08:17:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:54.925 08:17:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:54.925 08:17:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:54.925 08:17:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:54.925 08:17:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:54.925 08:17:46 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:54.925 08:17:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:54.925 08:17:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:30:54.926 08:17:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:30:54.926 08:17:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:30:54.926 08:17:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:30:54.926 08:17:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:54.926 08:17:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:30:54.926 08:17:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:30:54.926 08:17:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:54.926 08:17:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:54.926 08:17:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:54.926 08:17:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:54.926 08:17:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:54.926 08:17:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:54.926 08:17:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:54.926 08:17:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:54.926 08:17:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:30:54.926 08:17:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:30:54.926 08:17:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:30:54.926 08:17:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:30:54.926 08:17:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:54.926 08:17:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:30:54.926 08:17:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:30:54.926 08:17:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:30:54.926 08:17:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:54.926 08:17:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:30:54.926 08:17:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:54.926 08:17:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:30:54.926 08:17:46 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:30:54.926 08:17:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:55.185 [2024-07-13 08:17:46.667336] 
bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:30:55.185 08:17:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:30:55.185 08:17:46 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:30:55.443 [2024-07-13 08:17:46.929649] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:30:55.443 [2024-07-13 08:17:46.929677] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:30:55.443 [2024-07-13 08:17:46.929688] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:30:56.011 08:17:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:56.011 08:17:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:30:56.011 08:17:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:30:56.011 08:17:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:30:56.011 08:17:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:30:56.011 08:17:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:56.011 08:17:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:30:56.011 08:17:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:56.011 08:17:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:30:56.011 08:17:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:56.011 08:17:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:30:56.011 08:17:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:30:56.011 08:17:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:30:56.011 08:17:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:30:56.011 08:17:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:30:56.011 08:17:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:30:56.011 08:17:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:30:56.011 08:17:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:56.011 08:17:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:30:56.011 08:17:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:30:56.011 08:17:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:30:56.011 08:17:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:56.011 08:17:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:30:56.011 08:17:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:56.011 08:17:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:56.270 08:17:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:30:56.270 08:17:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:30:56.270 08:17:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:30:56.270 08:17:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:30:56.270 08:17:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:56.270 08:17:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:56.270 08:17:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:56.270 [2024-07-13 08:17:47.764318] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:30:56.270 [2024-07-13 08:17:47.764352] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:56.270 08:17:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:56.270 08:17:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:30:56.270 [2024-07-13 08:17:47.768342] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:56.270 [2024-07-13 08:17:47.768376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.270 [2024-07-13 08:17:47.768395] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:56.270 08:17:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:30:56.270 [2024-07-13 08:17:47.768411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.270 [2024-07-13 08:17:47.768427] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:56.270 [2024-07-13 08:17:47.768455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.270 [2024-07-13 08:17:47.768470] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:56.270 [2024-07-13 08:17:47.768483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:56.270 [2024-07-13 08:17:47.768496] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219a530 is same with the state(5) to be set 00:30:56.270 08:17:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:30:56.270 08:17:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:56.270 08:17:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == 
'"nvme0"' ']]' 00:30:56.270 08:17:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:30:56.270 08:17:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:56.270 08:17:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:56.270 08:17:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:56.270 08:17:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:56.270 08:17:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:56.270 08:17:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:56.270 [2024-07-13 08:17:47.778359] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x219a530 (9): Bad file descriptor 00:30:56.270 08:17:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:56.270 [2024-07-13 08:17:47.788397] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:56.270 [2024-07-13 08:17:47.788700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.270 [2024-07-13 08:17:47.788732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219a530 with addr=10.0.0.2, port=4420 00:30:56.270 [2024-07-13 08:17:47.788751] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219a530 is same with the state(5) to be set 00:30:56.270 [2024-07-13 08:17:47.788776] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x219a530 (9): Bad file descriptor 00:30:56.270 [2024-07-13 08:17:47.788816] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:56.270 [2024-07-13 08:17:47.788834] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:56.270 [2024-07-13 08:17:47.788850] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:56.270 [2024-07-13 08:17:47.788894] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.270 [2024-07-13 08:17:47.798486] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:56.270 [2024-07-13 08:17:47.798698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.270 [2024-07-13 08:17:47.798728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219a530 with addr=10.0.0.2, port=4420 00:30:56.270 [2024-07-13 08:17:47.798746] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219a530 is same with the state(5) to be set 00:30:56.270 [2024-07-13 08:17:47.798770] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x219a530 (9): Bad file descriptor 00:30:56.270 [2024-07-13 08:17:47.798792] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:56.270 [2024-07-13 08:17:47.798806] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:56.270 [2024-07-13 08:17:47.798821] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
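Each "connect() failed, errno = 111" above is ECONNREFUSED: the 4420 listener was just removed by the target, so every reconnect attempt from the resetting controller is refused until the discovery poller prunes the stale path. A quick way to decode such errno values from a shell, assuming python3 is available:

    python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
    # ECONNREFUSED - Connection refused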
00:30:56.270 [2024-07-13 08:17:47.798841] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.270 08:17:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:30:56.270 08:17:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:30:56.270 08:17:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:30:56.270 08:17:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:30:56.270 08:17:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:30:56.271 08:17:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:56.271 08:17:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:30:56.271 08:17:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:30:56.271 [2024-07-13 08:17:47.808579] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:56.271 08:17:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:56.271 08:17:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:56.271 08:17:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:56.271 08:17:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:56.271 08:17:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:56.271 08:17:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:56.271 [2024-07-13 08:17:47.809665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.271 [2024-07-13 08:17:47.809700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219a530 with addr=10.0.0.2, port=4420 00:30:56.271 [2024-07-13 08:17:47.809720] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219a530 is same with the state(5) to be set 00:30:56.271 [2024-07-13 08:17:47.809746] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x219a530 (9): Bad file descriptor 00:30:56.271 [2024-07-13 08:17:47.809820] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:56.271 [2024-07-13 08:17:47.809848] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:56.271 [2024-07-13 08:17:47.809886] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:56.271 [2024-07-13 08:17:47.809908] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
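The host/discovery.sh@55 and @63 trace lines interleaved with the reset noise are the test's query helpers; sketches reconstructed from the pipelines shown in this log (the trailing xargs just flattens the output to one space-separated line, and the host socket is hard-coded here to match this run):

    # Bdev names seen by the host app on /tmp/host.sock, e.g. "nvme0n1 nvme0n2".
    get_bdev_list() {
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    # trsvcids (ports) of every path of one controller, e.g. "4420 4421".
    get_subsystem_paths() {
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" \
            | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }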
00:30:56.271 [2024-07-13 08:17:47.818660] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:56.271 [2024-07-13 08:17:47.818876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.271 [2024-07-13 08:17:47.818924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219a530 with addr=10.0.0.2, port=4420 00:30:56.271 [2024-07-13 08:17:47.818941] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219a530 is same with the state(5) to be set 00:30:56.271 [2024-07-13 08:17:47.818963] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x219a530 (9): Bad file descriptor 00:30:56.271 [2024-07-13 08:17:47.818983] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:56.271 [2024-07-13 08:17:47.818997] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:56.271 [2024-07-13 08:17:47.819010] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:56.271 [2024-07-13 08:17:47.819028] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.271 [2024-07-13 08:17:47.828740] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:56.271 [2024-07-13 08:17:47.828963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.271 [2024-07-13 08:17:47.828991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219a530 with addr=10.0.0.2, port=4420 00:30:56.271 [2024-07-13 08:17:47.829007] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219a530 is same with the state(5) to be set 00:30:56.271 [2024-07-13 08:17:47.829029] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x219a530 (9): Bad file descriptor 00:30:56.271 [2024-07-13 08:17:47.829049] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:56.271 [2024-07-13 08:17:47.829062] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:56.271 [2024-07-13 08:17:47.829074] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:56.271 [2024-07-13 08:17:47.829093] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
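This phase (the @127 remove_listener call and the @131 wait that follows) could presumably be replayed by hand against the same pair of SPDK apps — rpc_cmd appears to wrap the stock scripts/rpc.py, and the NQN, address, and socket below are taken straight from this log:

    # Target side: drop the 4420 listener; discovery AER-notifies the host.
    scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420

    # Host side: once the stale path is pruned, only 4421 should remain.
    scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 \
        | jq -r '.[].ctrlrs[].trid.trsvcid'
    # 4421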
00:30:56.271 08:17:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:56.271 [2024-07-13 08:17:47.838820] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:56.271 [2024-07-13 08:17:47.839019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.271 [2024-07-13 08:17:47.839046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219a530 with addr=10.0.0.2, port=4420 00:30:56.271 [2024-07-13 08:17:47.839062] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219a530 is same with the state(5) to be set 00:30:56.271 [2024-07-13 08:17:47.839083] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x219a530 (9): Bad file descriptor 00:30:56.271 [2024-07-13 08:17:47.839103] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:56.271 [2024-07-13 08:17:47.839115] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:56.271 [2024-07-13 08:17:47.839128] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:56.271 [2024-07-13 08:17:47.839147] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.271 08:17:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:56.271 08:17:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:30:56.271 08:17:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:30:56.271 08:17:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:30:56.271 08:17:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:30:56.271 08:17:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:56.271 08:17:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:30:56.271 08:17:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:30:56.271 08:17:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:30:56.271 08:17:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:30:56.271 [2024-07-13 08:17:47.848905] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:30:56.271 08:17:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:56.271 [2024-07-13 08:17:47.849058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:56.271 [2024-07-13 08:17:47.849086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x219a530 with addr=10.0.0.2, port=4420 00:30:56.271 [2024-07-13 08:17:47.849103] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219a530 is same with the state(5) to be set 00:30:56.271 08:17:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:30:56.271 [2024-07-13 
08:17:47.849124] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x219a530 (9): Bad file descriptor 00:30:56.271 [2024-07-13 08:17:47.849145] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:56.271 [2024-07-13 08:17:47.849158] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:30:56.271 08:17:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:56.271 08:17:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:30:56.271 [2024-07-13 08:17:47.849192] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:56.271 [2024-07-13 08:17:47.849215] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:56.271 [2024-07-13 08:17:47.851778] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:30:56.271 [2024-07-13 08:17:47.851811] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:30:56.271 08:17:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:56.271 08:17:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:30:56.271 08:17:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:30:56.271 08:17:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:30:56.271 08:17:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:30:56.271 08:17:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:30:56.271 08:17:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:30:56.271 08:17:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:30:56.271 08:17:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:56.271 08:17:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:30:56.271 08:17:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:30:56.271 08:17:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:30:56.271 08:17:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:56.271 08:17:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:30:56.271 08:17:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:56.271 08:17:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:56.271 08:17:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:30:56.271 08:17:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:30:56.271 08:17:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:30:56.272 08:17:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:30:56.272 08:17:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:30:56.272 08:17:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:56.272 08:17:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:56.272 08:17:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:56.272 08:17:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:30:56.272 08:17:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:30:56.272 08:17:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:30:56.272 08:17:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:56.272 08:17:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:30:56.272 08:17:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:30:56.272 08:17:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:30:56.272 08:17:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:56.272 08:17:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:30:56.272 08:17:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:56.272 08:17:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:30:56.272 08:17:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:30:56.272 08:17:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:56.272 08:17:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:30:56.272 08:17:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:30:56.272 08:17:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:30:56.272 08:17:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:30:56.272 08:17:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:30:56.272 08:17:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:56.272 08:17:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:30:56.272 08:17:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:30:56.272 08:17:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:56.272 08:17:47 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:56.272 08:17:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:56.272 08:17:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:56.272 08:17:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:56.272 08:17:47 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:56.272 08:17:47 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:56.530 08:17:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:30:56.530 08:17:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:30:56.530 08:17:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:30:56.530 08:17:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:30:56.530 08:17:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:30:56.530 08:17:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:30:56.530 08:17:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:30:56.530 08:17:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:30:56.530 08:17:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:30:56.530 08:17:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:30:56.530 08:17:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:30:56.530 08:17:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:56.530 08:17:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:30:56.530 08:17:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:56.530 08:17:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:56.530 08:17:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:30:56.530 08:17:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:30:56.530 08:17:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:30:56.530 08:17:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:30:56.530 08:17:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:56.530 08:17:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:56.530 08:17:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:57.467 [2024-07-13 08:17:49.113048] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:30:57.467 [2024-07-13 08:17:49.113073] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:30:57.467 [2024-07-13 08:17:49.113095] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:30:57.467 [2024-07-13 08:17:49.199405] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:30:58.035 [2024-07-13 08:17:49.469406] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:30:58.035 [2024-07-13 08:17:49.469446] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:30:58.035 08:17:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:58.035 08:17:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:58.035 08:17:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:30:58.035 08:17:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:58.035 08:17:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:30:58.035 08:17:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:58.035 08:17:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:30:58.035 08:17:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:58.035 08:17:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:58.035 08:17:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:58.035 08:17:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:58.035 request: 00:30:58.035 { 00:30:58.035 "name": "nvme", 00:30:58.035 "trtype": 
"tcp", 00:30:58.035 "traddr": "10.0.0.2", 00:30:58.035 "adrfam": "ipv4", 00:30:58.035 "trsvcid": "8009", 00:30:58.035 "hostnqn": "nqn.2021-12.io.spdk:test", 00:30:58.035 "wait_for_attach": true, 00:30:58.035 "method": "bdev_nvme_start_discovery", 00:30:58.035 "req_id": 1 00:30:58.035 } 00:30:58.035 Got JSON-RPC error response 00:30:58.035 response: 00:30:58.035 { 00:30:58.035 "code": -17, 00:30:58.035 "message": "File exists" 00:30:58.035 } 00:30:58.035 08:17:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:30:58.035 08:17:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:30:58.035 08:17:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:58.035 08:17:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:58.035 08:17:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:58.035 08:17:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:30:58.035 08:17:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:30:58.035 08:17:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:58.035 08:17:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:30:58.035 08:17:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:58.035 08:17:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:30:58.035 08:17:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:30:58.035 08:17:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:58.035 08:17:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:30:58.035 08:17:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:30:58.035 08:17:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:58.035 08:17:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:58.035 08:17:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:58.035 08:17:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:58.035 08:17:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:58.035 08:17:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:58.035 08:17:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:58.035 08:17:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:58.035 08:17:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:58.035 08:17:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:30:58.035 08:17:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:58.035 08:17:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:30:58.035 08:17:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t 
"$arg")" in 00:30:58.035 08:17:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:30:58.035 08:17:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:58.035 08:17:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:30:58.035 08:17:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:58.035 08:17:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:58.035 request: 00:30:58.035 { 00:30:58.035 "name": "nvme_second", 00:30:58.035 "trtype": "tcp", 00:30:58.035 "traddr": "10.0.0.2", 00:30:58.035 "adrfam": "ipv4", 00:30:58.035 "trsvcid": "8009", 00:30:58.035 "hostnqn": "nqn.2021-12.io.spdk:test", 00:30:58.035 "wait_for_attach": true, 00:30:58.035 "method": "bdev_nvme_start_discovery", 00:30:58.035 "req_id": 1 00:30:58.035 } 00:30:58.035 Got JSON-RPC error response 00:30:58.035 response: 00:30:58.035 { 00:30:58.035 "code": -17, 00:30:58.035 "message": "File exists" 00:30:58.035 } 00:30:58.035 08:17:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:30:58.035 08:17:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:30:58.035 08:17:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:58.035 08:17:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:58.035 08:17:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:58.035 08:17:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:30:58.035 08:17:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:30:58.035 08:17:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:30:58.035 08:17:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:58.035 08:17:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:58.035 08:17:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:30:58.035 08:17:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:30:58.035 08:17:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:58.035 08:17:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:30:58.035 08:17:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:30:58.035 08:17:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:30:58.035 08:17:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:58.035 08:17:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:58.035 08:17:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:30:58.035 08:17:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:30:58.035 08:17:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:30:58.035 08:17:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:58.035 08:17:49 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:30:58.035 08:17:49 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:30:58.035 08:17:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:30:58.035 08:17:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:30:58.035 08:17:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:30:58.035 08:17:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:58.035 08:17:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:30:58.035 08:17:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:58.035 08:17:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:30:58.035 08:17:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:58.035 08:17:49 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:30:58.968 [2024-07-13 08:17:50.672946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:58.968 [2024-07-13 08:17:50.673013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196140 with addr=10.0.0.2, port=8010 00:30:58.968 [2024-07-13 08:17:50.673054] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:30:58.968 [2024-07-13 08:17:50.673069] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:30:58.968 [2024-07-13 08:17:50.673082] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:31:00.346 [2024-07-13 08:17:51.675304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:00.346 [2024-07-13 08:17:51.675344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2196140 with addr=10.0.0.2, port=8010 00:31:00.346 [2024-07-13 08:17:51.675366] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:31:00.346 [2024-07-13 08:17:51.675379] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:31:00.346 [2024-07-13 08:17:51.675391] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:31:01.282 [2024-07-13 08:17:52.677544] bdev_nvme.c:7026:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:31:01.282 request: 00:31:01.282 { 00:31:01.282 "name": "nvme_second", 00:31:01.282 "trtype": "tcp", 00:31:01.282 "traddr": "10.0.0.2", 00:31:01.282 "adrfam": "ipv4", 00:31:01.282 "trsvcid": "8010", 00:31:01.282 "hostnqn": "nqn.2021-12.io.spdk:test", 00:31:01.282 "wait_for_attach": false, 00:31:01.282 "attach_timeout_ms": 3000, 00:31:01.282 "method": "bdev_nvme_start_discovery", 00:31:01.282 "req_id": 1 00:31:01.282 } 00:31:01.282 Got JSON-RPC error response 00:31:01.283 response: 00:31:01.283 { 00:31:01.283 "code": -110, 00:31:01.283 "message": "Connection timed out" 00:31:01.283 } 00:31:01.283 08:17:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 
-- # [[ 1 == 0 ]] 00:31:01.283 08:17:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:31:01.283 08:17:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:01.283 08:17:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:01.283 08:17:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:01.283 08:17:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:31:01.283 08:17:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:31:01.283 08:17:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:31:01.283 08:17:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.283 08:17:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:01.283 08:17:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:31:01.283 08:17:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:31:01.283 08:17:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.283 08:17:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:31:01.283 08:17:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:31:01.283 08:17:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 2077748 00:31:01.283 08:17:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:31:01.283 08:17:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:01.283 08:17:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:31:01.283 08:17:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:01.283 08:17:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:31:01.283 08:17:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:01.283 08:17:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:01.283 rmmod nvme_tcp 00:31:01.283 rmmod nvme_fabrics 00:31:01.283 rmmod nvme_keyring 00:31:01.283 08:17:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:01.283 08:17:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:31:01.283 08:17:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:31:01.283 08:17:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 2077612 ']' 00:31:01.283 08:17:52 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 2077612 00:31:01.283 08:17:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 2077612 ']' 00:31:01.283 08:17:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 2077612 00:31:01.283 08:17:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:31:01.283 08:17:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:01.283 08:17:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2077612 00:31:01.283 08:17:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:31:01.283 08:17:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:31:01.283 08:17:52 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2077612' 00:31:01.283 killing process with pid 2077612 00:31:01.283 08:17:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 2077612 00:31:01.283 08:17:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 2077612 00:31:01.542 08:17:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:01.542 08:17:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:01.542 08:17:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:01.542 08:17:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:01.542 08:17:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:01.542 08:17:53 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:01.542 08:17:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:01.542 08:17:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:03.439 08:17:55 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:03.439 00:31:03.439 real 0m13.071s 00:31:03.439 user 0m19.058s 00:31:03.439 sys 0m2.664s 00:31:03.439 08:17:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:03.439 08:17:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:03.439 ************************************ 00:31:03.439 END TEST nvmf_host_discovery 00:31:03.439 ************************************ 00:31:03.439 08:17:55 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:31:03.439 08:17:55 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:31:03.439 08:17:55 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:31:03.439 08:17:55 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:03.439 08:17:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:03.439 ************************************ 00:31:03.439 START TEST nvmf_host_multipath_status 00:31:03.439 ************************************ 00:31:03.439 08:17:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:31:03.697 * Looking for test storage... 
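The teardown above is the kill-and-wait pattern from autotest_common.sh: signal the nvmf target, reap it, then unload the kernel initiator modules. A minimal sketch of that sequence, assuming the target was started by the same shell (the PID and module names mirror the log; this is an illustration, not the script source):

pid=2077612                       # nvmfpid from the log; illustrative
kill "$pid"
wait "$pid" 2>/dev/null || true   # reap it so ports/shm are freed; wait only covers our own children
# unload the kernel NVMe/TCP initiator stack; removing nvme-tcp also drops
# its dependents, which is why rmmod nvme_fabrics/nvme_keyring appear above
for mod in nvme-tcp nvme-fabrics nvme-keyring; do
    modprobe -v -r "$mod" || true
done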
00:31:03.697 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:03.697 08:17:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:03.697 08:17:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:31:03.697 08:17:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:03.697 08:17:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:03.697 08:17:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:03.697 08:17:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:03.697 08:17:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:03.697 08:17:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:03.697 08:17:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:03.697 08:17:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:03.697 08:17:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:03.697 08:17:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:03.697 08:17:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:03.697 08:17:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:03.698 08:17:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:03.698 08:17:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:03.698 08:17:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:03.698 08:17:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:03.698 08:17:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:03.698 08:17:55 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:03.698 08:17:55 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:03.698 08:17:55 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:03.698 08:17:55 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:03.698 08:17:55 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:03.698 08:17:55 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:03.698 08:17:55 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:31:03.698 08:17:55 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:03.698 08:17:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:31:03.698 08:17:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:03.698 08:17:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:03.698 08:17:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:03.698 08:17:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:03.698 08:17:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:03.698 08:17:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:03.698 08:17:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:03.698 08:17:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:03.698 08:17:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:31:03.698 08:17:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:31:03.698 08:17:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:03.698 08:17:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:31:03.698 08:17:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:03.698 08:17:55 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:31:03.698 08:17:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:31:03.698 08:17:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:03.698 08:17:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:03.698 08:17:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:03.698 08:17:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:03.698 08:17:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:03.698 08:17:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:03.698 08:17:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:03.698 08:17:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:03.698 08:17:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:03.698 08:17:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:03.698 08:17:55 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:31:03.698 08:17:55 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:05.604 08:17:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:05.604 08:17:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:31:05.604 08:17:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:05.604 08:17:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:05.604 08:17:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:05.604 08:17:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:05.604 08:17:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:05.604 08:17:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:31:05.604 08:17:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:05.604 08:17:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:31:05.604 08:17:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:31:05.604 08:17:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:31:05.604 08:17:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:31:05.604 08:17:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:31:05.604 08:17:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:31:05.604 08:17:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:05.604 08:17:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:05.604 08:17:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:05.604 08:17:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:05.604 08:17:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:05.604 08:17:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:05.604 08:17:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:05.604 08:17:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:05.604 08:17:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:05.604 08:17:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:05.604 08:17:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:05.604 08:17:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:05.604 08:17:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:05.605 08:17:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:05.605 08:17:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:05.605 08:17:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:05.605 08:17:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:05.605 08:17:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:05.605 08:17:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:31:05.605 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:05.605 08:17:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:05.605 08:17:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:05.605 08:17:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:05.605 08:17:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:05.605 08:17:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:05.605 08:17:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:05.605 08:17:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:05.605 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:05.605 08:17:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:05.605 08:17:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:05.605 08:17:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:05.605 08:17:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:05.605 08:17:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:05.605 08:17:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:05.605 08:17:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:05.605 08:17:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
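The device scan above matches NICs by vendor:device ID (0x8086:0x159b, the Intel E810 function driven by ice, appears twice on this box) and then resolves each PCI function to its kernel net device through sysfs. A minimal sketch of that resolution step, with the two bus addresses taken from the log:

# map each supported PCI function to the netdev(s) bound to it
for pci in 0000:0a:00.0 0000:0a:00.1; do
    for path in "/sys/bus/pci/devices/$pci/net/"*; do
        [ -e "$path" ] || continue   # no netdev bound (e.g. driver unloaded)
        echo "Found net devices under $pci: ${path##*/}"
    done
done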
00:31:05.605 08:17:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:05.605 08:17:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:05.605 08:17:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:05.605 08:17:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:05.605 08:17:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:05.605 08:17:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:05.605 08:17:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:05.605 08:17:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:05.605 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:05.605 08:17:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:05.605 08:17:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:05.605 08:17:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:05.605 08:17:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:05.605 08:17:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:05.605 08:17:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:05.605 08:17:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:05.605 08:17:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:05.605 08:17:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:05.605 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:05.605 08:17:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:05.605 08:17:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:05.605 08:17:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:31:05.605 08:17:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:05.605 08:17:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:05.605 08:17:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:05.605 08:17:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:05.605 08:17:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:05.605 08:17:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:05.605 08:17:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:05.605 08:17:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:05.605 08:17:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:05.605 08:17:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:05.605 08:17:57 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:05.605 08:17:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:05.605 08:17:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:05.605 08:17:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:05.605 08:17:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:05.605 08:17:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:05.605 08:17:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:05.605 08:17:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:05.605 08:17:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:05.605 08:17:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:05.605 08:17:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:05.605 08:17:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:05.605 08:17:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:05.605 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:05.605 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.243 ms 00:31:05.605 00:31:05.605 --- 10.0.0.2 ping statistics --- 00:31:05.605 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:05.605 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:31:05.605 08:17:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:05.605 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:05.605 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms 00:31:05.605 00:31:05.605 --- 10.0.0.1 ping statistics --- 00:31:05.605 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:05.605 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:31:05.605 08:17:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:05.605 08:17:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:31:05.605 08:17:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:05.605 08:17:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:05.605 08:17:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:05.605 08:17:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:05.605 08:17:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:05.605 08:17:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:05.605 08:17:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:05.605 08:17:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:31:05.605 08:17:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:05.605 08:17:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:05.605 08:17:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:05.605 08:17:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=2080775 00:31:05.605 08:17:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:31:05.605 08:17:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 2080775 00:31:05.605 08:17:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 2080775 ']' 00:31:05.605 08:17:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:05.605 08:17:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:05.605 08:17:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:05.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:05.605 08:17:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:05.605 08:17:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:05.864 [2024-07-13 08:17:57.350656] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
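nvmfappstart above launches nvmf_tgt inside the target namespace and then blocks in waitforlisten until the JSON-RPC socket answers. A minimal sketch of that readiness poll, assuming the workspace rpc.py path from the log (the retry budget is illustrative):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
for i in $(seq 1 100); do
    # rpc_get_methods only succeeds once the app is listening on the socket
    if "$rpc" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; then
        break
    fi
    sleep 0.1
done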
00:31:05.864 [2024-07-13 08:17:57.350750] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:05.864 EAL: No free 2048 kB hugepages reported on node 1 00:31:05.864 [2024-07-13 08:17:57.416263] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:05.864 [2024-07-13 08:17:57.505211] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:05.864 [2024-07-13 08:17:57.505264] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:05.864 [2024-07-13 08:17:57.505293] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:05.864 [2024-07-13 08:17:57.505304] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:05.864 [2024-07-13 08:17:57.505314] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:05.864 [2024-07-13 08:17:57.505397] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:05.864 [2024-07-13 08:17:57.505402] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:06.122 08:17:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:06.122 08:17:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:31:06.122 08:17:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:06.122 08:17:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:06.122 08:17:57 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:06.122 08:17:57 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:06.122 08:17:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=2080775 00:31:06.122 08:17:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:06.381 [2024-07-13 08:17:57.911694] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:06.381 08:17:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:31:06.640 Malloc0 00:31:06.640 08:17:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:31:06.899 08:17:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:07.158 08:17:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:07.416 [2024-07-13 08:17:59.007257] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:07.416 08:17:59 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:07.676 [2024-07-13 08:17:59.251924] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:07.677 08:17:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=2081011 00:31:07.677 08:17:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:31:07.677 08:17:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:31:07.677 08:17:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 2081011 /var/tmp/bdevperf.sock 00:31:07.677 08:17:59 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 2081011 ']' 00:31:07.677 08:17:59 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:07.677 08:17:59 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:07.677 08:17:59 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:07.677 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:07.677 08:17:59 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:07.677 08:17:59 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:07.935 08:17:59 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:07.935 08:17:59 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:31:07.935 08:17:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:31:08.193 08:17:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:31:08.774 Nvme0n1 00:31:08.774 08:18:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:31:09.341 Nvme0n1 00:31:09.341 08:18:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:31:09.341 08:18:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:31:11.247 08:18:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:31:11.247 08:18:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:31:11.506 08:18:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:11.766 08:18:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:31:12.700 08:18:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:31:12.700 08:18:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:12.700 08:18:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:12.700 08:18:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:12.958 08:18:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:12.958 08:18:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:12.958 08:18:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:12.958 08:18:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:13.216 08:18:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:13.216 08:18:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:13.216 08:18:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:13.216 08:18:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:13.474 08:18:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:13.474 08:18:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:13.474 08:18:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:13.474 08:18:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:13.732 08:18:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:13.732 08:18:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:13.732 08:18:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:13.732 08:18:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:13.990 08:18:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:13.990 08:18:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:13.990 08:18:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:13.990 08:18:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:14.248 08:18:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:14.248 08:18:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:31:14.248 08:18:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:14.506 08:18:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:14.763 08:18:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:31:15.696 08:18:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:31:15.696 08:18:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:31:15.696 08:18:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:15.696 08:18:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:15.954 08:18:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:15.954 08:18:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:15.954 08:18:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:15.954 08:18:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:16.211 08:18:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:16.211 08:18:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:16.211 08:18:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:16.211 08:18:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:16.469 08:18:08 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:16.469 08:18:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:16.469 08:18:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:16.469 08:18:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:16.726 08:18:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:16.726 08:18:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:16.726 08:18:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:16.726 08:18:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:16.984 08:18:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:16.984 08:18:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:16.984 08:18:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:16.984 08:18:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:17.242 08:18:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:17.242 08:18:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:31:17.242 08:18:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:17.500 08:18:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:31:17.759 08:18:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:31:19.136 08:18:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:31:19.136 08:18:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:19.136 08:18:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:19.136 08:18:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:19.136 08:18:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:19.136 08:18:10 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:19.136 08:18:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:19.136 08:18:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:19.393 08:18:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:19.393 08:18:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:19.393 08:18:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:19.393 08:18:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:19.651 08:18:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:19.651 08:18:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:19.651 08:18:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:19.651 08:18:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:19.908 08:18:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:19.908 08:18:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:19.908 08:18:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:19.909 08:18:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:20.166 08:18:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:20.166 08:18:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:20.166 08:18:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:20.166 08:18:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:20.423 08:18:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:20.423 08:18:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:31:20.423 08:18:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:20.680 08:18:12 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:31:20.938 08:18:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:31:21.922 08:18:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:31:21.922 08:18:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:21.922 08:18:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:21.922 08:18:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:22.179 08:18:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:22.179 08:18:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:22.179 08:18:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:22.179 08:18:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:22.436 08:18:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:22.436 08:18:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:22.436 08:18:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:22.436 08:18:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:22.693 08:18:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:22.693 08:18:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:22.693 08:18:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:22.693 08:18:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:22.950 08:18:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:22.950 08:18:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:22.950 08:18:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:22.950 08:18:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:22.950 08:18:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # [[ true == \t\r\u\e ]] 00:31:22.950 08:18:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:31:23.206 08:18:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:23.206 08:18:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:23.206 08:18:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:23.206 08:18:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:31:23.206 08:18:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:31:23.772 08:18:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:31:23.772 08:18:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:31:25.150 08:18:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:31:25.150 08:18:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:31:25.150 08:18:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:25.150 08:18:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:25.150 08:18:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:25.150 08:18:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:25.150 08:18:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:25.150 08:18:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:25.408 08:18:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:25.408 08:18:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:25.408 08:18:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:25.408 08:18:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:25.665 08:18:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:25.665 08:18:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # 
00:31:25.665 08:18:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:31:25.665 08:18:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:25.665 08:18:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:31:25.923 08:18:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:31:25.923 08:18:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false
00:31:25.923 08:18:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:25.923 08:18:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:31:26.180 08:18:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:31:26.180 08:18:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:31:26.180 08:18:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:26.181 08:18:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:31:26.438 08:18:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:31:26.438 08:18:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized
00:31:26.438 08:18:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
00:31:26.696 08:18:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:31:26.955 08:18:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1
00:31:27.892 08:18:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true
00:31:27.892 08:18:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:31:27.892 08:18:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:27.893 08:18:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:31:28.150 08:18:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:31:28.150 08:18:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:31:28.150 08:18:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:28.150 08:18:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:31:28.408 08:18:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:31:28.408 08:18:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:31:28.408 08:18:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:28.408 08:18:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:31:28.667 08:18:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:31:28.667 08:18:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:31:28.667 08:18:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:28.667 08:18:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:31:28.925 08:18:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:31:28.925 08:18:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false
00:31:28.925 08:18:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:28.925 08:18:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:31:29.184 08:18:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:31:29.184 08:18:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:31:29.184 08:18:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:29.184 08:18:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:31:29.442 08:18:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:31:29.442 08:18:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active
00:31:29.700 08:18:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized
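Up to this point the bdev has been running under the default active_passive multipath policy, where only one path is marked current at a time. The @116 call above switches bdev Nvme0n1 to active_active, which is why the next check_status expects current=true on both 4420 and 4421 once both listeners are optimized. As a standalone command against the same socket this run uses:

    # switch the multipath policy of bdev Nvme0n1; with active_active, every
    # optimized path carries I/O and reports "current": true in bdev_nvme_get_io_paths
    scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active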
00:31:29.700 08:18:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized
00:31:29.959 08:18:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:31:30.219 08:18:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1
00:31:31.183 08:18:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true
00:31:31.183 08:18:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:31:31.183 08:18:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:31.183 08:18:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:31:31.440 08:18:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:31:31.440 08:18:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:31:31.440 08:18:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:31.440 08:18:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:31:31.697 08:18:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:31:31.697 08:18:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:31:31.697 08:18:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:31.697 08:18:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:31:31.954 08:18:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:31:31.954 08:18:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:31:31.954 08:18:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:31.954 08:18:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:31:32.213 08:18:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:31:32.213 08:18:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:31:32.213 08:18:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:32.213 08:18:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:31:32.471 08:18:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:31:32.471 08:18:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:31:32.471 08:18:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:32.471 08:18:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:31:32.729 08:18:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:31:32.729 08:18:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized
00:31:32.729 08:18:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:31:32.986 08:18:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:31:33.244 08:18:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1
00:31:34.180 08:18:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true
00:31:34.180 08:18:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:31:34.180 08:18:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:34.180 08:18:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:31:34.438 08:18:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:31:34.438 08:18:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:31:34.438 08:18:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:34.438 08:18:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:31:34.696 08:18:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:31:34.696 08:18:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:31:34.696 08:18:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:34.696 08:18:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:31:34.954 08:18:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
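With 4420 set to non_optimized and 4421 to optimized, active_active routes I/O only through the optimized path, so 4420 drops to current=false while staying connected and accessible, exactly what the @125 check above asserts. When inspecting such a state by hand, the per-field jq calls can be collapsed into a single projection over all paths (an illustrative one-liner, not part of the test script):

    # print port, current, connected, accessible for every I/O path in one pass
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
      jq -r '.poll_groups[].io_paths[] |
        "\(.transport.trsvcid) current=\(.current) connected=\(.connected) accessible=\(.accessible)"'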
00:31:34.954 08:18:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:31:34.954 08:18:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:34.954 08:18:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:31:35.212 08:18:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:31:35.212 08:18:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:31:35.212 08:18:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:35.212 08:18:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:31:35.470 08:18:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:31:35.470 08:18:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:31:35.470 08:18:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:35.470 08:18:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:31:35.728 08:18:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:31:35.728 08:18:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized
00:31:35.728 08:18:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:31:35.987 08:18:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized
00:31:36.246 08:18:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1
00:31:37.183 08:18:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true
00:31:37.183 08:18:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:31:37.183 08:18:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:37.183 08:18:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:31:37.441 08:18:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:31:37.441 08:18:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:31:37.441 08:18:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:37.441 08:18:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:31:37.699 08:18:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:31:37.699 08:18:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:31:37.699 08:18:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:37.699 08:18:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:31:37.957 08:18:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:31:37.957 08:18:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:31:37.957 08:18:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:37.957 08:18:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:31:38.215 08:18:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:31:38.215 08:18:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:31:38.215 08:18:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:38.215 08:18:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:31:38.473 08:18:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:31:38.473 08:18:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:31:38.473 08:18:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:38.473 08:18:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:31:38.732 08:18:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:31:38.732 08:18:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible
00:31:38.732 08:18:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
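Every set_ANA_state step in this trace expands into the same two @59/@60 RPC calls, one per listener. A minimal reconstruction inferred from the xtrace, not the verbatim multipath_status.sh source ($rootdir again stands in for the SPDK checkout path):

    set_ANA_state() {
      # $1 = ANA state for the 4420 listener, $2 = ANA state for the 4421 listener
      $rootdir/scripts/rpc.py nvmf_subsystem_listener_set_ana_state \
          nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n "$1"
      $rootdir/scripts/rpc.py nvmf_subsystem_listener_set_ana_state \
          nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n "$2"
    }

Note these calls go to the target's default RPC socket (no -s flag), while the path checks query the bdevperf host socket; the sleep 1 between them gives the host time to observe the ANA change.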
00:31:38.989 08:18:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
00:31:39.247 08:18:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1
00:31:40.184 08:18:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false
00:31:40.184 08:18:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:31:40.184 08:18:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:40.184 08:18:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:31:40.453 08:18:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:31:40.453 08:18:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:31:40.453 08:18:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:40.453 08:18:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:31:40.713 08:18:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:31:40.713 08:18:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:31:40.713 08:18:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:40.713 08:18:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:31:40.969 08:18:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:31:40.969 08:18:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:31:40.969 08:18:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:40.969 08:18:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:31:41.226 08:18:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:31:41.226 08:18:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:31:41.226 08:18:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:41.226 08:18:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:31:41.483 08:18:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:31:41.483 08:18:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:31:41.483 08:18:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:31:41.483 08:18:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:31:41.742 08:18:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:31:41.742 08:18:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 2081011
00:31:41.742 08:18:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 2081011 ']'
00:31:41.742 08:18:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 2081011
00:31:41.742 08:18:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname
00:31:41.742 08:18:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:31:41.742 08:18:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2081011
00:31:41.742 08:18:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2
00:31:41.742 08:18:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']'
00:31:41.742 08:18:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2081011'
killing process with pid 2081011
00:31:41.742 08:18:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 2081011
00:31:41.742 08:18:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 2081011
00:31:41.742 Connection closed with partial response:
00:31:41.742
00:31:41.742
00:31:42.010 08:18:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 2081011
00:31:42.010 08:18:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:31:42.010 [2024-07-13 08:17:59.315329] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization...
00:31:42.010 [2024-07-13 08:17:59.315447] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2081011 ]
00:31:42.010 EAL: No free 2048 kB hugepages reported on node 1
00:31:42.011 [2024-07-13 08:17:59.376352] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:31:42.011 [2024-07-13 08:17:59.468296] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:31:42.011 Running I/O for 90 seconds...
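Everything from here to the end of the dump is the bdevperf host-side log replayed by the @141 cat: while the listeners cycled through ANA states, in-flight WRITE/READ commands completed with path-related errors that the multipath layer is expected to absorb and retry. A dump like this is easier to read in aggregate; an illustrative post-processing pass over the same try.txt, not part of the test itself:

    # count path-related completion notices in the bdevperf log
    grep -c 'ASYMMETRIC ACCESS INACCESSIBLE' try.txt
    # or break the tally down by queue id
    grep -o 'qid:[0-9]*' try.txt | sort | uniq -c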
00:31:42.011 [2024-07-13 08:18:15.178053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:44816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.011 [2024-07-13 08:18:15.178112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:31:42.011 [2024-07-13 08:18:15.178203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:44824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.011 [2024-07-13 08:18:15.178225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:31:42.011 [2024-07-13 08:18:15.178250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:44832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.011 [2024-07-13 08:18:15.178281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:31:42.011 [2024-07-13 08:18:15.178304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:44840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.011 [2024-07-13 08:18:15.178335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:42.011 [2024-07-13 08:18:15.178359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:44848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.011 [2024-07-13 08:18:15.178376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:31:42.011 [2024-07-13 08:18:15.178399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:44856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.011 [2024-07-13 08:18:15.178415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:31:42.011 [2024-07-13 08:18:15.178438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:44864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.011 [2024-07-13 08:18:15.178454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:31:42.011 [2024-07-13 08:18:15.178478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:44872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.011 [2024-07-13 08:18:15.178495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:31:42.011 [2024-07-13 08:18:15.178666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:44880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.011 [2024-07-13 08:18:15.178709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:42.011 [2024-07-13 08:18:15.178740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:44888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.011 [2024-07-13 08:18:15.178758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:77 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:31:42.011 [2024-07-13 08:18:15.178783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:44896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.011 [2024-07-13 08:18:15.178809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:31:42.011 [2024-07-13 08:18:15.178835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:44904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.011 [2024-07-13 08:18:15.178875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:31:42.011 [2024-07-13 08:18:15.178903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:44912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.011 [2024-07-13 08:18:15.178935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:31:42.011 [2024-07-13 08:18:15.178960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:44920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.011 [2024-07-13 08:18:15.178977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:31:42.011 [2024-07-13 08:18:15.179015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:44928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.011 [2024-07-13 08:18:15.179032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:31:42.011 [2024-07-13 08:18:15.179055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:44936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.011 [2024-07-13 08:18:15.179072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:31:42.011 [2024-07-13 08:18:15.179644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:44944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.011 [2024-07-13 08:18:15.179667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:42.011 [2024-07-13 08:18:15.179696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:44952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.011 [2024-07-13 08:18:15.179714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:31:42.011 [2024-07-13 08:18:15.179738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:44960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.011 [2024-07-13 08:18:15.179755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:31:42.011 [2024-07-13 08:18:15.179779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:44016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.011 [2024-07-13 08:18:15.179796] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:31:42.011 [2024-07-13 08:18:15.179819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:44024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.011 [2024-07-13 08:18:15.179836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:42.011 [2024-07-13 08:18:15.179884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:44032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.011 [2024-07-13 08:18:15.179935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:31:42.011 [2024-07-13 08:18:15.179961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:44040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.011 [2024-07-13 08:18:15.179978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:31:42.011 [2024-07-13 08:18:15.180022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:44048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.011 [2024-07-13 08:18:15.180039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:31:42.011 [2024-07-13 08:18:15.180062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:44056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.011 [2024-07-13 08:18:15.180078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:31:42.011 [2024-07-13 08:18:15.180100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:44064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.011 [2024-07-13 08:18:15.180116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:31:42.011 [2024-07-13 08:18:15.180139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:44968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.011 [2024-07-13 08:18:15.180155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:31:42.011 [2024-07-13 08:18:15.180178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:44976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.011 [2024-07-13 08:18:15.180195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:42.011 [2024-07-13 08:18:15.180233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:44072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.011 [2024-07-13 08:18:15.180249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:42.011 [2024-07-13 08:18:15.180271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:44080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:31:42.011 [2024-07-13 08:18:15.180287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:31:42.011 [2024-07-13 08:18:15.180309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:44088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.011 [2024-07-13 08:18:15.180325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:31:42.011 [2024-07-13 08:18:15.180347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:44096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.011 [2024-07-13 08:18:15.180363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:31:42.011 [2024-07-13 08:18:15.180386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:44104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.011 [2024-07-13 08:18:15.180402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:31:42.011 [2024-07-13 08:18:15.180474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:44112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.011 [2024-07-13 08:18:15.180509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:31:42.011 [2024-07-13 08:18:15.180537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:44120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.011 [2024-07-13 08:18:15.180555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:31:42.011 [2024-07-13 08:18:15.180584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:44128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.011 [2024-07-13 08:18:15.180601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:42.011 [2024-07-13 08:18:15.180625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:44136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.011 [2024-07-13 08:18:15.180641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:31:42.011 [2024-07-13 08:18:15.180665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:44144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.011 [2024-07-13 08:18:15.180682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:31:42.011 [2024-07-13 08:18:15.180706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:44152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.011 [2024-07-13 08:18:15.180722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:31:42.011 [2024-07-13 08:18:15.180747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:82 nsid:1 lba:44160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.011 [2024-07-13 08:18:15.180763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:42.011 [2024-07-13 08:18:15.180800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:44168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.012 [2024-07-13 08:18:15.180817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:31:42.012 [2024-07-13 08:18:15.180840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:44176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.012 [2024-07-13 08:18:15.180879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:31:42.012 [2024-07-13 08:18:15.180906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:44184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.012 [2024-07-13 08:18:15.180929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:31:42.012 [2024-07-13 08:18:15.180953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:44192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.012 [2024-07-13 08:18:15.180969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:31:42.012 [2024-07-13 08:18:15.180993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:44200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.012 [2024-07-13 08:18:15.181009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:31:42.012 [2024-07-13 08:18:15.181033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:44208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.012 [2024-07-13 08:18:15.181050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:31:42.012 [2024-07-13 08:18:15.181074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:44216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.012 [2024-07-13 08:18:15.181090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:31:42.012 [2024-07-13 08:18:15.181119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:44224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.012 [2024-07-13 08:18:15.181136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:31:42.012 [2024-07-13 08:18:15.181175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:44232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.012 [2024-07-13 08:18:15.181192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:31:42.012 [2024-07-13 08:18:15.181215] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:44240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.012 [2024-07-13 08:18:15.181231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:42.012 [2024-07-13 08:18:15.181254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:44248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.012 [2024-07-13 08:18:15.181269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:31:42.012 [2024-07-13 08:18:15.181308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:44256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.012 [2024-07-13 08:18:15.181325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:31:42.012 [2024-07-13 08:18:15.181350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:44264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.012 [2024-07-13 08:18:15.181366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:42.012 [2024-07-13 08:18:15.181391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:44272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.012 [2024-07-13 08:18:15.181408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:31:42.012 [2024-07-13 08:18:15.181432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:44280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.012 [2024-07-13 08:18:15.181452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:31:42.012 [2024-07-13 08:18:15.181477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:44288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.012 [2024-07-13 08:18:15.181509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:31:42.012 [2024-07-13 08:18:15.181536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:44296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.012 [2024-07-13 08:18:15.181554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:31:42.012 [2024-07-13 08:18:15.181579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:44304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.012 [2024-07-13 08:18:15.181597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:31:42.012 [2024-07-13 08:18:15.181623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:44984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.012 [2024-07-13 08:18:15.181641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
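For reading these completion records: the (03/02) pair is the NVMe status code type and status code, SCT 0x3 (Path Related Status) with SC 0x02 (Asymmetric Access Inaccessible), and dnr:0 means the Do Not Retry bit is clear, so the host is free to reissue the command on the other path, which is exactly the behavior this test exercises. A tiny decoder for the pairs that can appear during ANA transitions (an illustrative sketch, not an SPDK utility):

    # map "(SCT/SC)" pairs from the log to their NVMe meanings
    decode_status() {
      case "$1" in
        00/00) echo "generic: successful completion" ;;
        03/02) echo "path related: asymmetric access inaccessible" ;;
        03/03) echo "path related: asymmetric access transition" ;;
        *)     echo "not covered by this sketch: $1" ;;
      esac
    }
    decode_status 03/02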
00:31:42.012 [2024-07-13 08:18:15.181667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:44992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.012 [2024-07-13 08:18:15.181689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:42.012 [2024-07-13 08:18:15.181715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:45000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.012 [2024-07-13 08:18:15.181733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:42.012 [2024-07-13 08:18:15.181849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:45008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.012 [2024-07-13 08:18:15.181879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:31:42.012 [2024-07-13 08:18:15.181912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:45016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.012 [2024-07-13 08:18:15.181930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:31:42.012 [2024-07-13 08:18:15.181958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:45024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.012 [2024-07-13 08:18:15.181975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:31:42.012 [2024-07-13 08:18:15.182002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:44312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.012 [2024-07-13 08:18:15.182019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:31:42.012 [2024-07-13 08:18:15.182046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:44320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.012 [2024-07-13 08:18:15.182063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:31:42.012 [2024-07-13 08:18:15.182090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:44328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.012 [2024-07-13 08:18:15.182108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:31:42.012 [2024-07-13 08:18:15.182135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:44336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.012 [2024-07-13 08:18:15.182152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:42.012 [2024-07-13 08:18:15.182194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:44344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.012 [2024-07-13 08:18:15.182210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:9 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:31:42.012 [2024-07-13 08:18:15.182237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:44352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:42.012 [2024-07-13 08:18:15.182254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:000b p:0 m:0 dnr:0
[log condensed: two bursts of per-I/O notices follow, identical in form to the pair above. At 08:18:15.182-.184, roughly sixty READ commands (sqid:1, lba 44360-44808, len:8) plus one WRITE (lba 45032); at 08:18:30.757-.770, a mixed stream of READs (lba 95848-96816) and WRITEs (lba 96552-97152). Every command is echoed by nvme_io_qpair_print_command and every completion by spdk_nvme_print_completion with status ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 p:0 m:0 dnr:0, cid varying, sqhd incrementing 000b-0045 in the first burst and 0062 wrapping past 0000 to 0074 in the second. The final record is truncated at the line below.]
00:31:42.017 [2024-07-13 08:18:30.770683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26
nsid:1 lba:96728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.017 [2024-07-13 08:18:30.770701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:31:42.017 [2024-07-13 08:18:30.770723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:96664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.017 [2024-07-13 08:18:30.770740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:31:42.017 [2024-07-13 08:18:30.770762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:96952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.017 [2024-07-13 08:18:30.770779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:31:42.017 [2024-07-13 08:18:30.770801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:96496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.017 [2024-07-13 08:18:30.770817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:31:42.017 [2024-07-13 08:18:30.770839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:96864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.017 [2024-07-13 08:18:30.770860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:31:42.017 [2024-07-13 08:18:30.770894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:96096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.017 [2024-07-13 08:18:30.770912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:31:42.018 [2024-07-13 08:18:30.772229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:96856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.018 [2024-07-13 08:18:30.772254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:31:42.018 [2024-07-13 08:18:30.772282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:96888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.018 [2024-07-13 08:18:30.772301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:31:42.018 [2024-07-13 08:18:30.772324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:96584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.018 [2024-07-13 08:18:30.772341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:31:42.018 [2024-07-13 08:18:30.772363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:96640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.018 [2024-07-13 08:18:30.772381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:31:42.018 [2024-07-13 08:18:30.772405] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:97176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.018 [2024-07-13 08:18:30.772422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:31:42.018 [2024-07-13 08:18:30.772444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:97192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.018 [2024-07-13 08:18:30.772461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:42.018 [2024-07-13 08:18:30.772483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:97208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.018 [2024-07-13 08:18:30.772515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:31:42.018 [2024-07-13 08:18:30.772539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:97224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.018 [2024-07-13 08:18:30.772555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:31:42.018 [2024-07-13 08:18:30.772582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:97240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.018 [2024-07-13 08:18:30.772599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:31:42.018 [2024-07-13 08:18:30.772636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:97256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.018 [2024-07-13 08:18:30.772651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:31:42.018 [2024-07-13 08:18:30.772672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:97272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.018 [2024-07-13 08:18:30.772692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:31:42.018 [2024-07-13 08:18:30.772715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:96992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.018 [2024-07-13 08:18:30.772730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:31:42.018 [2024-07-13 08:18:30.772752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:97024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.018 [2024-07-13 08:18:30.772767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:31:42.018 [2024-07-13 08:18:30.772788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:97056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.018 [2024-07-13 08:18:30.772803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 
00:31:42.018 [2024-07-13 08:18:30.772824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:97088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.018 [2024-07-13 08:18:30.772840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:31:42.018 [2024-07-13 08:18:30.772887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:97120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.018 [2024-07-13 08:18:30.772905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:31:42.018 [2024-07-13 08:18:30.772944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:97152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.018 [2024-07-13 08:18:30.772960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:31:42.018 [2024-07-13 08:18:30.772983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:96816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.018 [2024-07-13 08:18:30.773000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:31:42.018 [2024-07-13 08:18:30.773023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:96104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.018 [2024-07-13 08:18:30.773039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:31:42.018 [2024-07-13 08:18:30.773062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:96360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.018 [2024-07-13 08:18:30.773079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:31:42.018 [2024-07-13 08:18:30.773102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:96664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.018 [2024-07-13 08:18:30.773118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:31:42.018 [2024-07-13 08:18:30.773141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:96496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.018 [2024-07-13 08:18:30.773172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:31:42.018 [2024-07-13 08:18:30.773195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:96096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.018 [2024-07-13 08:18:30.773212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:31:42.018 [2024-07-13 08:18:30.774776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:96768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.018 [2024-07-13 08:18:30.774864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:31:42.018 [2024-07-13 08:18:30.774908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:97288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.018 [2024-07-13 08:18:30.774944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:31:42.018 [2024-07-13 08:18:30.774972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:97304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.018 [2024-07-13 08:18:30.774991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:31:42.018 [2024-07-13 08:18:30.775014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:97320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.018 [2024-07-13 08:18:30.775032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:31:42.018 [2024-07-13 08:18:30.775055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:97336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.018 [2024-07-13 08:18:30.775074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:31:42.018 [2024-07-13 08:18:30.775097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:97352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.018 [2024-07-13 08:18:30.775114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:31:42.018 [2024-07-13 08:18:30.775138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:97368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.018 [2024-07-13 08:18:30.775156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:31:42.018 [2024-07-13 08:18:30.775179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:97384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.018 [2024-07-13 08:18:30.775197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:31:42.018 [2024-07-13 08:18:30.775220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:97400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.018 [2024-07-13 08:18:30.775237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:31:42.018 [2024-07-13 08:18:30.775261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:96888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.018 [2024-07-13 08:18:30.775278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:31:42.018 [2024-07-13 08:18:30.775302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:96640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.018 [2024-07-13 08:18:30.775319] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:31:42.018 [2024-07-13 08:18:30.775342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:97192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.018 [2024-07-13 08:18:30.775360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:31:42.018 [2024-07-13 08:18:30.775390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:97224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.018 [2024-07-13 08:18:30.775409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:31:42.018 [2024-07-13 08:18:30.775433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:97256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.018 [2024-07-13 08:18:30.775451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:31:42.018 [2024-07-13 08:18:30.775476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:96992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.018 [2024-07-13 08:18:30.775494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:31:42.018 [2024-07-13 08:18:30.775518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:97056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.018 [2024-07-13 08:18:30.775535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:42.018 [2024-07-13 08:18:30.775559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:97120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.018 [2024-07-13 08:18:30.775576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:42.018 [2024-07-13 08:18:30.775599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:96816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.018 [2024-07-13 08:18:30.775617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:31:42.018 [2024-07-13 08:18:30.775641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:96360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.019 [2024-07-13 08:18:30.775658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:31:42.019 [2024-07-13 08:18:30.775682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:96496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.019 [2024-07-13 08:18:30.775699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:31:42.019 [2024-07-13 08:18:30.777341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:96896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:42.019 [2024-07-13 08:18:30.777366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:31:42.019 [2024-07-13 08:18:30.777395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:96928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.019 [2024-07-13 08:18:30.777413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:31:42.019 [2024-07-13 08:18:30.777436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:96960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.019 [2024-07-13 08:18:30.777454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:31:42.019 [2024-07-13 08:18:30.777493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:96824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.019 [2024-07-13 08:18:30.777510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:31:42.019 [2024-07-13 08:18:30.777547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:96880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.019 [2024-07-13 08:18:30.777568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:31:42.019 [2024-07-13 08:18:30.777592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:97408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.019 [2024-07-13 08:18:30.777623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:31:42.019 [2024-07-13 08:18:30.777645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:97424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.019 [2024-07-13 08:18:30.777660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:31:42.019 [2024-07-13 08:18:30.777680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:97440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.019 [2024-07-13 08:18:30.777695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:31:42.019 [2024-07-13 08:18:30.777716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:97456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.019 [2024-07-13 08:18:30.777731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:31:42.019 [2024-07-13 08:18:30.777752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:97472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.019 [2024-07-13 08:18:30.777768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:31:42.019 [2024-07-13 08:18:30.777789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 
nsid:1 lba:97288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.019 [2024-07-13 08:18:30.777805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:31:42.019 [2024-07-13 08:18:30.777825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:97320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.019 [2024-07-13 08:18:30.777841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:31:42.019 [2024-07-13 08:18:30.777892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:97352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.019 [2024-07-13 08:18:30.777929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:31:42.019 [2024-07-13 08:18:30.777954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:97384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.019 [2024-07-13 08:18:30.777972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:31:42.019 [2024-07-13 08:18:30.777995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:96888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.019 [2024-07-13 08:18:30.778012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:31:42.019 [2024-07-13 08:18:30.778035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:97192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.019 [2024-07-13 08:18:30.778053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:31:42.019 [2024-07-13 08:18:30.778075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:97256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.019 [2024-07-13 08:18:30.778096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:31:42.019 [2024-07-13 08:18:30.778120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:97056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.019 [2024-07-13 08:18:30.778137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:31:42.019 [2024-07-13 08:18:30.778175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:96816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.019 [2024-07-13 08:18:30.778191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:31:42.019 [2024-07-13 08:18:30.778215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:96496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.019 [2024-07-13 08:18:30.778246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:31:42.019 [2024-07-13 08:18:30.780662] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:97480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.019 [2024-07-13 08:18:30.780687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:31:42.019 [2024-07-13 08:18:30.780728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:97496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.019 [2024-07-13 08:18:30.780747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:31:42.019 [2024-07-13 08:18:30.780770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:97512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.019 [2024-07-13 08:18:30.780801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:31:42.019 [2024-07-13 08:18:30.780825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:97528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.019 [2024-07-13 08:18:30.780857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:31:42.019 [2024-07-13 08:18:30.780891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:97544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.019 [2024-07-13 08:18:30.780919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:31:42.019 [2024-07-13 08:18:30.780942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:96936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.019 [2024-07-13 08:18:30.780958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:42.019 [2024-07-13 08:18:30.780982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:96808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.019 [2024-07-13 08:18:30.780998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:31:42.019 [2024-07-13 08:18:30.781020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:97552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.019 [2024-07-13 08:18:30.781037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:42.019 [2024-07-13 08:18:30.781060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:97568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.019 [2024-07-13 08:18:30.781078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:42.019 [2024-07-13 08:18:30.781107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:97584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.019 [2024-07-13 08:18:30.781125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 
00:31:42.019 [2024-07-13 08:18:30.781149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:96984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.019 [2024-07-13 08:18:30.781174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:31:42.019 [2024-07-13 08:18:30.781197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:97016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.019 [2024-07-13 08:18:30.781230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:31:42.019 [2024-07-13 08:18:30.781254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:97048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.019 [2024-07-13 08:18:30.781271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:31:42.019 [2024-07-13 08:18:30.781308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:97080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.020 [2024-07-13 08:18:30.781325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:31:42.020 [2024-07-13 08:18:30.781346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:97112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.020 [2024-07-13 08:18:30.781361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:31:42.020 [2024-07-13 08:18:30.781382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:97144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.020 [2024-07-13 08:18:30.781397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:42.020 [2024-07-13 08:18:30.781417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:96928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.020 [2024-07-13 08:18:30.781433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:31:42.020 [2024-07-13 08:18:30.781455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:96824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.020 [2024-07-13 08:18:30.781470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:31:42.020 [2024-07-13 08:18:30.781491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:97408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.020 [2024-07-13 08:18:30.781506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:31:42.020 [2024-07-13 08:18:30.781526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:97440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.020 [2024-07-13 08:18:30.781541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:31:42.020 [2024-07-13 08:18:30.781562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:97472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.020 [2024-07-13 08:18:30.781577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:42.020 [2024-07-13 08:18:30.781602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:97320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.020 [2024-07-13 08:18:30.781618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:31:42.020 [2024-07-13 08:18:30.781640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:97384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.020 [2024-07-13 08:18:30.781655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:31:42.020 [2024-07-13 08:18:30.781677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:97192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.020 [2024-07-13 08:18:30.781695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:31:42.020 [2024-07-13 08:18:30.781716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:97056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.020 [2024-07-13 08:18:30.781732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:31:42.020 [2024-07-13 08:18:30.781754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:96496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.020 [2024-07-13 08:18:30.781770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:31:42.020 [2024-07-13 08:18:30.781791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:96712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.020 [2024-07-13 08:18:30.781807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:31:42.020 [2024-07-13 08:18:30.781828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:96920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.020 [2024-07-13 08:18:30.781844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:31:42.020 [2024-07-13 08:18:30.781891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:96832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.020 [2024-07-13 08:18:30.781934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:42.020 [2024-07-13 08:18:30.781958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:97168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.020 [2024-07-13 08:18:30.781975] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:31:42.020 [2024-07-13 08:18:30.781999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:97200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.020 [2024-07-13 08:18:30.782017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:31:42.020 [2024-07-13 08:18:30.782040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:97600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.020 [2024-07-13 08:18:30.782057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:31:42.020 [2024-07-13 08:18:30.782079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:97616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.020 [2024-07-13 08:18:30.782097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:42.020 [2024-07-13 08:18:30.782120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:97232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.020 [2024-07-13 08:18:30.782141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:31:42.020 [2024-07-13 08:18:30.782180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:97264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.020 [2024-07-13 08:18:30.782196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:31:42.020 [2024-07-13 08:18:30.782232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:97640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.020 [2024-07-13 08:18:30.782249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:31:42.020 [2024-07-13 08:18:30.782271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:97656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.020 [2024-07-13 08:18:30.782286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:31:42.020 [2024-07-13 08:18:30.784504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:97008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.020 [2024-07-13 08:18:30.784531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:31:42.020 [2024-07-13 08:18:30.784561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:97072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.020 [2024-07-13 08:18:30.784580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:31:42.020 [2024-07-13 08:18:30.784604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:97136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:31:42.020 [2024-07-13 08:18:30.784621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:31:42.020 [2024-07-13 08:18:30.784644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:96864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.020 [2024-07-13 08:18:30.784662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:31:42.020 [2024-07-13 08:18:30.784685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:97672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.020 [2024-07-13 08:18:30.784702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:31:42.020 [2024-07-13 08:18:30.784725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:97688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.020 [2024-07-13 08:18:30.784742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:31:42.020 [2024-07-13 08:18:30.784781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:97280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.020 [2024-07-13 08:18:30.784798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:31:42.020 [2024-07-13 08:18:30.784833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:97312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.020 [2024-07-13 08:18:30.784850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:31:42.020 [2024-07-13 08:18:30.784877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:97344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.020 [2024-07-13 08:18:30.784923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:31:42.020 [2024-07-13 08:18:30.784949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:97376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.020 [2024-07-13 08:18:30.784966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:31:42.020 [2024-07-13 08:18:30.784990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:97176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.020 [2024-07-13 08:18:30.785007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:31:42.020 [2024-07-13 08:18:30.785030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:97240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.020 [2024-07-13 08:18:30.785047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:31:42.020 [2024-07-13 08:18:30.785069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 
nsid:1 lba:97024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.020 [2024-07-13 08:18:30.785086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:31:42.020 [2024-07-13 08:18:30.785108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:97152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.020 [2024-07-13 08:18:30.785124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:31:42.020 [2024-07-13 08:18:30.785147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:97496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.020 [2024-07-13 08:18:30.785163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:31:42.020 [2024-07-13 08:18:30.785185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:97528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.020 [2024-07-13 08:18:30.785202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:31:42.020 [2024-07-13 08:18:30.785225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:96936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.020 [2024-07-13 08:18:30.785242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:31:42.020 [2024-07-13 08:18:30.785264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:97552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.020 [2024-07-13 08:18:30.785281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:31:42.020 [2024-07-13 08:18:30.785304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:97584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.021 [2024-07-13 08:18:30.785321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:31:42.021 [2024-07-13 08:18:30.785343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:97016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.021 [2024-07-13 08:18:30.785374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:31:42.021 [2024-07-13 08:18:30.785397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:97080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.021 [2024-07-13 08:18:30.785414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:31:42.021 [2024-07-13 08:18:30.785455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:97144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.021 [2024-07-13 08:18:30.785472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:31:42.021 [2024-07-13 08:18:30.785508] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:96824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:31:42.021 [2024-07-13 08:18:30.785525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:31:42.021 [2024-07-13 08:18:30.785548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:97440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:42.021 [2024-07-13 08:18:30.785565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
[... several hundred further nvme_io_qpair_print_command / spdk_nvme_print_completion *NOTICE* pairs omitted: READ and WRITE commands on sqid:1 (nsid:1, lba 96496-98592, len:8), every completion reporting ASYMMETRIC ACCESS INACCESSIBLE (03/02) on qid:1, log timestamps 08:18:30.785-08:18:30.807, elapsed 00:31:42.021-00:31:42.025 ...]
00:31:42.025 [2024-07-13 08:18:30.807555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:98576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:42.025 [2024-07-13 08:18:30.807572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:31:42.025 [2024-07-13 08:18:30.807597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:98592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:42.025 [2024-07-13 08:18:30.807614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02)
qid:1 cid:108 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:31:42.025 [2024-07-13 08:18:30.807637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:98192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.025 [2024-07-13 08:18:30.807655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:31:42.025 [2024-07-13 08:18:30.807677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:98224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.026 [2024-07-13 08:18:30.807694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:31:42.026 [2024-07-13 08:18:30.807716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:98256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.026 [2024-07-13 08:18:30.807733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:31:42.026 [2024-07-13 08:18:30.807756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:98288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.026 [2024-07-13 08:18:30.807773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:31:42.026 [2024-07-13 08:18:30.807810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:98320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.026 [2024-07-13 08:18:30.807828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:31:42.026 [2024-07-13 08:18:30.807851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:98024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.026 [2024-07-13 08:18:30.807894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:31:42.026 [2024-07-13 08:18:30.807934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:98088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.026 [2024-07-13 08:18:30.807967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:31:42.026 [2024-07-13 08:18:30.807991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:98160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.026 [2024-07-13 08:18:30.808008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:31:42.026 [2024-07-13 08:18:30.808030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:97600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.026 [2024-07-13 08:18:30.808047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:31:42.026 [2024-07-13 08:18:30.808069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:98384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.026 [2024-07-13 08:18:30.808090] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:31:42.026 [2024-07-13 08:18:30.808113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:98416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.026 [2024-07-13 08:18:30.808129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:31:42.026 [2024-07-13 08:18:30.808166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:98448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.026 [2024-07-13 08:18:30.808183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:31:42.026 [2024-07-13 08:18:30.808219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:98032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.026 [2024-07-13 08:18:30.808234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:31:42.026 [2024-07-13 08:18:30.808256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:98096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.026 [2024-07-13 08:18:30.808271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:31:42.026 [2024-07-13 08:18:30.808292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:98168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.026 [2024-07-13 08:18:30.808307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:31:42.026 [2024-07-13 08:18:30.808327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:98232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.026 [2024-07-13 08:18:30.808342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:31:42.026 [2024-07-13 08:18:30.808363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:98296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.026 [2024-07-13 08:18:30.808378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:31:42.026 [2024-07-13 08:18:30.808399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:97880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.026 [2024-07-13 08:18:30.808414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:31:42.026 [2024-07-13 08:18:30.808435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:97384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.026 [2024-07-13 08:18:30.808450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:31:42.026 [2024-07-13 08:18:30.808473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:98072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:31:42.026 [2024-07-13 08:18:30.808489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:31:42.026 [2024-07-13 08:18:30.808511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:97824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.026 [2024-07-13 08:18:30.808527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:31:42.026 [2024-07-13 08:18:30.808548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:97792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.026 [2024-07-13 08:18:30.808563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:31:42.026 [2024-07-13 08:18:30.808588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:97496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.026 [2024-07-13 08:18:30.808604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:31:42.026 [2024-07-13 08:18:30.808625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:97808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.026 [2024-07-13 08:18:30.808640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:31:42.026 [2024-07-13 08:18:30.808661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:97904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.026 [2024-07-13 08:18:30.808677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:31:42.026 [2024-07-13 08:18:30.808699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:98480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.026 [2024-07-13 08:18:30.808715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:31:42.026 [2024-07-13 08:18:30.811402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:97976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.026 [2024-07-13 08:18:30.811430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:31:42.026 [2024-07-13 08:18:30.811488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:97640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.026 [2024-07-13 08:18:30.811508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:31:42.026 [2024-07-13 08:18:30.811532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:98352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.026 [2024-07-13 08:18:30.811547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:31:42.026 [2024-07-13 08:18:30.811570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 
nsid:1 lba:98600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.026 [2024-07-13 08:18:30.811586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:31:42.026 [2024-07-13 08:18:30.811607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:98616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.026 [2024-07-13 08:18:30.811623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:31:42.026 [2024-07-13 08:18:30.811660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:98632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.026 [2024-07-13 08:18:30.811678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:31:42.026 [2024-07-13 08:18:30.811700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:98648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.026 [2024-07-13 08:18:30.811717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:31:42.026 [2024-07-13 08:18:30.811757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:98664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.026 [2024-07-13 08:18:30.811775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:31:42.026 [2024-07-13 08:18:30.811803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:98680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.026 [2024-07-13 08:18:30.811822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:31:42.026 [2024-07-13 08:18:30.811845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:98696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.026 [2024-07-13 08:18:30.811862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:31:42.026 [2024-07-13 08:18:30.811895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:98712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.026 [2024-07-13 08:18:30.811914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:31:42.026 [2024-07-13 08:18:30.811937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:98728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:42.026 [2024-07-13 08:18:30.811954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:31:42.026 [2024-07-13 08:18:30.811976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:98392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:42.026 [2024-07-13 08:18:30.811993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:31:42.026 [2024-07-13 08:18:30.812016] nvme_qpair.c: 
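The ASYMMETRIC ACCESS INACCESSIBLE (03/02) completions condensed above are the expected effect of the multipath test flipping the ANA state of the active listener while verification I/O is in flight: every command queued to that path fails back with the ANA-inaccessible status until the host switches paths. A minimal sketch of the toggle that produces this window, using SPDK's rpc.py; the subsystem NQN matches this run, but the listener address, port, and the exact RPC flags are assumptions, not taken from this log:

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    # Assumed listener address/port: mark the path unreachable for ANA-aware hosts.
    $rpc_py nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
    sleep 1    # in-flight READ/WRITE commands complete with 03/02 during this window
    # Restore the path so the host can fail back.
    $rpc_py nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a 10.0.0.2 -s 4421 -n optimized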
00:31:42.026 Received shutdown signal, test time was about 32.358204 seconds
00:31:42.026
00:31:42.026 Latency(us)
00:31:42.026 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:42.026 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:31:42.026 Verification LBA range: start 0x0 length 0x4000
00:31:42.026 Nvme0n1 : 32.36 7631.49 29.81 0.00 0.00 16742.44 849.54 4026531.84
00:31:42.027 ===================================================================================================================
00:31:42.027 Total : 7631.49 29.81 0.00 0.00 16742.44 849.54 4026531.84
00:31:42.027 08:18:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:31:42.286 08:18:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:31:42.286 08:18:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:31:42.286 08:18:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:31:42.286 08:18:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup
00:31:42.286 08:18:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync
00:31:42.286 08:18:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:31:42.286 08:18:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e
00:31:42.286 08:18:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20}
00:31:42.286 08:18:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:31:42.286 rmmod nvme_tcp
00:31:42.286 rmmod nvme_fabrics
00:31:42.286 rmmod nvme_keyring
00:31:42.286 08:18:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:31:42.286 08:18:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e
00:31:42.286 08:18:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0
00:31:42.286 08:18:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 2080775 ']'
00:31:42.286 08:18:33 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 2080775
00:31:42.286 08:18:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 2080775 ']'
00:31:42.286 08:18:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 2080775
00:31:42.286 08:18:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname
00:31:42.286 08:18:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
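The cleanup that starts above (and finishes with the process check and kill in the next trace lines) reduces to a short shell sequence: drop the test subsystem over RPC, sync, unload the host-side NVMe/TCP modules, and kill the target. A condensed sketch of what nvmftestfini does in this run; the pid 2080775 is this run's nvmfpid and stands in for whatever pid the target was started with:

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    $rpc_py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # remove the test subsystem first
    sync
    modprobe -v -r nvme-tcp        # pulls nvme_tcp, nvme_fabrics and nvme_keyring, as logged above
    modprobe -v -r nvme-fabrics
    kill 2080775                   # nvmfpid of the target under test; the harness then waits on it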
00:31:42.286 08:18:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2080775 00:31:42.286 08:18:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:42.286 08:18:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:42.286 08:18:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2080775' 00:31:42.286 killing process with pid 2080775 00:31:42.286 08:18:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 2080775 00:31:42.286 08:18:33 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 2080775 00:31:42.545 08:18:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:42.545 08:18:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:42.545 08:18:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:42.545 08:18:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:42.545 08:18:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:42.545 08:18:34 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:42.545 08:18:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:42.545 08:18:34 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:44.447 08:18:36 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:44.447 00:31:44.447 real 0m41.017s 00:31:44.447 user 1m58.525s 00:31:44.447 sys 0m12.384s 00:31:44.447 08:18:36 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:44.447 08:18:36 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:44.447 ************************************ 00:31:44.447 END TEST nvmf_host_multipath_status 00:31:44.447 ************************************ 00:31:44.704 08:18:36 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:31:44.705 08:18:36 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:31:44.705 08:18:36 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:31:44.705 08:18:36 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:44.705 08:18:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:44.705 ************************************ 00:31:44.705 START TEST nvmf_discovery_remove_ifc 00:31:44.705 ************************************ 00:31:44.705 08:18:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:31:44.705 * Looking for test storage... 
00:31:44.705 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:44.705 08:18:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:44.705 08:18:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:31:44.705 08:18:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:44.705 08:18:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:44.705 08:18:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:44.705 08:18:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:44.705 08:18:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:44.705 08:18:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:44.705 08:18:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:44.705 08:18:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:44.705 08:18:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:44.705 08:18:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:44.705 08:18:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:44.705 08:18:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:44.705 08:18:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:44.705 08:18:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:44.705 08:18:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:44.705 08:18:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:44.705 08:18:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:44.705 08:18:36 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:44.705 08:18:36 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:44.705 08:18:36 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:44.705 08:18:36 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:44.705 08:18:36 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:44.705 08:18:36 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:44.705 08:18:36 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:31:44.705 08:18:36 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:44.705 08:18:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:31:44.705 08:18:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:44.705 08:18:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:44.705 08:18:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:44.705 08:18:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:44.705 08:18:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:44.705 08:18:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:44.705 08:18:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:44.705 08:18:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:44.705 08:18:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:31:44.705 08:18:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:31:44.705 08:18:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:31:44.705 08:18:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:31:44.705 08:18:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:31:44.705 08:18:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # 
host_sock=/tmp/host.sock 00:31:44.705 08:18:36 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:31:44.705 08:18:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:44.705 08:18:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:44.705 08:18:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:44.705 08:18:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:44.705 08:18:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:44.705 08:18:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:44.705 08:18:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:44.705 08:18:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:44.705 08:18:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:44.705 08:18:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:44.705 08:18:36 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:31:44.705 08:18:36 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:46.633 08:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:46.633 08:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:31:46.633 08:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:46.633 08:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:46.633 08:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:46.633 08:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:46.633 08:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:46.633 08:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:31:46.633 08:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:46.633 08:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:31:46.633 08:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:31:46.633 08:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:31:46.633 08:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:31:46.633 08:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:31:46.633 08:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:31:46.633 08:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:46.633 08:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:46.633 08:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:46.633 08:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:46.633 08:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:46.633 08:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:46.633 08:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:46.633 08:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:46.633 08:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:46.633 08:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:46.633 08:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:46.633 08:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:46.633 08:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:46.633 08:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:46.633 08:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:46.633 08:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:46.633 08:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:46.633 08:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:46.633 08:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:31:46.633 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:46.633 08:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:46.633 08:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:46.633 08:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:46.633 08:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:46.633 08:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:46.633 08:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:46.633 08:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:46.633 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:46.633 08:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:46.633 08:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:46.633 08:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:46.633 08:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:46.633 08:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:46.633 08:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:46.633 08:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:46.633 08:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:46.633 08:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:46.633 08:18:38 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:46.633 08:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:46.633 08:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:46.633 08:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:46.633 08:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:46.633 08:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:46.633 08:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:46.633 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:46.633 08:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:46.633 08:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:46.633 08:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:46.633 08:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:46.633 08:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:46.633 08:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:46.633 08:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:46.633 08:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:46.633 08:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:46.633 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:46.633 08:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:46.633 08:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:46.633 08:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:31:46.633 08:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:46.633 08:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:46.633 08:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:46.633 08:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:46.633 08:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:46.633 08:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:46.633 08:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:46.633 08:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:46.633 08:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:46.633 08:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:46.633 08:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:46.633 08:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:46.633 08:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:46.633 08:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:46.633 08:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:46.633 08:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:46.633 08:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:46.633 08:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:46.633 08:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:46.633 08:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:46.634 08:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:46.634 08:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:46.634 08:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:46.634 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:46.634 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.259 ms 00:31:46.634 00:31:46.634 --- 10.0.0.2 ping statistics --- 00:31:46.634 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:46.634 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:31:46.634 08:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:46.634 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:46.634 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.183 ms 00:31:46.634 00:31:46.634 --- 10.0.0.1 ping statistics --- 00:31:46.634 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:46.634 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:31:46.634 08:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:46.634 08:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:31:46.634 08:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:46.634 08:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:46.634 08:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:46.634 08:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:46.634 08:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:46.634 08:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:46.634 08:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:46.634 08:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:31:46.634 08:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:46.634 08:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:46.634 08:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:46.634 08:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=2087756 00:31:46.634 08:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:31:46.634 08:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 2087756 00:31:46.634 08:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 2087756 ']' 00:31:46.634 08:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:46.634 08:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:46.634 08:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:46.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:46.634 08:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:46.634 08:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:46.893 [2024-07-13 08:18:38.405708] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:31:46.893 [2024-07-13 08:18:38.405792] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:46.893 EAL: No free 2048 kB hugepages reported on node 1 00:31:46.893 [2024-07-13 08:18:38.472375] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:46.893 [2024-07-13 08:18:38.563278] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:46.893 [2024-07-13 08:18:38.563335] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:46.893 [2024-07-13 08:18:38.563349] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:46.893 [2024-07-13 08:18:38.563361] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:46.893 [2024-07-13 08:18:38.563371] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:46.893 [2024-07-13 08:18:38.563404] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:47.152 08:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:47.152 08:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:31:47.152 08:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:47.152 08:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:47.152 08:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:47.152 08:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:47.152 08:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:31:47.152 08:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:47.152 08:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:47.152 [2024-07-13 08:18:38.714590] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:47.153 [2024-07-13 08:18:38.722766] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:31:47.153 null0 00:31:47.153 [2024-07-13 08:18:38.754755] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:47.153 08:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:47.153 08:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=2087799 00:31:47.153 08:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:31:47.153 08:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 2087799 /tmp/host.sock 00:31:47.153 08:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 2087799 ']' 00:31:47.153 08:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:31:47.153 08:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:31:47.153 08:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:31:47.153 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:31:47.153 08:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:47.153 08:18:38 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:47.153 [2024-07-13 08:18:38.819079] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:31:47.153 [2024-07-13 08:18:38.819154] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2087799 ] 00:31:47.153 EAL: No free 2048 kB hugepages reported on node 1 00:31:47.153 [2024-07-13 08:18:38.880232] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:47.412 [2024-07-13 08:18:38.971408] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:47.412 08:18:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:47.412 08:18:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:31:47.412 08:18:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:47.412 08:18:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:31:47.412 08:18:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:47.412 08:18:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:47.412 08:18:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:47.412 08:18:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:31:47.412 08:18:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:47.412 08:18:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:47.673 08:18:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:47.673 08:18:39 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:31:47.673 08:18:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:47.673 08:18:39 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:48.611 [2024-07-13 08:18:40.202605] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:48.611 [2024-07-13 08:18:40.202639] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:48.611 [2024-07-13 08:18:40.202665] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:48.611 [2024-07-13 08:18:40.291012] 
bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:31:48.870 [2024-07-13 08:18:40.353487] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:31:48.870 [2024-07-13 08:18:40.353556] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:31:48.870 [2024-07-13 08:18:40.353598] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:31:48.870 [2024-07-13 08:18:40.353625] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:48.870 [2024-07-13 08:18:40.353659] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:48.870 08:18:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:48.870 08:18:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:31:48.870 08:18:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:48.870 08:18:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:48.870 08:18:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:48.870 08:18:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:48.870 08:18:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:48.870 08:18:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:48.870 08:18:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:48.870 [2024-07-13 08:18:40.360710] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x7ef300 was disconnected and freed. delete nvme_qpair. 
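The wait_for_bdev / get_bdev_list pair traced above is a plain polling loop against the host application's RPC socket: list the bdev names over /tmp/host.sock and spin until the bdev created by discovery shows up. A minimal sketch of that loop as the trace shows it; the rpc.py path and socket are the ones used in this run:

    get_bdev_list() {
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs \
            | jq -r '.[].name' | sort | xargs   # one space-joined line of bdev names
    }

    # Block until discovery has attached the controller and created nvme0n1.
    while [[ "$(get_bdev_list)" != "nvme0n1" ]]; do
        sleep 1
    done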
00:31:48.870 08:18:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:48.870 08:18:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:31:48.870 08:18:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:31:48.870 08:18:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:31:48.870 08:18:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:31:48.870 08:18:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:48.870 08:18:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:48.870 08:18:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:48.870 08:18:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:48.870 08:18:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:48.870 08:18:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:48.870 08:18:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:48.870 08:18:40 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:48.870 08:18:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:48.870 08:18:40 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:49.808 08:18:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:49.808 08:18:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:49.808 08:18:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:49.808 08:18:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:49.808 08:18:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:49.808 08:18:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:49.808 08:18:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:49.808 08:18:41 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:49.808 08:18:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:49.808 08:18:41 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:51.190 08:18:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:51.190 08:18:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:51.190 08:18:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:51.190 08:18:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:51.190 08:18:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:51.190 08:18:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # 
sort 00:31:51.190 08:18:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:51.190 08:18:42 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:51.190 08:18:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:51.190 08:18:42 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:52.128 08:18:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:52.128 08:18:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:52.128 08:18:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:52.128 08:18:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:52.128 08:18:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:52.128 08:18:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:52.128 08:18:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:52.128 08:18:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:52.128 08:18:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:52.128 08:18:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:53.069 08:18:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:53.069 08:18:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:53.069 08:18:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:53.069 08:18:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:53.069 08:18:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:53.069 08:18:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:53.069 08:18:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:53.069 08:18:44 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:53.069 08:18:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:53.069 08:18:44 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:54.008 08:18:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:54.008 08:18:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:54.008 08:18:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:54.008 08:18:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:54.008 08:18:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:54.008 08:18:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:54.008 08:18:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:54.008 08:18:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
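The repeated one-second iterations above are the test polling for nvme0n1 to drop out of the bdev list after the target interface was pulled. Reconstructed from the commands echoed in the trace, the helpers behind them amount to the following sketch (rpc_cmd is the autotest wrapper around scripts/rpc.py; the exact bodies in host/discovery_remove_ifc.sh may differ):

    # Names of all bdevs the host app currently sees, as one sorted line.
    get_bdev_list() {
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    # Poll once per second until the bdev list matches the expected value
    # ('' in this phase, i.e. until nvme0n1 is gone).
    wait_for_bdev() {
        local expected=$1
        while [[ "$(get_bdev_list)" != "$expected" ]]; do
            sleep 1
        done
    }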
00:31:54.008 08:18:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:54.008 08:18:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:54.266 [2024-07-13 08:18:45.795009] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:31:54.266 [2024-07-13 08:18:45.795078] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:54.266 [2024-07-13 08:18:45.795101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.266 [2024-07-13 08:18:45.795120] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:54.266 [2024-07-13 08:18:45.795139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.266 [2024-07-13 08:18:45.795153] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:54.266 [2024-07-13 08:18:45.795166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.266 [2024-07-13 08:18:45.795179] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:54.266 [2024-07-13 08:18:45.795216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.266 [2024-07-13 08:18:45.795233] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:31:54.266 [2024-07-13 08:18:45.795249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:54.266 [2024-07-13 08:18:45.795264] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b5b40 is same with the state(5) to be set 00:31:54.266 [2024-07-13 08:18:45.805017] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7b5b40 (9): Bad file descriptor 00:31:54.266 [2024-07-13 08:18:45.815087] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:55.205 08:18:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:55.205 08:18:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:55.205 08:18:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:55.205 08:18:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:55.205 08:18:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:55.205 08:18:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:55.205 08:18:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:55.205 [2024-07-13 08:18:46.877928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:31:55.205 [2024-07-13 
08:18:46.877991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b5b40 with addr=10.0.0.2, port=4420 00:31:55.205 [2024-07-13 08:18:46.878019] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7b5b40 is same with the state(5) to be set 00:31:55.205 [2024-07-13 08:18:46.878068] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7b5b40 (9): Bad file descriptor 00:31:55.205 [2024-07-13 08:18:46.878544] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:31:55.205 [2024-07-13 08:18:46.878579] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:55.205 [2024-07-13 08:18:46.878598] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:55.205 [2024-07-13 08:18:46.878617] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:55.205 [2024-07-13 08:18:46.878650] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:55.205 [2024-07-13 08:18:46.878672] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:55.205 08:18:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:55.205 08:18:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:31:55.205 08:18:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:56.585 [2024-07-13 08:18:47.881169] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:56.585 [2024-07-13 08:18:47.881197] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:56.585 [2024-07-13 08:18:47.881228] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:56.585 [2024-07-13 08:18:47.881243] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:31:56.585 [2024-07-13 08:18:47.881265] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
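The connect() failures with errno 110 and the aborted controller resets above are the point of the test: the target-side interface was just removed inside its network namespace, so the host's TCP qpair can no longer reconnect. The triggering step, as echoed earlier in the trace, is simply:

    # Pull the target address and down the interface (names from the trace),
    # leaving the host's controller to time out and keep retrying.
    ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down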
00:31:56.585 [2024-07-13 08:18:47.881311] bdev_nvme.c:6734:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:31:56.585 [2024-07-13 08:18:47.881360] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:56.585 [2024-07-13 08:18:47.881386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.585 [2024-07-13 08:18:47.881409] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:56.585 [2024-07-13 08:18:47.881425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.585 [2024-07-13 08:18:47.881441] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:56.585 [2024-07-13 08:18:47.881458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.585 [2024-07-13 08:18:47.881475] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:56.585 [2024-07-13 08:18:47.881491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.585 [2024-07-13 08:18:47.881507] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:31:56.585 [2024-07-13 08:18:47.881522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:56.585 [2024-07-13 08:18:47.881537] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
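The remove_discovery_entry line above is the discovery service dropping the nqn.2016-06.io.spdk:cnode0 entry it attached earlier. That service is driven by SPDK's bdev_nvme_start_discovery RPC; a hedged sketch of the call this test would have issued, with parameters inferred from the addresses in the log rather than copied from the script:

    # Attach a discovery controller at 10.0.0.2:8009; bdevs for any NVM
    # subsystems it reports (here cnode0 at port 4420) are created
    # automatically under the "nvme" name prefix.
    # -w (wait-for-attach) is assumed; the script's exact flags may differ.
    rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.2 -s 8009 -f ipv4 -w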
00:31:56.585 [2024-07-13 08:18:47.881693] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7b4f80 (9): Bad file descriptor 00:31:56.585 [2024-07-13 08:18:47.882721] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:31:56.585 [2024-07-13 08:18:47.882747] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:31:56.585 08:18:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:56.585 08:18:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:56.585 08:18:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:56.585 08:18:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:56.585 08:18:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:56.585 08:18:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:56.585 08:18:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:56.585 08:18:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:56.585 08:18:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:31:56.585 08:18:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:56.585 08:18:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:56.585 08:18:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:31:56.585 08:18:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:56.585 08:18:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:56.585 08:18:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:56.585 08:18:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:56.585 08:18:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:56.585 08:18:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:56.585 08:18:47 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:56.585 08:18:47 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:56.585 08:18:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:31:56.585 08:18:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:57.525 08:18:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:57.525 08:18:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:57.525 08:18:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:57.525 08:18:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:57.525 08:18:49 nvmf_tcp.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@10 -- # set +x 00:31:57.525 08:18:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:57.525 08:18:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:57.525 08:18:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:57.525 08:18:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:31:57.525 08:18:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:58.464 [2024-07-13 08:18:49.939731] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:58.464 [2024-07-13 08:18:49.939771] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:58.464 [2024-07-13 08:18:49.939798] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:58.464 [2024-07-13 08:18:50.070212] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:31:58.464 08:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:58.464 08:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:58.464 08:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:58.464 08:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:58.464 08:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:58.464 08:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:58.464 08:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:58.464 08:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:58.464 08:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:31:58.464 08:18:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:31:58.725 [2024-07-13 08:18:50.250465] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:31:58.725 [2024-07-13 08:18:50.250514] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:31:58.725 [2024-07-13 08:18:50.250546] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:31:58.725 [2024-07-13 08:18:50.250568] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:31:58.725 [2024-07-13 08:18:50.250593] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:58.725 [2024-07-13 08:18:50.257108] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x7cd730 was disconnected and freed. delete nvme_qpair. 
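The new nvme1 subsystem appears because the test has put the target address back and brought the link up again; the still-running discovery service then re-attaches on its own, which is what the discovery_log_page_cb and attach lines above show. The restore step, as echoed in the trace:

    # Restore the target address, bring the link up, then reuse the same
    # polling helper to wait for the re-attached namespace to surface.
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    wait_for_bdev nvme1n1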
00:31:59.659 08:18:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:31:59.659 08:18:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:59.659 08:18:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:59.659 08:18:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:31:59.659 08:18:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:31:59.659 08:18:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:31:59.659 08:18:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:31:59.659 08:18:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:59.659 08:18:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:31:59.659 08:18:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:31:59.659 08:18:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 2087799 00:31:59.659 08:18:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 2087799 ']' 00:31:59.659 08:18:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 2087799 00:31:59.659 08:18:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:31:59.659 08:18:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:59.659 08:18:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2087799 00:31:59.660 08:18:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:59.660 08:18:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:59.660 08:18:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2087799' 00:31:59.660 killing process with pid 2087799 00:31:59.660 08:18:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 2087799 00:31:59.660 08:18:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 2087799 00:31:59.660 08:18:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:31:59.660 08:18:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:59.660 08:18:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:31:59.660 08:18:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:59.660 08:18:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:31:59.660 08:18:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:59.660 08:18:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:59.660 rmmod nvme_tcp 00:31:59.920 rmmod nvme_fabrics 00:31:59.920 rmmod nvme_keyring 00:31:59.920 08:18:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:59.920 08:18:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:31:59.920 08:18:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 
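The rmmod lines come from nvmftestfini in nvmf/common.sh, which unloads the kernel NVMe/TCP stack with retries, since references from just-closed connections can keep a module busy for a moment. Approximately (the retry and break details are an approximation of the real loop):

    set +e
    for i in {1..20}; do
        # -v -r prints the rmmods it performs: nvme_tcp, nvme_fabrics,
        # nvme_keyring in the trace above.
        modprobe -v -r nvme-tcp && break
        sleep 1
    done
    modprobe -v -r nvme-fabrics
    set -e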
00:31:59.920 08:18:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 2087756 ']' 00:31:59.920 08:18:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 2087756 00:31:59.920 08:18:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 2087756 ']' 00:31:59.920 08:18:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 2087756 00:31:59.920 08:18:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:31:59.920 08:18:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:59.920 08:18:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2087756 00:31:59.920 08:18:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:31:59.920 08:18:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:31:59.920 08:18:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2087756' 00:31:59.920 killing process with pid 2087756 00:31:59.920 08:18:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 2087756 00:31:59.920 08:18:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 2087756 00:32:00.184 08:18:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:00.184 08:18:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:00.184 08:18:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:00.184 08:18:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:00.184 08:18:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:00.184 08:18:51 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:00.184 08:18:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:00.184 08:18:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:02.094 08:18:53 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:02.094 00:32:02.094 real 0m17.502s 00:32:02.094 user 0m25.420s 00:32:02.094 sys 0m2.976s 00:32:02.094 08:18:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:02.094 08:18:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:02.094 ************************************ 00:32:02.094 END TEST nvmf_discovery_remove_ifc 00:32:02.094 ************************************ 00:32:02.094 08:18:53 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:32:02.094 08:18:53 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:32:02.094 08:18:53 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:32:02.094 08:18:53 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:02.094 08:18:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:02.094 ************************************ 00:32:02.094 START TEST nvmf_identify_kernel_target 00:32:02.094 ************************************ 
00:32:02.094 08:18:53 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:32:02.094 * Looking for test storage... 00:32:02.094 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:02.094 08:18:53 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:02.094 08:18:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:32:02.094 08:18:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:02.094 08:18:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:02.094 08:18:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:02.094 08:18:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:02.094 08:18:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:02.094 08:18:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:02.094 08:18:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:02.094 08:18:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:02.094 08:18:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:02.094 08:18:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:02.352 08:18:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:02.352 08:18:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:02.352 08:18:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:02.352 08:18:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:02.352 08:18:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:02.352 08:18:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:02.352 08:18:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:02.352 08:18:53 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:02.352 08:18:53 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:02.352 08:18:53 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:02.352 08:18:53 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:02.352 08:18:53 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:02.352 08:18:53 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:02.352 08:18:53 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:32:02.352 08:18:53 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:02.352 08:18:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:32:02.352 08:18:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:02.352 08:18:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:02.352 08:18:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:02.353 08:18:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:02.353 08:18:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:02.353 08:18:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:02.353 08:18:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:02.353 08:18:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:02.353 08:18:53 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:32:02.353 08:18:53 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:02.353 08:18:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:02.353 08:18:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:02.353 08:18:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:02.353 08:18:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:02.353 08:18:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:02.353 08:18:53 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:02.353 08:18:53 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:02.353 08:18:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:02.353 08:18:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:02.353 08:18:53 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:32:02.353 08:18:53 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:32:04.254 08:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:04.254 08:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:32:04.254 08:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:04.254 08:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:04.254 08:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:04.254 08:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:04.254 08:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:04.254 08:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:32:04.254 08:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:04.254 08:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:32:04.254 08:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:32:04.254 08:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:32:04.254 08:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:32:04.254 08:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:32:04.254 08:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:32:04.254 08:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:04.254 08:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:04.254 08:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:04.254 08:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:04.254 08:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:04.254 08:18:55 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:04.254 08:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:04.254 08:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:04.254 08:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:04.254 08:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:04.254 08:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:04.254 08:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:04.254 08:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:04.254 08:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:04.254 08:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:04.254 08:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:04.254 08:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:04.254 08:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:04.254 08:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:04.254 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:04.254 08:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:04.254 08:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:04.254 08:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:04.254 08:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:04.254 08:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:04.254 08:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:04.254 08:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:04.254 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:04.254 08:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:04.254 08:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:04.254 08:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:04.254 08:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:04.254 08:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:04.254 08:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:04.254 08:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:04.254 08:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:04.254 08:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:04.254 08:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:04.254 08:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:04.254 08:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:04.254 08:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:04.254 08:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:04.254 08:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:04.254 08:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:04.254 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:04.254 08:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:04.254 08:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:04.254 08:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:04.254 08:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:04.254 08:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:04.254 08:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:04.254 08:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:04.254 08:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:04.254 08:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:04.254 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:04.254 08:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:04.254 08:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:04.254 08:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:32:04.254 08:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:04.254 08:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:04.254 08:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:04.254 08:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:04.254 08:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:04.254 08:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:04.254 08:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:04.254 08:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:04.254 08:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:04.254 08:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:04.254 08:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:04.254 08:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:04.254 08:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:04.254 08:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:04.254 08:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:04.254 08:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:04.254 08:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:04.254 08:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:04.254 08:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:04.254 08:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:04.254 08:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:04.254 08:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:04.255 08:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:04.255 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:04.255 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.117 ms 00:32:04.255 00:32:04.255 --- 10.0.0.2 ping statistics --- 00:32:04.255 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:04.255 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:32:04.255 08:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:04.255 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:04.255 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.181 ms 00:32:04.255 00:32:04.255 --- 10.0.0.1 ping statistics --- 00:32:04.255 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:04.255 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:32:04.255 08:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:04.255 08:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:32:04.255 08:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:04.255 08:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:04.255 08:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:04.255 08:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:04.255 08:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:04.255 08:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:04.255 08:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:04.255 08:18:55 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:32:04.255 08:18:55 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:32:04.255 08:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:32:04.255 08:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:04.255 08:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:04.255 08:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:04.255 08:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:04.255 08:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:04.255 08:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:04.255 08:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:04.255 08:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:04.255 08:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:04.255 08:18:55 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:32:04.255 08:18:55 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:32:04.255 08:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:32:04.255 08:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:32:04.255 08:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:04.255 08:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:04.255 08:18:55 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:32:04.255 08:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:32:04.255 08:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:32:04.255 08:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:32:04.255 08:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:32:04.255 08:18:55 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:05.193 Waiting for block devices as requested 00:32:05.451 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:32:05.451 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:05.451 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:05.709 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:05.709 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:05.709 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:05.709 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:05.968 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:05.968 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:05.968 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:05.968 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:06.227 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:06.227 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:06.227 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:06.485 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:06.485 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:06.485 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:06.745 08:18:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:32:06.745 08:18:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:32:06.745 08:18:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:32:06.745 08:18:58 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:32:06.745 08:18:58 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:32:06.745 08:18:58 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:32:06.745 08:18:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:32:06.745 08:18:58 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:32:06.745 08:18:58 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:32:06.745 No valid GPT data, bailing 00:32:06.745 08:18:58 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:32:06.745 08:18:58 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:32:06.745 08:18:58 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:32:06.745 08:18:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:32:06.745 08:18:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:32:06.745 08:18:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:06.745 08:18:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:06.745 08:18:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:32:06.745 08:18:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:32:06.745 08:18:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:32:06.745 08:18:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:32:06.745 08:18:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:32:06.745 08:18:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:32:06.745 08:18:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:32:06.745 08:18:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:32:06.745 08:18:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:32:06.745 08:18:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:32:06.745 08:18:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:32:06.745 00:32:06.745 Discovery Log Number of Records 2, Generation counter 2 00:32:06.745 =====Discovery Log Entry 0====== 00:32:06.745 trtype: tcp 00:32:06.745 adrfam: ipv4 00:32:06.745 subtype: current discovery subsystem 00:32:06.745 treq: not specified, sq flow control disable supported 00:32:06.745 portid: 1 00:32:06.745 trsvcid: 4420 00:32:06.745 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:32:06.745 traddr: 10.0.0.1 00:32:06.745 eflags: none 00:32:06.745 sectype: none 00:32:06.745 =====Discovery Log Entry 1====== 00:32:06.745 trtype: tcp 00:32:06.745 adrfam: ipv4 00:32:06.745 subtype: nvme subsystem 00:32:06.745 treq: not specified, sq flow control disable supported 00:32:06.745 portid: 1 00:32:06.745 trsvcid: 4420 00:32:06.745 subnqn: nqn.2016-06.io.spdk:testnqn 00:32:06.745 traddr: 10.0.0.1 00:32:06.745 eflags: none 00:32:06.745 sectype: none 00:32:06.745 08:18:58 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:32:06.745 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:32:06.745 EAL: No free 2048 kB hugepages reported on node 1 00:32:07.007 ===================================================== 00:32:07.007 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:32:07.007 ===================================================== 00:32:07.007 Controller Capabilities/Features 00:32:07.007 ================================ 00:32:07.007 Vendor ID: 0000 00:32:07.007 Subsystem Vendor ID: 0000 00:32:07.007 Serial Number: 1923233b14f38bb07e8c 00:32:07.007 Model Number: Linux 00:32:07.007 Firmware Version: 6.7.0-68 00:32:07.007 Recommended Arb Burst: 0 00:32:07.007 IEEE OUI Identifier: 00 00 00 00:32:07.007 Multi-path I/O 00:32:07.007 May have multiple subsystem ports: No 00:32:07.007 May have multiple 
controllers: No 00:32:07.007 Associated with SR-IOV VF: No 00:32:07.007 Max Data Transfer Size: Unlimited 00:32:07.007 Max Number of Namespaces: 0 00:32:07.007 Max Number of I/O Queues: 1024 00:32:07.007 NVMe Specification Version (VS): 1.3 00:32:07.007 NVMe Specification Version (Identify): 1.3 00:32:07.007 Maximum Queue Entries: 1024 00:32:07.007 Contiguous Queues Required: No 00:32:07.007 Arbitration Mechanisms Supported 00:32:07.007 Weighted Round Robin: Not Supported 00:32:07.007 Vendor Specific: Not Supported 00:32:07.007 Reset Timeout: 7500 ms 00:32:07.007 Doorbell Stride: 4 bytes 00:32:07.007 NVM Subsystem Reset: Not Supported 00:32:07.007 Command Sets Supported 00:32:07.007 NVM Command Set: Supported 00:32:07.007 Boot Partition: Not Supported 00:32:07.007 Memory Page Size Minimum: 4096 bytes 00:32:07.007 Memory Page Size Maximum: 4096 bytes 00:32:07.007 Persistent Memory Region: Not Supported 00:32:07.007 Optional Asynchronous Events Supported 00:32:07.007 Namespace Attribute Notices: Not Supported 00:32:07.007 Firmware Activation Notices: Not Supported 00:32:07.007 ANA Change Notices: Not Supported 00:32:07.007 PLE Aggregate Log Change Notices: Not Supported 00:32:07.007 LBA Status Info Alert Notices: Not Supported 00:32:07.007 EGE Aggregate Log Change Notices: Not Supported 00:32:07.007 Normal NVM Subsystem Shutdown event: Not Supported 00:32:07.007 Zone Descriptor Change Notices: Not Supported 00:32:07.007 Discovery Log Change Notices: Supported 00:32:07.007 Controller Attributes 00:32:07.007 128-bit Host Identifier: Not Supported 00:32:07.007 Non-Operational Permissive Mode: Not Supported 00:32:07.007 NVM Sets: Not Supported 00:32:07.007 Read Recovery Levels: Not Supported 00:32:07.007 Endurance Groups: Not Supported 00:32:07.007 Predictable Latency Mode: Not Supported 00:32:07.007 Traffic Based Keep ALive: Not Supported 00:32:07.007 Namespace Granularity: Not Supported 00:32:07.007 SQ Associations: Not Supported 00:32:07.007 UUID List: Not Supported 00:32:07.007 Multi-Domain Subsystem: Not Supported 00:32:07.007 Fixed Capacity Management: Not Supported 00:32:07.007 Variable Capacity Management: Not Supported 00:32:07.007 Delete Endurance Group: Not Supported 00:32:07.007 Delete NVM Set: Not Supported 00:32:07.007 Extended LBA Formats Supported: Not Supported 00:32:07.007 Flexible Data Placement Supported: Not Supported 00:32:07.007 00:32:07.007 Controller Memory Buffer Support 00:32:07.007 ================================ 00:32:07.007 Supported: No 00:32:07.007 00:32:07.007 Persistent Memory Region Support 00:32:07.007 ================================ 00:32:07.007 Supported: No 00:32:07.007 00:32:07.007 Admin Command Set Attributes 00:32:07.007 ============================ 00:32:07.007 Security Send/Receive: Not Supported 00:32:07.007 Format NVM: Not Supported 00:32:07.007 Firmware Activate/Download: Not Supported 00:32:07.007 Namespace Management: Not Supported 00:32:07.007 Device Self-Test: Not Supported 00:32:07.007 Directives: Not Supported 00:32:07.007 NVMe-MI: Not Supported 00:32:07.007 Virtualization Management: Not Supported 00:32:07.007 Doorbell Buffer Config: Not Supported 00:32:07.007 Get LBA Status Capability: Not Supported 00:32:07.007 Command & Feature Lockdown Capability: Not Supported 00:32:07.007 Abort Command Limit: 1 00:32:07.007 Async Event Request Limit: 1 00:32:07.007 Number of Firmware Slots: N/A 00:32:07.007 Firmware Slot 1 Read-Only: N/A 00:32:07.007 Firmware Activation Without Reset: N/A 00:32:07.007 Multiple Update Detection Support: N/A 
00:32:07.007 Firmware Update Granularity: No Information Provided 00:32:07.007 Per-Namespace SMART Log: No 00:32:07.007 Asymmetric Namespace Access Log Page: Not Supported 00:32:07.007 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:32:07.007 Command Effects Log Page: Not Supported 00:32:07.007 Get Log Page Extended Data: Supported 00:32:07.007 Telemetry Log Pages: Not Supported 00:32:07.007 Persistent Event Log Pages: Not Supported 00:32:07.007 Supported Log Pages Log Page: May Support 00:32:07.007 Commands Supported & Effects Log Page: Not Supported 00:32:07.007 Feature Identifiers & Effects Log Page:May Support 00:32:07.007 NVMe-MI Commands & Effects Log Page: May Support 00:32:07.007 Data Area 4 for Telemetry Log: Not Supported 00:32:07.007 Error Log Page Entries Supported: 1 00:32:07.007 Keep Alive: Not Supported 00:32:07.007 00:32:07.007 NVM Command Set Attributes 00:32:07.007 ========================== 00:32:07.007 Submission Queue Entry Size 00:32:07.007 Max: 1 00:32:07.007 Min: 1 00:32:07.007 Completion Queue Entry Size 00:32:07.007 Max: 1 00:32:07.007 Min: 1 00:32:07.007 Number of Namespaces: 0 00:32:07.007 Compare Command: Not Supported 00:32:07.007 Write Uncorrectable Command: Not Supported 00:32:07.007 Dataset Management Command: Not Supported 00:32:07.007 Write Zeroes Command: Not Supported 00:32:07.007 Set Features Save Field: Not Supported 00:32:07.007 Reservations: Not Supported 00:32:07.007 Timestamp: Not Supported 00:32:07.007 Copy: Not Supported 00:32:07.007 Volatile Write Cache: Not Present 00:32:07.007 Atomic Write Unit (Normal): 1 00:32:07.007 Atomic Write Unit (PFail): 1 00:32:07.007 Atomic Compare & Write Unit: 1 00:32:07.007 Fused Compare & Write: Not Supported 00:32:07.007 Scatter-Gather List 00:32:07.007 SGL Command Set: Supported 00:32:07.007 SGL Keyed: Not Supported 00:32:07.007 SGL Bit Bucket Descriptor: Not Supported 00:32:07.007 SGL Metadata Pointer: Not Supported 00:32:07.007 Oversized SGL: Not Supported 00:32:07.007 SGL Metadata Address: Not Supported 00:32:07.007 SGL Offset: Supported 00:32:07.007 Transport SGL Data Block: Not Supported 00:32:07.007 Replay Protected Memory Block: Not Supported 00:32:07.007 00:32:07.007 Firmware Slot Information 00:32:07.007 ========================= 00:32:07.007 Active slot: 0 00:32:07.007 00:32:07.007 00:32:07.007 Error Log 00:32:07.007 ========= 00:32:07.007 00:32:07.007 Active Namespaces 00:32:07.007 ================= 00:32:07.007 Discovery Log Page 00:32:07.007 ================== 00:32:07.007 Generation Counter: 2 00:32:07.007 Number of Records: 2 00:32:07.007 Record Format: 0 00:32:07.007 00:32:07.007 Discovery Log Entry 0 00:32:07.007 ---------------------- 00:32:07.007 Transport Type: 3 (TCP) 00:32:07.007 Address Family: 1 (IPv4) 00:32:07.007 Subsystem Type: 3 (Current Discovery Subsystem) 00:32:07.007 Entry Flags: 00:32:07.007 Duplicate Returned Information: 0 00:32:07.007 Explicit Persistent Connection Support for Discovery: 0 00:32:07.007 Transport Requirements: 00:32:07.007 Secure Channel: Not Specified 00:32:07.007 Port ID: 1 (0x0001) 00:32:07.007 Controller ID: 65535 (0xffff) 00:32:07.007 Admin Max SQ Size: 32 00:32:07.007 Transport Service Identifier: 4420 00:32:07.007 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:32:07.007 Transport Address: 10.0.0.1 00:32:07.007 Discovery Log Entry 1 00:32:07.007 ---------------------- 00:32:07.007 Transport Type: 3 (TCP) 00:32:07.007 Address Family: 1 (IPv4) 00:32:07.007 Subsystem Type: 2 (NVM Subsystem) 00:32:07.007 Entry Flags: 
00:32:07.007 Duplicate Returned Information: 0 00:32:07.007 Explicit Persistent Connection Support for Discovery: 0 00:32:07.007 Transport Requirements: 00:32:07.007 Secure Channel: Not Specified 00:32:07.007 Port ID: 1 (0x0001) 00:32:07.007 Controller ID: 65535 (0xffff) 00:32:07.007 Admin Max SQ Size: 32 00:32:07.007 Transport Service Identifier: 4420 00:32:07.007 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:32:07.007 Transport Address: 10.0.0.1 00:32:07.007 08:18:58 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:07.007 EAL: No free 2048 kB hugepages reported on node 1 00:32:07.007 get_feature(0x01) failed 00:32:07.007 get_feature(0x02) failed 00:32:07.007 get_feature(0x04) failed 00:32:07.007 ===================================================== 00:32:07.007 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:07.007 ===================================================== 00:32:07.008 Controller Capabilities/Features 00:32:07.008 ================================ 00:32:07.008 Vendor ID: 0000 00:32:07.008 Subsystem Vendor ID: 0000 00:32:07.008 Serial Number: 3dba2db2896f456fd01d 00:32:07.008 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:32:07.008 Firmware Version: 6.7.0-68 00:32:07.008 Recommended Arb Burst: 6 00:32:07.008 IEEE OUI Identifier: 00 00 00 00:32:07.008 Multi-path I/O 00:32:07.008 May have multiple subsystem ports: Yes 00:32:07.008 May have multiple controllers: Yes 00:32:07.008 Associated with SR-IOV VF: No 00:32:07.008 Max Data Transfer Size: Unlimited 00:32:07.008 Max Number of Namespaces: 1024 00:32:07.008 Max Number of I/O Queues: 128 00:32:07.008 NVMe Specification Version (VS): 1.3 00:32:07.008 NVMe Specification Version (Identify): 1.3 00:32:07.008 Maximum Queue Entries: 1024 00:32:07.008 Contiguous Queues Required: No 00:32:07.008 Arbitration Mechanisms Supported 00:32:07.008 Weighted Round Robin: Not Supported 00:32:07.008 Vendor Specific: Not Supported 00:32:07.008 Reset Timeout: 7500 ms 00:32:07.008 Doorbell Stride: 4 bytes 00:32:07.008 NVM Subsystem Reset: Not Supported 00:32:07.008 Command Sets Supported 00:32:07.008 NVM Command Set: Supported 00:32:07.008 Boot Partition: Not Supported 00:32:07.008 Memory Page Size Minimum: 4096 bytes 00:32:07.008 Memory Page Size Maximum: 4096 bytes 00:32:07.008 Persistent Memory Region: Not Supported 00:32:07.008 Optional Asynchronous Events Supported 00:32:07.008 Namespace Attribute Notices: Supported 00:32:07.008 Firmware Activation Notices: Not Supported 00:32:07.008 ANA Change Notices: Supported 00:32:07.008 PLE Aggregate Log Change Notices: Not Supported 00:32:07.008 LBA Status Info Alert Notices: Not Supported 00:32:07.008 EGE Aggregate Log Change Notices: Not Supported 00:32:07.008 Normal NVM Subsystem Shutdown event: Not Supported 00:32:07.008 Zone Descriptor Change Notices: Not Supported 00:32:07.008 Discovery Log Change Notices: Not Supported 00:32:07.008 Controller Attributes 00:32:07.008 128-bit Host Identifier: Supported 00:32:07.008 Non-Operational Permissive Mode: Not Supported 00:32:07.008 NVM Sets: Not Supported 00:32:07.008 Read Recovery Levels: Not Supported 00:32:07.008 Endurance Groups: Not Supported 00:32:07.008 Predictable Latency Mode: Not Supported 00:32:07.008 Traffic Based Keep ALive: Supported 00:32:07.008 Namespace Granularity: Not Supported 
00:32:07.008 SQ Associations: Not Supported 00:32:07.008 UUID List: Not Supported 00:32:07.008 Multi-Domain Subsystem: Not Supported 00:32:07.008 Fixed Capacity Management: Not Supported 00:32:07.008 Variable Capacity Management: Not Supported 00:32:07.008 Delete Endurance Group: Not Supported 00:32:07.008 Delete NVM Set: Not Supported 00:32:07.008 Extended LBA Formats Supported: Not Supported 00:32:07.008 Flexible Data Placement Supported: Not Supported 00:32:07.008 00:32:07.008 Controller Memory Buffer Support 00:32:07.008 ================================ 00:32:07.008 Supported: No 00:32:07.008 00:32:07.008 Persistent Memory Region Support 00:32:07.008 ================================ 00:32:07.008 Supported: No 00:32:07.008 00:32:07.008 Admin Command Set Attributes 00:32:07.008 ============================ 00:32:07.008 Security Send/Receive: Not Supported 00:32:07.008 Format NVM: Not Supported 00:32:07.008 Firmware Activate/Download: Not Supported 00:32:07.008 Namespace Management: Not Supported 00:32:07.008 Device Self-Test: Not Supported 00:32:07.008 Directives: Not Supported 00:32:07.008 NVMe-MI: Not Supported 00:32:07.008 Virtualization Management: Not Supported 00:32:07.008 Doorbell Buffer Config: Not Supported 00:32:07.008 Get LBA Status Capability: Not Supported 00:32:07.008 Command & Feature Lockdown Capability: Not Supported 00:32:07.008 Abort Command Limit: 4 00:32:07.008 Async Event Request Limit: 4 00:32:07.008 Number of Firmware Slots: N/A 00:32:07.008 Firmware Slot 1 Read-Only: N/A 00:32:07.008 Firmware Activation Without Reset: N/A 00:32:07.008 Multiple Update Detection Support: N/A 00:32:07.008 Firmware Update Granularity: No Information Provided 00:32:07.008 Per-Namespace SMART Log: Yes 00:32:07.008 Asymmetric Namespace Access Log Page: Supported 00:32:07.008 ANA Transition Time : 10 sec 00:32:07.008 00:32:07.008 Asymmetric Namespace Access Capabilities 00:32:07.008 ANA Optimized State : Supported 00:32:07.008 ANA Non-Optimized State : Supported 00:32:07.008 ANA Inaccessible State : Supported 00:32:07.008 ANA Persistent Loss State : Supported 00:32:07.008 ANA Change State : Supported 00:32:07.008 ANAGRPID is not changed : No 00:32:07.008 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:32:07.008 00:32:07.008 ANA Group Identifier Maximum : 128 00:32:07.008 Number of ANA Group Identifiers : 128 00:32:07.008 Max Number of Allowed Namespaces : 1024 00:32:07.008 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:32:07.008 Command Effects Log Page: Supported 00:32:07.008 Get Log Page Extended Data: Supported 00:32:07.008 Telemetry Log Pages: Not Supported 00:32:07.008 Persistent Event Log Pages: Not Supported 00:32:07.008 Supported Log Pages Log Page: May Support 00:32:07.008 Commands Supported & Effects Log Page: Not Supported 00:32:07.008 Feature Identifiers & Effects Log Page:May Support 00:32:07.008 NVMe-MI Commands & Effects Log Page: May Support 00:32:07.008 Data Area 4 for Telemetry Log: Not Supported 00:32:07.008 Error Log Page Entries Supported: 128 00:32:07.008 Keep Alive: Supported 00:32:07.008 Keep Alive Granularity: 1000 ms 00:32:07.008 00:32:07.008 NVM Command Set Attributes 00:32:07.008 ========================== 00:32:07.008 Submission Queue Entry Size 00:32:07.008 Max: 64 00:32:07.008 Min: 64 00:32:07.008 Completion Queue Entry Size 00:32:07.008 Max: 16 00:32:07.008 Min: 16 00:32:07.008 Number of Namespaces: 1024 00:32:07.008 Compare Command: Not Supported 00:32:07.008 Write Uncorrectable Command: Not Supported 00:32:07.008 Dataset Management Command: Supported 
00:32:07.008 Write Zeroes Command: Supported 00:32:07.008 Set Features Save Field: Not Supported 00:32:07.008 Reservations: Not Supported 00:32:07.008 Timestamp: Not Supported 00:32:07.008 Copy: Not Supported 00:32:07.008 Volatile Write Cache: Present 00:32:07.008 Atomic Write Unit (Normal): 1 00:32:07.008 Atomic Write Unit (PFail): 1 00:32:07.008 Atomic Compare & Write Unit: 1 00:32:07.008 Fused Compare & Write: Not Supported 00:32:07.008 Scatter-Gather List 00:32:07.008 SGL Command Set: Supported 00:32:07.008 SGL Keyed: Not Supported 00:32:07.008 SGL Bit Bucket Descriptor: Not Supported 00:32:07.008 SGL Metadata Pointer: Not Supported 00:32:07.008 Oversized SGL: Not Supported 00:32:07.008 SGL Metadata Address: Not Supported 00:32:07.008 SGL Offset: Supported 00:32:07.008 Transport SGL Data Block: Not Supported 00:32:07.008 Replay Protected Memory Block: Not Supported 00:32:07.008 00:32:07.008 Firmware Slot Information 00:32:07.008 ========================= 00:32:07.008 Active slot: 0 00:32:07.008 00:32:07.008 Asymmetric Namespace Access 00:32:07.008 =========================== 00:32:07.008 Change Count : 0 00:32:07.008 Number of ANA Group Descriptors : 1 00:32:07.008 ANA Group Descriptor : 0 00:32:07.008 ANA Group ID : 1 00:32:07.008 Number of NSID Values : 1 00:32:07.008 Change Count : 0 00:32:07.008 ANA State : 1 00:32:07.008 Namespace Identifier : 1 00:32:07.008 00:32:07.008 Commands Supported and Effects 00:32:07.008 ============================== 00:32:07.008 Admin Commands 00:32:07.008 -------------- 00:32:07.008 Get Log Page (02h): Supported 00:32:07.008 Identify (06h): Supported 00:32:07.008 Abort (08h): Supported 00:32:07.008 Set Features (09h): Supported 00:32:07.008 Get Features (0Ah): Supported 00:32:07.008 Asynchronous Event Request (0Ch): Supported 00:32:07.008 Keep Alive (18h): Supported 00:32:07.008 I/O Commands 00:32:07.008 ------------ 00:32:07.008 Flush (00h): Supported 00:32:07.008 Write (01h): Supported LBA-Change 00:32:07.008 Read (02h): Supported 00:32:07.008 Write Zeroes (08h): Supported LBA-Change 00:32:07.008 Dataset Management (09h): Supported 00:32:07.008 00:32:07.008 Error Log 00:32:07.008 ========= 00:32:07.008 Entry: 0 00:32:07.008 Error Count: 0x3 00:32:07.008 Submission Queue Id: 0x0 00:32:07.008 Command Id: 0x5 00:32:07.008 Phase Bit: 0 00:32:07.008 Status Code: 0x2 00:32:07.008 Status Code Type: 0x0 00:32:07.008 Do Not Retry: 1 00:32:07.008 Error Location: 0x28 00:32:07.008 LBA: 0x0 00:32:07.008 Namespace: 0x0 00:32:07.008 Vendor Log Page: 0x0 00:32:07.008 ----------- 00:32:07.008 Entry: 1 00:32:07.008 Error Count: 0x2 00:32:07.008 Submission Queue Id: 0x0 00:32:07.008 Command Id: 0x5 00:32:07.008 Phase Bit: 0 00:32:07.008 Status Code: 0x2 00:32:07.008 Status Code Type: 0x0 00:32:07.008 Do Not Retry: 1 00:32:07.008 Error Location: 0x28 00:32:07.008 LBA: 0x0 00:32:07.008 Namespace: 0x0 00:32:07.008 Vendor Log Page: 0x0 00:32:07.008 ----------- 00:32:07.008 Entry: 2 00:32:07.008 Error Count: 0x1 00:32:07.008 Submission Queue Id: 0x0 00:32:07.009 Command Id: 0x4 00:32:07.009 Phase Bit: 0 00:32:07.009 Status Code: 0x2 00:32:07.009 Status Code Type: 0x0 00:32:07.009 Do Not Retry: 1 00:32:07.009 Error Location: 0x28 00:32:07.009 LBA: 0x0 00:32:07.009 Namespace: 0x0 00:32:07.009 Vendor Log Page: 0x0 00:32:07.009 00:32:07.009 Number of Queues 00:32:07.009 ================ 00:32:07.009 Number of I/O Submission Queues: 128 00:32:07.009 Number of I/O Completion Queues: 128 00:32:07.009 00:32:07.009 ZNS Specific Controller Data 00:32:07.009 
============================ 00:32:07.009 Zone Append Size Limit: 0 00:32:07.009 00:32:07.009 00:32:07.009 Active Namespaces 00:32:07.009 ================= 00:32:07.009 get_feature(0x05) failed 00:32:07.009 Namespace ID:1 00:32:07.009 Command Set Identifier: NVM (00h) 00:32:07.009 Deallocate: Supported 00:32:07.009 Deallocated/Unwritten Error: Not Supported 00:32:07.009 Deallocated Read Value: Unknown 00:32:07.009 Deallocate in Write Zeroes: Not Supported 00:32:07.009 Deallocated Guard Field: 0xFFFF 00:32:07.009 Flush: Supported 00:32:07.009 Reservation: Not Supported 00:32:07.009 Namespace Sharing Capabilities: Multiple Controllers 00:32:07.009 Size (in LBAs): 1953525168 (931GiB) 00:32:07.009 Capacity (in LBAs): 1953525168 (931GiB) 00:32:07.009 Utilization (in LBAs): 1953525168 (931GiB) 00:32:07.009 UUID: 5dabbd0f-6214-45fd-8f1b-d7343b754069 00:32:07.009 Thin Provisioning: Not Supported 00:32:07.009 Per-NS Atomic Units: Yes 00:32:07.009 Atomic Boundary Size (Normal): 0 00:32:07.009 Atomic Boundary Size (PFail): 0 00:32:07.009 Atomic Boundary Offset: 0 00:32:07.009 NGUID/EUI64 Never Reused: No 00:32:07.009 ANA group ID: 1 00:32:07.009 Namespace Write Protected: No 00:32:07.009 Number of LBA Formats: 1 00:32:07.009 Current LBA Format: LBA Format #00 00:32:07.009 LBA Format #00: Data Size: 512 Metadata Size: 0 00:32:07.009 00:32:07.009 08:18:58 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:32:07.009 08:18:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:07.009 08:18:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:32:07.009 08:18:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:07.009 08:18:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:32:07.009 08:18:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:07.009 08:18:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:07.009 rmmod nvme_tcp 00:32:07.009 rmmod nvme_fabrics 00:32:07.009 08:18:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:07.009 08:18:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:32:07.009 08:18:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:32:07.009 08:18:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:32:07.009 08:18:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:07.009 08:18:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:07.009 08:18:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:07.009 08:18:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:07.009 08:18:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:07.009 08:18:58 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:07.009 08:18:58 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:07.009 08:18:58 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:09.541 08:19:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:09.541 
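[Note] The configfs writes traced at the start of this test are what exported the kernel nvmet target that both identify runs above connected to. The xtrace shows only the echoed values, not the redirection targets, so the attribute file names below are assumptions based on the standard nvmet configfs ABI rather than a verbatim copy of nvmf/common.sh; a minimal sketch:

# Export /dev/nvme0n1 as nqn.2016-06.io.spdk:testnqn over TCP via configfs.
# Attribute names (attr_allow_any_host, device_path, addr_*) are assumed from
# the upstream nvmet ABI; the trace above elides the "> file" redirections.
modprobe nvmet nvmet-tcp
subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
port=/sys/kernel/config/nvmet/ports/1
mkdir -p "$subsys/namespaces/1" "$port"
echo 1 > "$subsys/attr_allow_any_host"
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
echo 1 > "$subsys/namespaces/1/enable"
echo 10.0.0.1 > "$port/addr_traddr"
echo tcp > "$port/addr_trtype"
echo 4420 > "$port/addr_trsvcid"
echo ipv4 > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"

The clean_kernel_target trace that follows is the mirror image: remove the port symlink, rmdir the namespace, port, and subsystem directories, then unload nvmet_tcp and nvmet.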
08:19:00 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:32:09.541 08:19:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:32:09.541 08:19:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:32:09.541 08:19:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:09.541 08:19:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:09.541 08:19:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:32:09.541 08:19:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:09.541 08:19:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:32:09.541 08:19:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:32:09.541 08:19:00 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:10.478 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:32:10.478 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:32:10.478 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:32:10.478 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:32:10.478 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:32:10.478 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:32:10.478 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:32:10.478 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:32:10.478 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:32:10.478 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:32:10.478 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:32:10.478 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:32:10.478 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:32:10.478 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:32:10.478 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:32:10.478 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:32:11.418 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:32:11.676 00:32:11.676 real 0m9.387s 00:32:11.676 user 0m1.954s 00:32:11.676 sys 0m3.381s 00:32:11.676 08:19:03 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:11.676 08:19:03 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:32:11.676 ************************************ 00:32:11.676 END TEST nvmf_identify_kernel_target 00:32:11.676 ************************************ 00:32:11.676 08:19:03 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:32:11.676 08:19:03 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:32:11.676 08:19:03 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:32:11.676 08:19:03 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:11.676 08:19:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:11.676 ************************************ 00:32:11.676 START TEST nvmf_auth_host 00:32:11.676 ************************************ 00:32:11.676 08:19:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:32:11.676 * Looking for test storage... 00:32:11.676 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:11.676 08:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:11.676 08:19:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:32:11.676 08:19:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:11.676 08:19:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:11.676 08:19:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:11.676 08:19:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:11.676 08:19:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:11.676 08:19:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:11.676 08:19:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:11.676 08:19:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:11.676 08:19:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:11.676 08:19:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:11.676 08:19:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:11.676 08:19:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:11.676 08:19:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:11.676 08:19:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:11.676 08:19:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:11.676 08:19:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:11.676 08:19:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:11.676 08:19:03 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:11.676 08:19:03 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:11.676 08:19:03 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:11.676 08:19:03 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:11.676 08:19:03 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:11.676 08:19:03 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:11.676 08:19:03 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:32:11.676 08:19:03 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:11.676 08:19:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:32:11.676 08:19:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:11.676 08:19:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:11.676 08:19:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:11.676 08:19:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:11.676 08:19:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:11.676 08:19:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:11.676 08:19:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:11.676 08:19:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:11.676 08:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:32:11.676 08:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:32:11.676 08:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:32:11.676 08:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:32:11.677 08:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:11.677 08:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:32:11.677 08:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:32:11.677 08:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # 
ckeys=() 00:32:11.677 08:19:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:32:11.677 08:19:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:11.677 08:19:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:11.677 08:19:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:11.677 08:19:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:11.677 08:19:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:11.677 08:19:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:11.677 08:19:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:11.677 08:19:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:11.677 08:19:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:11.677 08:19:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:11.677 08:19:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:32:11.677 08:19:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.585 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:13.585 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:32:13.585 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:13.585 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:13.585 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:13.585 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:13.585 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:13.585 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:32:13.585 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:13.585 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:32:13.585 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:32:13.585 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:32:13.585 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:32:13.585 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:32:13.585 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:32:13.585 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:13.585 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:13.585 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:13.585 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:13.585 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:13.585 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:13.585 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:13.585 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:13.585 
08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:13.585 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:13.585 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:13.585 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:13.585 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:13.585 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:13.585 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:13.585 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:13.585 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:13.585 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:13.585 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:13.585 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:13.585 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:13.585 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:13.585 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:13.585 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:13.585 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:13.585 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:13.585 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:13.585 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:13.585 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:13.585 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:13.585 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:13.585 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:13.585 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:13.585 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:13.585 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:13.585 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:13.585 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:13.585 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:13.585 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:13.585 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:13.585 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:13.585 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:13.585 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:13.585 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:13.585 Found net devices under 0000:0a:00.0: 
cvl_0_0 00:32:13.585 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:13.585 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:13.585 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:13.585 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:13.585 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:13.585 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:13.585 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:13.585 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:13.585 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:13.585 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:13.585 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:13.585 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:13.585 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:32:13.585 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:13.585 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:13.585 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:13.585 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:13.585 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:13.585 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:13.585 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:13.585 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:13.585 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:13.585 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:13.585 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:13.585 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:13.585 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:13.585 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:13.585 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:13.585 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:13.585 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:13.586 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:13.586 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:13.586 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:13.586 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:13.586 08:19:05 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:13.586 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:13.586 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:13.586 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.113 ms 00:32:13.586 00:32:13.586 --- 10.0.0.2 ping statistics --- 00:32:13.586 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:13.586 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:32:13.586 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:13.586 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:13.586 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:32:13.586 00:32:13.586 --- 10.0.0.1 ping statistics --- 00:32:13.586 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:13.586 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:32:13.586 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:13.586 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:32:13.586 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:13.586 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:13.586 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:13.586 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:13.586 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:13.586 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:13.586 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:13.586 08:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:32:13.586 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:13.586 08:19:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:13.586 08:19:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.586 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=2094860 00:32:13.586 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:32:13.586 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 2094860 00:32:13.586 08:19:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 2094860 ']' 00:32:13.586 08:19:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:13.586 08:19:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:13.586 08:19:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
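[Note] nvmfappstart above launches nvmf_tgt inside the cvl_0_0_ns_spdk network namespace built a few lines earlier and records its pid (2094860), then waitforlisten blocks until the RPC socket answers. Roughly, as a paraphrase of the autotest_common.sh helper rather than its verbatim code:

# Poll the SPDK RPC socket until the target responds; fail if it exits first.
waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
    while ! scripts/rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null; do
        kill -0 "$pid" 2> /dev/null || return 1   # target died before listening
        sleep 0.1
    done
}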
00:32:13.586 08:19:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:13.586 08:19:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:14.153 08:19:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:14.153 08:19:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:32:14.153 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:14.153 08:19:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:14.153 08:19:05 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:14.153 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:14.153 08:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:32:14.153 08:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:32:14.153 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:14.153 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:14.153 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:14.153 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:32:14.153 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:32:14.153 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:14.153 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=005a78cbb401b8747ae7fdabe9acd262 00:32:14.153 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:32:14.153 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.Te6 00:32:14.153 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 005a78cbb401b8747ae7fdabe9acd262 0 00:32:14.153 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 005a78cbb401b8747ae7fdabe9acd262 0 00:32:14.153 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:14.153 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:14.153 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=005a78cbb401b8747ae7fdabe9acd262 00:32:14.153 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:32:14.153 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:14.153 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.Te6 00:32:14.153 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.Te6 00:32:14.153 08:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.Te6 00:32:14.153 08:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:32:14.153 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:14.153 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:14.153 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:14.153 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:32:14.153 
08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:32:14.153 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:32:14.153 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=7b8996775d48abf824ac97493c26f37b096f671346a9b7bb39768e65995106c7 00:32:14.153 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:32:14.153 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.D45 00:32:14.153 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 7b8996775d48abf824ac97493c26f37b096f671346a9b7bb39768e65995106c7 3 00:32:14.153 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 7b8996775d48abf824ac97493c26f37b096f671346a9b7bb39768e65995106c7 3 00:32:14.153 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:14.153 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:14.153 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=7b8996775d48abf824ac97493c26f37b096f671346a9b7bb39768e65995106c7 00:32:14.153 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:32:14.153 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:14.153 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.D45 00:32:14.153 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.D45 00:32:14.153 08:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.D45 00:32:14.153 08:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:32:14.153 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:14.153 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:14.153 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:14.153 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:32:14.153 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:32:14.153 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:32:14.153 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=ce4e7a34eff9b103df1811893e4d28b56b16965bee4d2aa9 00:32:14.153 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:32:14.153 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.JT9 00:32:14.153 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key ce4e7a34eff9b103df1811893e4d28b56b16965bee4d2aa9 0 00:32:14.153 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 ce4e7a34eff9b103df1811893e4d28b56b16965bee4d2aa9 0 00:32:14.153 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:14.153 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:14.153 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=ce4e7a34eff9b103df1811893e4d28b56b16965bee4d2aa9 00:32:14.153 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:32:14.153 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:14.153 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.JT9 00:32:14.153 08:19:05 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.JT9 00:32:14.153 08:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.JT9 00:32:14.153 08:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:32:14.153 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:14.153 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:14.153 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:14.153 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:32:14.153 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:32:14.153 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:32:14.153 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=44e42a5ce53103ed97a965fe7499dc8a7cd4a652e8d2ad46 00:32:14.153 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:32:14.153 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.spU 00:32:14.153 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 44e42a5ce53103ed97a965fe7499dc8a7cd4a652e8d2ad46 2 00:32:14.153 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 44e42a5ce53103ed97a965fe7499dc8a7cd4a652e8d2ad46 2 00:32:14.153 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:14.153 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:14.153 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=44e42a5ce53103ed97a965fe7499dc8a7cd4a652e8d2ad46 00:32:14.153 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:32:14.153 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:14.153 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.spU 00:32:14.153 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.spU 00:32:14.153 08:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.spU 00:32:14.153 08:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:32:14.153 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:14.153 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:14.153 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:14.153 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:32:14.153 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:32:14.153 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:14.153 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=119d381e0a085a37c67cb943859433cc 00:32:14.153 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:32:14.153 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.QnV 00:32:14.153 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 119d381e0a085a37c67cb943859433cc 1 00:32:14.153 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 119d381e0a085a37c67cb943859433cc 1 
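[Note] Each gen_dhchap_key call in this stretch boils down to: read N random bytes as a hex string, then wrap that string in the NVMe DH-HMAC-CHAP secret representation DHHC-1:<hh>:<base64(secret + CRC-32)>:, where the two-digit field matches the digests map traced above (00 = null, 01 = sha256, 02 = sha384, 03 = sha512). A standalone sketch of what format_dhchap_key does, not the verbatim helper:

# Produce a 48-character hex secret and print it in DHHC-1 form (null digest).
key=$(xxd -p -c0 -l 24 /dev/urandom)
python3 -c '
import base64, sys, zlib
k = sys.argv[1].encode()                     # the ASCII hex string is the secret
crc = zlib.crc32(k).to_bytes(4, "little")    # CRC-32 of the secret, little-endian
print("DHHC-1:00:" + base64.b64encode(k + crc).decode() + ":")
' "$key"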
00:32:14.153 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:14.153 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:14.153 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=119d381e0a085a37c67cb943859433cc 00:32:14.153 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:32:14.153 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:14.412 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.QnV 00:32:14.412 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.QnV 00:32:14.412 08:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.QnV 00:32:14.412 08:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:32:14.412 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:14.412 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:14.412 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:14.412 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:32:14.412 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:32:14.412 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:14.412 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=8f67e2be7bbcffaca21cf92b3d5a8378 00:32:14.412 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:32:14.412 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.GpE 00:32:14.412 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 8f67e2be7bbcffaca21cf92b3d5a8378 1 00:32:14.412 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 8f67e2be7bbcffaca21cf92b3d5a8378 1 00:32:14.412 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:14.412 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:14.412 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=8f67e2be7bbcffaca21cf92b3d5a8378 00:32:14.412 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:32:14.412 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:14.412 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.GpE 00:32:14.412 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.GpE 00:32:14.412 08:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.GpE 00:32:14.412 08:19:05 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:32:14.412 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:14.412 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:14.412 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:14.412 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:32:14.412 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:32:14.412 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:32:14.412 08:19:05 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@727 -- # key=90c6ecb948040daa28d5c0af5810440a3bf71d788efb9ab7 00:32:14.412 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:32:14.412 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.O20 00:32:14.412 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 90c6ecb948040daa28d5c0af5810440a3bf71d788efb9ab7 2 00:32:14.412 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 90c6ecb948040daa28d5c0af5810440a3bf71d788efb9ab7 2 00:32:14.412 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:14.412 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:14.412 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=90c6ecb948040daa28d5c0af5810440a3bf71d788efb9ab7 00:32:14.412 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:32:14.412 08:19:05 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:14.412 08:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.O20 00:32:14.412 08:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.O20 00:32:14.412 08:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.O20 00:32:14.412 08:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:32:14.412 08:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:14.412 08:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:14.412 08:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:14.412 08:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:32:14.412 08:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:32:14.412 08:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:14.412 08:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=cdd1ccd00aff7c38929477296f6c9060 00:32:14.412 08:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:32:14.412 08:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.PYE 00:32:14.412 08:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key cdd1ccd00aff7c38929477296f6c9060 0 00:32:14.412 08:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 cdd1ccd00aff7c38929477296f6c9060 0 00:32:14.412 08:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:14.412 08:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:14.412 08:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=cdd1ccd00aff7c38929477296f6c9060 00:32:14.412 08:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:32:14.412 08:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:14.412 08:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.PYE 00:32:14.412 08:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.PYE 00:32:14.412 08:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.PYE 00:32:14.412 08:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:32:14.413 08:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local 
digest len file key 00:32:14.413 08:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:14.413 08:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:14.413 08:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:32:14.413 08:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:32:14.413 08:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:32:14.413 08:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=8d37d41ef5f7eebe7122f20dcaf8c32b280df386dc18374845d893470fbbacd2 00:32:14.413 08:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:32:14.413 08:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.fGh 00:32:14.413 08:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 8d37d41ef5f7eebe7122f20dcaf8c32b280df386dc18374845d893470fbbacd2 3 00:32:14.413 08:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 8d37d41ef5f7eebe7122f20dcaf8c32b280df386dc18374845d893470fbbacd2 3 00:32:14.413 08:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:14.413 08:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:14.413 08:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=8d37d41ef5f7eebe7122f20dcaf8c32b280df386dc18374845d893470fbbacd2 00:32:14.413 08:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:32:14.413 08:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:14.413 08:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.fGh 00:32:14.413 08:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.fGh 00:32:14.413 08:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.fGh 00:32:14.413 08:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:32:14.413 08:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 2094860 00:32:14.413 08:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 2094860 ']' 00:32:14.413 08:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:14.413 08:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:14.413 08:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:14.413 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
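(Note: the gen_dhchap_key/format_key trace above boils down to a short routine: draw len/2 random bytes as a hex string, wrap them in the DHHC-1 secret representation — a two-hex-digit digest id, then base64 of the ASCII secret plus a little-endian CRC32 trailer — and stash the result in a 0600 temp file. A condensed sketch of that flow, assuming python3 is available; the CRC32 trailer is inferred from the standard DHHC-1 encoding rather than spelled out in the trace:

gen_dhchap_key() {   # usage: gen_dhchap_key sha256 32
  local digest=$1 len=$2 key file
  local -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
  key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # len hex characters of randomness
  file=$(mktemp -t "spdk.key-$digest.XXX")
  python3 - "$key" "${digests[$digest]}" > "$file" <<'PY'
import base64, sys, zlib
key, digest = sys.argv[1].encode(), int(sys.argv[2])
crc = zlib.crc32(key).to_bytes(4, "little")  # 4-byte trailer, assumed per the DHHC-1 format
print("DHHC-1:{:02x}:{}:".format(digest, base64.b64encode(key + crc).decode()), end="")
PY
  chmod 0600 "$file"
  echo "$file"
}

Decoding one of the secrets later in the run confirms the shape: base64 of the literal hex string "119d381e..." plus four CRC bytes yields the "MTE5ZDM4...t+1Qd" payload that shows up in the keyring_file_add_key steps below.)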
00:32:14.413 08:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:14.413 08:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:14.672 08:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:14.672 08:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:32:14.672 08:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:14.672 08:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Te6 00:32:14.672 08:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:14.672 08:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:14.932 08:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:14.932 08:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.D45 ]] 00:32:14.932 08:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.D45 00:32:14.932 08:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:14.932 08:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:14.932 08:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:14.932 08:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:14.933 08:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.JT9 00:32:14.933 08:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:14.933 08:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:14.933 08:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:14.933 08:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.spU ]] 00:32:14.933 08:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.spU 00:32:14.933 08:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:14.933 08:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:14.933 08:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:14.933 08:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:14.933 08:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.QnV 00:32:14.933 08:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:14.933 08:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:14.933 08:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:14.933 08:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.GpE ]] 00:32:14.933 08:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.GpE 00:32:14.933 08:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:14.933 08:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:14.933 08:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:14.933 08:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
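(Note: each pass of the host/auth.sh@80 loop — continued below for keys 3 and 4 — registers one generated key file, plus its controller counterpart when one exists, under a fixed keyring name. In plain rpc.py terms the iterations shown so far amount to the following sketch, assuming spdk_tgt is already listening on /var/tmp/spdk.sock:

scripts/rpc.py keyring_file_add_key key0  /tmp/spdk.key-null.Te6
scripts/rpc.py keyring_file_add_key ckey0 /tmp/spdk.key-sha512.D45
scripts/rpc.py keyring_file_add_key key1  /tmp/spdk.key-null.JT9
scripts/rpc.py keyring_file_add_key ckey1 /tmp/spdk.key-sha384.spU
scripts/rpc.py keyring_file_add_key key2  /tmp/spdk.key-sha256.QnV
scripts/rpc.py keyring_file_add_key ckey2 /tmp/spdk.key-sha256.GpE

The keyN/ckeyN names are what the attach calls later reference via --dhchap-key/--dhchap-ctrlr-key.)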
00:32:14.933 08:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.O20 00:32:14.933 08:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:14.933 08:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:14.933 08:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:14.933 08:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.PYE ]] 00:32:14.933 08:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.PYE 00:32:14.933 08:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:14.933 08:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:14.933 08:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:14.933 08:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:14.933 08:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.fGh 00:32:14.933 08:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:14.933 08:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:14.933 08:19:06 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:14.933 08:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:32:14.933 08:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:32:14.933 08:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:32:14.933 08:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:14.933 08:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:14.933 08:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:14.933 08:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:14.933 08:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:14.933 08:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:14.933 08:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:14.933 08:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:14.933 08:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:14.933 08:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:14.933 08:19:06 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:32:14.933 08:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:32:14.933 08:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:32:14.933 08:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:14.933 08:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:32:14.933 08:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:32:14.933 08:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
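(Note: nvmet_auth_init resolves the initiator IP (10.0.0.1) and hands off to configure_kernel_target, whose mkdir/echo/ln -s sequence below assembles a kernel NVMe/TCP target entirely from configfs writes. Condensed, and with the caveat that xtrace hides redirection targets, so the nvmet attribute names here are assumed from the kernel's standard configfs layout:

modprobe nvmet
sub=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
port=/sys/kernel/config/nvmet/ports/1
mkdir -p "$sub/namespaces/1" "$port"
echo 1            > "$sub/attr_allow_any_host"     # assumed target of the first 'echo 1'
echo /dev/nvme0n1 > "$sub/namespaces/1/device_path"
echo 1            > "$sub/namespaces/1/enable"
echo 10.0.0.1     > "$port/addr_traddr"
echo tcp          > "$port/addr_trtype"
echo 4420         > "$port/addr_trsvcid"
echo ipv4         > "$port/addr_adrfam"
ln -s "$sub" "$port/subsystems/"
# host/auth.sh@36-38 then lock the subsystem down to the one test host:
mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
echo 0 > "$sub/attr_allow_any_host"
ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 "$sub/allowed_hosts/"

The nvme discover output further down — two log entries, the discovery subsystem and cnode0, both tcp/ipv4 on 10.0.0.1:4420 — confirms the port came up.)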
00:32:14.933 08:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:32:14.933 08:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:32:14.933 08:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:32:14.933 08:19:06 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:16.308 Waiting for block devices as requested 00:32:16.308 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:32:16.308 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:16.308 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:16.308 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:16.566 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:16.566 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:16.566 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:16.566 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:16.824 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:16.824 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:16.824 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:16.824 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:16.824 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:17.082 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:17.082 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:17.082 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:17.082 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:17.649 08:19:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:32:17.649 08:19:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:32:17.649 08:19:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:32:17.649 08:19:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:32:17.649 08:19:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:32:17.649 08:19:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:32:17.649 08:19:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:32:17.649 08:19:09 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:32:17.649 08:19:09 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:32:17.649 No valid GPT data, bailing 00:32:17.649 08:19:09 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:32:17.649 08:19:09 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:32:17.649 08:19:09 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:32:17.649 08:19:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:32:17.649 08:19:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:32:17.649 08:19:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:17.649 08:19:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:32:17.649 08:19:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:32:17.649 08:19:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:32:17.649 08:19:09 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@667 -- # echo 1 00:32:17.649 08:19:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:32:17.649 08:19:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:32:17.649 08:19:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:32:17.649 08:19:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:32:17.649 08:19:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:32:17.649 08:19:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:32:17.649 08:19:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:32:17.649 08:19:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:32:17.649 00:32:17.649 Discovery Log Number of Records 2, Generation counter 2 00:32:17.649 =====Discovery Log Entry 0====== 00:32:17.649 trtype: tcp 00:32:17.649 adrfam: ipv4 00:32:17.649 subtype: current discovery subsystem 00:32:17.649 treq: not specified, sq flow control disable supported 00:32:17.649 portid: 1 00:32:17.649 trsvcid: 4420 00:32:17.649 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:32:17.649 traddr: 10.0.0.1 00:32:17.649 eflags: none 00:32:17.649 sectype: none 00:32:17.649 =====Discovery Log Entry 1====== 00:32:17.649 trtype: tcp 00:32:17.649 adrfam: ipv4 00:32:17.649 subtype: nvme subsystem 00:32:17.649 treq: not specified, sq flow control disable supported 00:32:17.649 portid: 1 00:32:17.649 trsvcid: 4420 00:32:17.649 subnqn: nqn.2024-02.io.spdk:cnode0 00:32:17.649 traddr: 10.0.0.1 00:32:17.649 eflags: none 00:32:17.649 sectype: none 00:32:17.649 08:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:32:17.649 08:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:32:17.649 08:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:32:17.649 08:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:32:17.649 08:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:17.649 08:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:17.649 08:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:17.649 08:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:17.649 08:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2U0ZTdhMzRlZmY5YjEwM2RmMTgxMTg5M2U0ZDI4YjU2YjE2OTY1YmVlNGQyYWE5FW01Tg==: 00:32:17.649 08:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDRlNDJhNWNlNTMxMDNlZDk3YTk2NWZlNzQ5OWRjOGE3Y2Q0YTY1MmU4ZDJhZDQ2ecc0+w==: 00:32:17.649 08:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:17.649 08:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:17.649 08:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2U0ZTdhMzRlZmY5YjEwM2RmMTgxMTg5M2U0ZDI4YjU2YjE2OTY1YmVlNGQyYWE5FW01Tg==: 00:32:17.649 08:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDRlNDJhNWNlNTMxMDNlZDk3YTk2NWZlNzQ5OWRjOGE3Y2Q0YTY1MmU4ZDJhZDQ2ecc0+w==: 
]] 00:32:17.649 08:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDRlNDJhNWNlNTMxMDNlZDk3YTk2NWZlNzQ5OWRjOGE3Y2Q0YTY1MmU4ZDJhZDQ2ecc0+w==: 00:32:17.649 08:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:32:17.649 08:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:32:17.649 08:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:32:17.649 08:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:32:17.649 08:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:32:17.649 08:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:17.649 08:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:32:17.649 08:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:32:17.649 08:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:17.649 08:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:17.649 08:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:32:17.649 08:19:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:17.649 08:19:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:17.907 08:19:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:17.907 08:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:17.907 08:19:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:17.907 08:19:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:17.907 08:19:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:17.907 08:19:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:17.907 08:19:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:17.907 08:19:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:17.907 08:19:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:17.907 08:19:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:17.907 08:19:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:17.907 08:19:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:17.907 08:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:17.907 08:19:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:17.907 08:19:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:17.907 nvme0n1 00:32:17.907 08:19:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:17.907 08:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:17.907 08:19:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:17.907 
08:19:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:17.907 08:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:17.907 08:19:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:17.907 08:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:17.907 08:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:17.907 08:19:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:17.907 08:19:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:17.907 08:19:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:17.907 08:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:32:17.907 08:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:17.907 08:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:17.907 08:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:32:17.907 08:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:17.907 08:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:17.907 08:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:17.907 08:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:17.907 08:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDA1YTc4Y2JiNDAxYjg3NDdhZTdmZGFiZTlhY2QyNjJnMomG: 00:32:17.907 08:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:N2I4OTk2Nzc1ZDQ4YWJmODI0YWM5NzQ5M2MyNmYzN2IwOTZmNjcxMzQ2YTliN2JiMzk3NjhlNjU5OTUxMDZjNzPBi8Q=: 00:32:17.907 08:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:17.907 08:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:17.907 08:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDA1YTc4Y2JiNDAxYjg3NDdhZTdmZGFiZTlhY2QyNjJnMomG: 00:32:17.907 08:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:N2I4OTk2Nzc1ZDQ4YWJmODI0YWM5NzQ5M2MyNmYzN2IwOTZmNjcxMzQ2YTliN2JiMzk3NjhlNjU5OTUxMDZjNzPBi8Q=: ]] 00:32:17.907 08:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:N2I4OTk2Nzc1ZDQ4YWJmODI0YWM5NzQ5M2MyNmYzN2IwOTZmNjcxMzQ2YTliN2JiMzk3NjhlNjU5OTUxMDZjNzPBi8Q=: 00:32:17.908 08:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:32:17.908 08:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:17.908 08:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:17.908 08:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:17.908 08:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:17.908 08:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:17.908 08:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:17.908 08:19:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:17.908 08:19:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:17.908 08:19:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:17.908 
08:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:17.908 08:19:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:17.908 08:19:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:17.908 08:19:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:17.908 08:19:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:17.908 08:19:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:17.908 08:19:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:17.908 08:19:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:17.908 08:19:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:17.908 08:19:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:17.908 08:19:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:17.908 08:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:17.908 08:19:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:17.908 08:19:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:18.164 nvme0n1 00:32:18.164 08:19:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:18.164 08:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:18.164 08:19:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:18.164 08:19:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:18.164 08:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:18.164 08:19:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:18.164 08:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:18.164 08:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:18.164 08:19:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:18.164 08:19:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:18.164 08:19:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:18.164 08:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:18.164 08:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:32:18.164 08:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:18.164 08:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:18.164 08:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:18.164 08:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:18.164 08:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2U0ZTdhMzRlZmY5YjEwM2RmMTgxMTg5M2U0ZDI4YjU2YjE2OTY1YmVlNGQyYWE5FW01Tg==: 00:32:18.164 08:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDRlNDJhNWNlNTMxMDNlZDk3YTk2NWZlNzQ5OWRjOGE3Y2Q0YTY1MmU4ZDJhZDQ2ecc0+w==: 00:32:18.164 08:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:18.164 08:19:09 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:18.164 08:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2U0ZTdhMzRlZmY5YjEwM2RmMTgxMTg5M2U0ZDI4YjU2YjE2OTY1YmVlNGQyYWE5FW01Tg==: 00:32:18.164 08:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDRlNDJhNWNlNTMxMDNlZDk3YTk2NWZlNzQ5OWRjOGE3Y2Q0YTY1MmU4ZDJhZDQ2ecc0+w==: ]] 00:32:18.164 08:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDRlNDJhNWNlNTMxMDNlZDk3YTk2NWZlNzQ5OWRjOGE3Y2Q0YTY1MmU4ZDJhZDQ2ecc0+w==: 00:32:18.164 08:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:32:18.164 08:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:18.164 08:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:18.164 08:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:18.164 08:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:18.164 08:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:18.164 08:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:18.164 08:19:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:18.164 08:19:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:18.164 08:19:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:18.164 08:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:18.164 08:19:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:18.164 08:19:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:18.164 08:19:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:18.164 08:19:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:18.164 08:19:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:18.164 08:19:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:18.164 08:19:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:18.164 08:19:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:18.164 08:19:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:18.164 08:19:09 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:18.164 08:19:09 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:18.164 08:19:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:18.164 08:19:09 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:18.421 nvme0n1 00:32:18.421 08:19:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:18.421 08:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:18.421 08:19:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:18.421 08:19:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:18.421 08:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 
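(Note: that is one full authentication round-trip, the unit this test repeats for every digest/dhgroup/key combination: nvmet_auth_set_key plants the DHHC-1 secrets on the kernel target, then connect_authenticate attaches through SPDK's bdev layer with the matching keyring names and verifies the controller comes up. A sketch of one round for sha256/ffdhe2048/keyid 1; the dhchap_* attribute paths are assumptions (the trace's redirections are hidden), and the long DHHC-1 strings are elided here:

host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
echo 'hmac(sha256)'  > "$host/dhchap_hash"
echo ffdhe2048       > "$host/dhchap_dhgroup"
echo 'DHHC-1:00:...' > "$host/dhchap_key"        # keys[1]
echo 'DHHC-1:02:...' > "$host/dhchap_ctrl_key"   # ckeys[1], only when present

scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1
scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
scripts/rpc.py bdev_nvme_detach_controller nvme0)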
00:32:18.421 08:19:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:18.421 08:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:18.421 08:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:18.421 08:19:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:18.421 08:19:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:18.421 08:19:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:18.421 08:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:18.421 08:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:32:18.421 08:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:18.421 08:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:18.421 08:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:18.421 08:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:18.421 08:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTE5ZDM4MWUwYTA4NWEzN2M2N2NiOTQzODU5NDMzY2Ot+1Qd: 00:32:18.421 08:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGY2N2UyYmU3YmJjZmZhY2EyMWNmOTJiM2Q1YTgzNziM9L+p: 00:32:18.421 08:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:18.421 08:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:18.421 08:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTE5ZDM4MWUwYTA4NWEzN2M2N2NiOTQzODU5NDMzY2Ot+1Qd: 00:32:18.421 08:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGY2N2UyYmU3YmJjZmZhY2EyMWNmOTJiM2Q1YTgzNziM9L+p: ]] 00:32:18.421 08:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGY2N2UyYmU3YmJjZmZhY2EyMWNmOTJiM2Q1YTgzNziM9L+p: 00:32:18.421 08:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:32:18.421 08:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:18.421 08:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:18.421 08:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:18.421 08:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:18.421 08:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:18.421 08:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:18.421 08:19:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:18.421 08:19:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:18.421 08:19:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:18.421 08:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:18.421 08:19:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:18.421 08:19:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:18.421 08:19:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:18.421 08:19:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:18.421 08:19:10 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:18.421 08:19:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:18.421 08:19:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:18.421 08:19:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:18.421 08:19:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:18.421 08:19:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:18.421 08:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:18.421 08:19:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:18.421 08:19:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:18.680 nvme0n1 00:32:18.680 08:19:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:18.680 08:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:18.680 08:19:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:18.680 08:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:18.680 08:19:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:18.680 08:19:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:18.680 08:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:18.680 08:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:18.680 08:19:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:18.680 08:19:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:18.680 08:19:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:18.680 08:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:18.680 08:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:32:18.680 08:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:18.680 08:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:18.680 08:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:18.680 08:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:18.680 08:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTBjNmVjYjk0ODA0MGRhYTI4ZDVjMGFmNTgxMDQ0MGEzYmY3MWQ3ODhlZmI5YWI3sB0YlA==: 00:32:18.680 08:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2RkMWNjZDAwYWZmN2MzODkyOTQ3NzI5NmY2YzkwNjAocmBj: 00:32:18.680 08:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:18.680 08:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:18.680 08:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTBjNmVjYjk0ODA0MGRhYTI4ZDVjMGFmNTgxMDQ0MGEzYmY3MWQ3ODhlZmI5YWI3sB0YlA==: 00:32:18.680 08:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2RkMWNjZDAwYWZmN2MzODkyOTQ3NzI5NmY2YzkwNjAocmBj: ]] 00:32:18.680 08:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2RkMWNjZDAwYWZmN2MzODkyOTQ3NzI5NmY2YzkwNjAocmBj: 00:32:18.680 08:19:10 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:32:18.680 08:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:18.680 08:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:18.680 08:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:18.680 08:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:18.680 08:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:18.680 08:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:18.680 08:19:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:18.680 08:19:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:18.680 08:19:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:18.680 08:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:18.680 08:19:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:18.680 08:19:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:18.680 08:19:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:18.680 08:19:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:18.680 08:19:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:18.680 08:19:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:18.680 08:19:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:18.680 08:19:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:18.680 08:19:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:18.680 08:19:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:18.680 08:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:18.680 08:19:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:18.680 08:19:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:18.938 nvme0n1 00:32:18.938 08:19:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:18.938 08:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:18.938 08:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:18.938 08:19:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:18.938 08:19:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:18.938 08:19:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:18.938 08:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:18.938 08:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:18.938 08:19:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:18.938 08:19:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:18.938 08:19:10 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:18.938 08:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:18.938 08:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:32:18.938 08:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:18.938 08:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:18.938 08:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:18.938 08:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:18.938 08:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGQzN2Q0MWVmNWY3ZWViZTcxMjJmMjBkY2FmOGMzMmIyODBkZjM4NmRjMTgzNzQ4NDVkODkzNDcwZmJiYWNkMndWE5U=: 00:32:18.938 08:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:18.938 08:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:18.938 08:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:18.938 08:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGQzN2Q0MWVmNWY3ZWViZTcxMjJmMjBkY2FmOGMzMmIyODBkZjM4NmRjMTgzNzQ4NDVkODkzNDcwZmJiYWNkMndWE5U=: 00:32:18.938 08:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:18.938 08:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:32:18.938 08:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:18.938 08:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:18.938 08:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:18.938 08:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:18.938 08:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:18.938 08:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:18.938 08:19:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:18.938 08:19:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:18.938 08:19:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:18.938 08:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:18.938 08:19:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:18.938 08:19:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:18.938 08:19:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:18.938 08:19:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:18.938 08:19:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:18.938 08:19:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:18.938 08:19:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:18.938 08:19:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:18.938 08:19:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:18.938 08:19:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:18.938 08:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:18.938 08:19:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:18.938 08:19:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.196 nvme0n1 00:32:19.196 08:19:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:19.196 08:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:19.196 08:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:19.196 08:19:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:19.196 08:19:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.196 08:19:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:19.196 08:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:19.196 08:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:19.196 08:19:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:19.196 08:19:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.196 08:19:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:19.196 08:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:19.196 08:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:19.196 08:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:32:19.196 08:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:19.196 08:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:19.196 08:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:19.196 08:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:19.196 08:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDA1YTc4Y2JiNDAxYjg3NDdhZTdmZGFiZTlhY2QyNjJnMomG: 00:32:19.196 08:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:N2I4OTk2Nzc1ZDQ4YWJmODI0YWM5NzQ5M2MyNmYzN2IwOTZmNjcxMzQ2YTliN2JiMzk3NjhlNjU5OTUxMDZjNzPBi8Q=: 00:32:19.196 08:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:19.196 08:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:19.196 08:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDA1YTc4Y2JiNDAxYjg3NDdhZTdmZGFiZTlhY2QyNjJnMomG: 00:32:19.196 08:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:N2I4OTk2Nzc1ZDQ4YWJmODI0YWM5NzQ5M2MyNmYzN2IwOTZmNjcxMzQ2YTliN2JiMzk3NjhlNjU5OTUxMDZjNzPBi8Q=: ]] 00:32:19.196 08:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:N2I4OTk2Nzc1ZDQ4YWJmODI0YWM5NzQ5M2MyNmYzN2IwOTZmNjcxMzQ2YTliN2JiMzk3NjhlNjU5OTUxMDZjNzPBi8Q=: 00:32:19.196 08:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:32:19.196 08:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:19.196 08:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:19.196 08:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:19.196 08:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:19.196 08:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:32:19.196 08:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:19.196 08:19:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:19.196 08:19:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.196 08:19:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:19.196 08:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:19.196 08:19:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:19.196 08:19:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:19.196 08:19:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:19.196 08:19:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:19.196 08:19:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:19.196 08:19:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:19.196 08:19:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:19.196 08:19:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:19.196 08:19:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:19.196 08:19:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:19.196 08:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:19.196 08:19:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:19.196 08:19:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.196 nvme0n1 00:32:19.196 08:19:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:19.454 08:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:19.454 08:19:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:19.454 08:19:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.454 08:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:19.454 08:19:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:19.454 08:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:19.455 08:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:19.455 08:19:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:19.455 08:19:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.455 08:19:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:19.455 08:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:19.455 08:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:32:19.455 08:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:19.455 08:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:19.455 08:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:19.455 08:19:10 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@44 -- # keyid=1 00:32:19.455 08:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2U0ZTdhMzRlZmY5YjEwM2RmMTgxMTg5M2U0ZDI4YjU2YjE2OTY1YmVlNGQyYWE5FW01Tg==: 00:32:19.455 08:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDRlNDJhNWNlNTMxMDNlZDk3YTk2NWZlNzQ5OWRjOGE3Y2Q0YTY1MmU4ZDJhZDQ2ecc0+w==: 00:32:19.455 08:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:19.455 08:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:19.455 08:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2U0ZTdhMzRlZmY5YjEwM2RmMTgxMTg5M2U0ZDI4YjU2YjE2OTY1YmVlNGQyYWE5FW01Tg==: 00:32:19.455 08:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDRlNDJhNWNlNTMxMDNlZDk3YTk2NWZlNzQ5OWRjOGE3Y2Q0YTY1MmU4ZDJhZDQ2ecc0+w==: ]] 00:32:19.455 08:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDRlNDJhNWNlNTMxMDNlZDk3YTk2NWZlNzQ5OWRjOGE3Y2Q0YTY1MmU4ZDJhZDQ2ecc0+w==: 00:32:19.455 08:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:32:19.455 08:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:19.455 08:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:19.455 08:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:19.455 08:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:19.455 08:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:19.455 08:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:19.455 08:19:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:19.455 08:19:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.455 08:19:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:19.455 08:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:19.455 08:19:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:19.455 08:19:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:19.455 08:19:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:19.455 08:19:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:19.455 08:19:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:19.455 08:19:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:19.455 08:19:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:19.455 08:19:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:19.455 08:19:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:19.455 08:19:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:19.455 08:19:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:19.455 08:19:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:19.455 08:19:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.455 nvme0n1 00:32:19.455 
08:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:19.455 08:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:19.455 08:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:19.713 08:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:19.713 08:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.713 08:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:19.713 08:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:19.713 08:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:19.713 08:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:19.713 08:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.713 08:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:19.713 08:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:19.713 08:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:32:19.713 08:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:19.714 08:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:19.714 08:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:19.714 08:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:19.714 08:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTE5ZDM4MWUwYTA4NWEzN2M2N2NiOTQzODU5NDMzY2Ot+1Qd: 00:32:19.714 08:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGY2N2UyYmU3YmJjZmZhY2EyMWNmOTJiM2Q1YTgzNziM9L+p: 00:32:19.714 08:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:19.714 08:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:19.714 08:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTE5ZDM4MWUwYTA4NWEzN2M2N2NiOTQzODU5NDMzY2Ot+1Qd: 00:32:19.714 08:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGY2N2UyYmU3YmJjZmZhY2EyMWNmOTJiM2Q1YTgzNziM9L+p: ]] 00:32:19.714 08:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGY2N2UyYmU3YmJjZmZhY2EyMWNmOTJiM2Q1YTgzNziM9L+p: 00:32:19.714 08:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:32:19.714 08:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:19.714 08:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:19.714 08:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:19.714 08:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:19.714 08:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:19.714 08:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:19.714 08:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:19.714 08:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.714 08:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:19.714 08:19:11 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:32:19.714 08:19:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:19.714 08:19:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:19.714 08:19:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:19.714 08:19:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:19.714 08:19:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:19.714 08:19:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:19.714 08:19:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:19.714 08:19:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:19.714 08:19:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:19.714 08:19:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:19.714 08:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:19.714 08:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:19.714 08:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.714 nvme0n1 00:32:19.714 08:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:19.714 08:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:19.714 08:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:19.714 08:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:19.714 08:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.972 08:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:19.972 08:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:19.972 08:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:19.972 08:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:19.972 08:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.972 08:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:19.972 08:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:19.972 08:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:32:19.972 08:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:19.972 08:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:19.972 08:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:19.972 08:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:19.972 08:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTBjNmVjYjk0ODA0MGRhYTI4ZDVjMGFmNTgxMDQ0MGEzYmY3MWQ3ODhlZmI5YWI3sB0YlA==: 00:32:19.972 08:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2RkMWNjZDAwYWZmN2MzODkyOTQ3NzI5NmY2YzkwNjAocmBj: 00:32:19.972 08:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:19.972 08:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
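[editor's note] The records above repeat one fixed host-side cycle per digest/DH-group/key-ID combination. Below is a minimal sketch of that cycle, reconstructed from this trace rather than copied from the host/auth.sh source — rpc_cmd is the script's own RPC wrapper seen throughout the log, and the ckeys array holding the optional controller keys is assumed from the @58 expansion:

    connect_authenticate() {
      local digest=$1 dhgroup=$2 keyid=$3
      # Restrict the host to the digest/DH group under test.
      rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
      # Connect with the numbered host key, adding the controller key only when
      # one exists (mirrors the ckey expansion at host/auth.sh@58).
      rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key$keyid" ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"}
      # Authentication passed if the controller materialized; then tear it down.
      [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
      rpc_cmd bdev_nvme_detach_controller nvme0
    }
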
00:32:19.972 08:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTBjNmVjYjk0ODA0MGRhYTI4ZDVjMGFmNTgxMDQ0MGEzYmY3MWQ3ODhlZmI5YWI3sB0YlA==: 00:32:19.972 08:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2RkMWNjZDAwYWZmN2MzODkyOTQ3NzI5NmY2YzkwNjAocmBj: ]] 00:32:19.972 08:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2RkMWNjZDAwYWZmN2MzODkyOTQ3NzI5NmY2YzkwNjAocmBj: 00:32:19.972 08:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:32:19.972 08:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:19.972 08:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:19.972 08:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:19.972 08:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:19.972 08:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:19.972 08:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:19.972 08:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:19.972 08:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:19.972 08:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:19.972 08:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:19.972 08:19:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:19.972 08:19:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:19.972 08:19:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:19.972 08:19:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:19.972 08:19:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:19.972 08:19:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:19.972 08:19:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:19.972 08:19:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:19.972 08:19:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:19.972 08:19:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:19.972 08:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:19.972 08:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:19.972 08:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.229 nvme0n1 00:32:20.229 08:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.229 08:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:20.229 08:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:20.229 08:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.229 08:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.229 08:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.230 
08:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:20.230 08:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:20.230 08:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.230 08:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.230 08:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.230 08:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:20.230 08:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:32:20.230 08:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:20.230 08:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:20.230 08:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:20.230 08:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:20.230 08:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGQzN2Q0MWVmNWY3ZWViZTcxMjJmMjBkY2FmOGMzMmIyODBkZjM4NmRjMTgzNzQ4NDVkODkzNDcwZmJiYWNkMndWE5U=: 00:32:20.230 08:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:20.230 08:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:20.230 08:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:20.230 08:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGQzN2Q0MWVmNWY3ZWViZTcxMjJmMjBkY2FmOGMzMmIyODBkZjM4NmRjMTgzNzQ4NDVkODkzNDcwZmJiYWNkMndWE5U=: 00:32:20.230 08:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:20.230 08:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:32:20.230 08:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:20.230 08:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:20.230 08:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:20.230 08:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:20.230 08:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:20.230 08:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:20.230 08:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.230 08:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.230 08:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.230 08:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:20.230 08:19:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:20.230 08:19:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:20.230 08:19:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:20.230 08:19:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:20.230 08:19:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:20.230 08:19:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:20.230 08:19:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:20.230 08:19:11 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:20.230 08:19:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:20.230 08:19:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:20.230 08:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:20.230 08:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.230 08:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.488 nvme0n1 00:32:20.488 08:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.488 08:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:20.488 08:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.488 08:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.488 08:19:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:20.488 08:19:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.488 08:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:20.488 08:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:20.488 08:19:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.488 08:19:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.488 08:19:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.488 08:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:20.488 08:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:20.488 08:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:32:20.488 08:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:20.488 08:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:20.488 08:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:20.488 08:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:20.488 08:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDA1YTc4Y2JiNDAxYjg3NDdhZTdmZGFiZTlhY2QyNjJnMomG: 00:32:20.488 08:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:N2I4OTk2Nzc1ZDQ4YWJmODI0YWM5NzQ5M2MyNmYzN2IwOTZmNjcxMzQ2YTliN2JiMzk3NjhlNjU5OTUxMDZjNzPBi8Q=: 00:32:20.488 08:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:20.488 08:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:20.488 08:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDA1YTc4Y2JiNDAxYjg3NDdhZTdmZGFiZTlhY2QyNjJnMomG: 00:32:20.488 08:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:N2I4OTk2Nzc1ZDQ4YWJmODI0YWM5NzQ5M2MyNmYzN2IwOTZmNjcxMzQ2YTliN2JiMzk3NjhlNjU5OTUxMDZjNzPBi8Q=: ]] 00:32:20.488 08:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:N2I4OTk2Nzc1ZDQ4YWJmODI0YWM5NzQ5M2MyNmYzN2IwOTZmNjcxMzQ2YTliN2JiMzk3NjhlNjU5OTUxMDZjNzPBi8Q=: 00:32:20.488 08:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:32:20.488 08:19:12 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:20.488 08:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:20.488 08:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:20.488 08:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:20.488 08:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:20.488 08:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:20.488 08:19:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.488 08:19:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.488 08:19:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.488 08:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:20.488 08:19:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:20.488 08:19:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:20.488 08:19:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:20.488 08:19:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:20.488 08:19:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:20.488 08:19:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:20.488 08:19:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:20.488 08:19:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:20.488 08:19:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:20.488 08:19:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:20.488 08:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:20.488 08:19:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.488 08:19:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.747 nvme0n1 00:32:20.747 08:19:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.747 08:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:20.747 08:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:20.747 08:19:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.747 08:19:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.747 08:19:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.747 08:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:20.747 08:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:20.747 08:19:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.747 08:19:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.747 08:19:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.747 08:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:32:20.747 08:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:32:20.747 08:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:20.747 08:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:20.747 08:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:20.747 08:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:20.747 08:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2U0ZTdhMzRlZmY5YjEwM2RmMTgxMTg5M2U0ZDI4YjU2YjE2OTY1YmVlNGQyYWE5FW01Tg==: 00:32:20.747 08:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDRlNDJhNWNlNTMxMDNlZDk3YTk2NWZlNzQ5OWRjOGE3Y2Q0YTY1MmU4ZDJhZDQ2ecc0+w==: 00:32:20.747 08:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:20.747 08:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:20.747 08:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2U0ZTdhMzRlZmY5YjEwM2RmMTgxMTg5M2U0ZDI4YjU2YjE2OTY1YmVlNGQyYWE5FW01Tg==: 00:32:20.747 08:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDRlNDJhNWNlNTMxMDNlZDk3YTk2NWZlNzQ5OWRjOGE3Y2Q0YTY1MmU4ZDJhZDQ2ecc0+w==: ]] 00:32:20.747 08:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDRlNDJhNWNlNTMxMDNlZDk3YTk2NWZlNzQ5OWRjOGE3Y2Q0YTY1MmU4ZDJhZDQ2ecc0+w==: 00:32:20.747 08:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:32:20.747 08:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:20.747 08:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:20.747 08:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:20.747 08:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:20.747 08:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:20.747 08:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:20.747 08:19:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.747 08:19:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:20.747 08:19:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:20.747 08:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:20.747 08:19:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:20.747 08:19:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:20.747 08:19:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:20.747 08:19:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:20.747 08:19:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:20.747 08:19:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:20.747 08:19:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:20.747 08:19:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:20.747 08:19:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:20.747 08:19:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:20.747 08:19:12 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:20.747 08:19:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:20.747 08:19:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.006 nvme0n1 00:32:21.006 08:19:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.006 08:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:21.006 08:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:21.006 08:19:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.006 08:19:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.006 08:19:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.006 08:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:21.006 08:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:21.006 08:19:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.006 08:19:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.006 08:19:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.006 08:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:21.006 08:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:32:21.006 08:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:21.006 08:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:21.006 08:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:21.006 08:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:21.006 08:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTE5ZDM4MWUwYTA4NWEzN2M2N2NiOTQzODU5NDMzY2Ot+1Qd: 00:32:21.006 08:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGY2N2UyYmU3YmJjZmZhY2EyMWNmOTJiM2Q1YTgzNziM9L+p: 00:32:21.006 08:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:21.006 08:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:21.006 08:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTE5ZDM4MWUwYTA4NWEzN2M2N2NiOTQzODU5NDMzY2Ot+1Qd: 00:32:21.006 08:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGY2N2UyYmU3YmJjZmZhY2EyMWNmOTJiM2Q1YTgzNziM9L+p: ]] 00:32:21.006 08:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGY2N2UyYmU3YmJjZmZhY2EyMWNmOTJiM2Q1YTgzNziM9L+p: 00:32:21.006 08:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:32:21.006 08:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:21.006 08:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:21.006 08:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:21.006 08:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:21.006 08:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:21.006 08:19:12 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:21.006 08:19:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.006 08:19:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.006 08:19:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.006 08:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:21.006 08:19:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:21.006 08:19:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:21.006 08:19:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:21.006 08:19:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:21.006 08:19:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:21.006 08:19:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:21.006 08:19:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:21.006 08:19:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:21.006 08:19:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:21.006 08:19:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:21.006 08:19:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:21.006 08:19:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.006 08:19:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.569 nvme0n1 00:32:21.569 08:19:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.569 08:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:21.569 08:19:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.569 08:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:21.570 08:19:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.570 08:19:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.570 08:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:21.570 08:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:21.570 08:19:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.570 08:19:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.570 08:19:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.570 08:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:21.570 08:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:32:21.570 08:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:21.570 08:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:21.570 08:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:21.570 08:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
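[editor's note] For orientation, the host/auth.sh@101-@103 markers in these records come from the sweep driving this whole section. A sketch of that outer loop, assuming the keys/ckeys arrays populated earlier in the script and listing only the DH groups actually visible in this part of the log:

    for dhgroup in ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192; do
      for keyid in "${!keys[@]}"; do                      # key IDs 0..4 in this run
        nvmet_auth_set_key sha256 "$dhgroup" "$keyid"     # program the target side first
        connect_authenticate sha256 "$dhgroup" "$keyid"   # then authenticate from the host
      done
    done
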
00:32:21.570 08:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTBjNmVjYjk0ODA0MGRhYTI4ZDVjMGFmNTgxMDQ0MGEzYmY3MWQ3ODhlZmI5YWI3sB0YlA==: 00:32:21.570 08:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2RkMWNjZDAwYWZmN2MzODkyOTQ3NzI5NmY2YzkwNjAocmBj: 00:32:21.570 08:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:21.570 08:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:21.570 08:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTBjNmVjYjk0ODA0MGRhYTI4ZDVjMGFmNTgxMDQ0MGEzYmY3MWQ3ODhlZmI5YWI3sB0YlA==: 00:32:21.570 08:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2RkMWNjZDAwYWZmN2MzODkyOTQ3NzI5NmY2YzkwNjAocmBj: ]] 00:32:21.570 08:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2RkMWNjZDAwYWZmN2MzODkyOTQ3NzI5NmY2YzkwNjAocmBj: 00:32:21.570 08:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:32:21.570 08:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:21.570 08:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:21.570 08:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:21.570 08:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:21.570 08:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:21.570 08:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:21.570 08:19:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.570 08:19:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.570 08:19:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.570 08:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:21.570 08:19:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:21.570 08:19:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:21.570 08:19:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:21.570 08:19:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:21.570 08:19:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:21.570 08:19:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:21.570 08:19:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:21.570 08:19:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:21.570 08:19:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:21.570 08:19:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:21.570 08:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:21.570 08:19:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.570 08:19:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.827 nvme0n1 00:32:21.827 08:19:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.827 08:19:13 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:21.827 08:19:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.827 08:19:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.827 08:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:21.827 08:19:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.827 08:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:21.827 08:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:21.827 08:19:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.827 08:19:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.827 08:19:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.827 08:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:21.827 08:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:32:21.827 08:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:21.827 08:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:21.827 08:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:21.827 08:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:21.827 08:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGQzN2Q0MWVmNWY3ZWViZTcxMjJmMjBkY2FmOGMzMmIyODBkZjM4NmRjMTgzNzQ4NDVkODkzNDcwZmJiYWNkMndWE5U=: 00:32:21.827 08:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:21.827 08:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:21.827 08:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:21.827 08:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGQzN2Q0MWVmNWY3ZWViZTcxMjJmMjBkY2FmOGMzMmIyODBkZjM4NmRjMTgzNzQ4NDVkODkzNDcwZmJiYWNkMndWE5U=: 00:32:21.827 08:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:21.827 08:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:32:21.827 08:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:21.827 08:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:21.827 08:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:21.827 08:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:21.827 08:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:21.828 08:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:21.828 08:19:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.828 08:19:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:21.828 08:19:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.828 08:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:21.828 08:19:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:21.828 08:19:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:21.828 08:19:13 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:32:21.828 08:19:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:21.828 08:19:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:21.828 08:19:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:21.828 08:19:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:21.828 08:19:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:21.828 08:19:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:21.828 08:19:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:21.828 08:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:21.828 08:19:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.828 08:19:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.086 nvme0n1 00:32:22.086 08:19:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.086 08:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:22.086 08:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:22.086 08:19:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.086 08:19:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.086 08:19:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.086 08:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:22.086 08:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:22.086 08:19:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.086 08:19:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.086 08:19:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.086 08:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:22.086 08:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:22.086 08:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:32:22.086 08:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:22.086 08:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:22.086 08:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:22.086 08:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:22.086 08:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDA1YTc4Y2JiNDAxYjg3NDdhZTdmZGFiZTlhY2QyNjJnMomG: 00:32:22.086 08:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:N2I4OTk2Nzc1ZDQ4YWJmODI0YWM5NzQ5M2MyNmYzN2IwOTZmNjcxMzQ2YTliN2JiMzk3NjhlNjU5OTUxMDZjNzPBi8Q=: 00:32:22.086 08:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:22.086 08:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:22.086 08:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDA1YTc4Y2JiNDAxYjg3NDdhZTdmZGFiZTlhY2QyNjJnMomG: 00:32:22.086 08:19:13 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:N2I4OTk2Nzc1ZDQ4YWJmODI0YWM5NzQ5M2MyNmYzN2IwOTZmNjcxMzQ2YTliN2JiMzk3NjhlNjU5OTUxMDZjNzPBi8Q=: ]] 00:32:22.086 08:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:N2I4OTk2Nzc1ZDQ4YWJmODI0YWM5NzQ5M2MyNmYzN2IwOTZmNjcxMzQ2YTliN2JiMzk3NjhlNjU5OTUxMDZjNzPBi8Q=: 00:32:22.086 08:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:32:22.086 08:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:22.086 08:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:22.086 08:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:22.086 08:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:22.086 08:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:22.086 08:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:22.086 08:19:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.086 08:19:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.086 08:19:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.086 08:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:22.086 08:19:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:22.086 08:19:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:22.086 08:19:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:22.086 08:19:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:22.086 08:19:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:22.086 08:19:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:22.086 08:19:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:22.086 08:19:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:22.086 08:19:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:22.086 08:19:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:22.086 08:19:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:22.086 08:19:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.086 08:19:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.652 nvme0n1 00:32:22.652 08:19:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.652 08:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:22.652 08:19:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.652 08:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:22.652 08:19:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.652 08:19:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.652 08:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:22.652 
08:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:22.652 08:19:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.652 08:19:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.652 08:19:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.652 08:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:22.652 08:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:32:22.652 08:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:22.652 08:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:22.652 08:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:22.652 08:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:22.652 08:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2U0ZTdhMzRlZmY5YjEwM2RmMTgxMTg5M2U0ZDI4YjU2YjE2OTY1YmVlNGQyYWE5FW01Tg==: 00:32:22.652 08:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDRlNDJhNWNlNTMxMDNlZDk3YTk2NWZlNzQ5OWRjOGE3Y2Q0YTY1MmU4ZDJhZDQ2ecc0+w==: 00:32:22.652 08:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:22.652 08:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:22.652 08:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2U0ZTdhMzRlZmY5YjEwM2RmMTgxMTg5M2U0ZDI4YjU2YjE2OTY1YmVlNGQyYWE5FW01Tg==: 00:32:22.652 08:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDRlNDJhNWNlNTMxMDNlZDk3YTk2NWZlNzQ5OWRjOGE3Y2Q0YTY1MmU4ZDJhZDQ2ecc0+w==: ]] 00:32:22.652 08:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDRlNDJhNWNlNTMxMDNlZDk3YTk2NWZlNzQ5OWRjOGE3Y2Q0YTY1MmU4ZDJhZDQ2ecc0+w==: 00:32:22.910 08:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:32:22.910 08:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:22.910 08:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:22.910 08:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:22.910 08:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:22.910 08:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:22.910 08:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:22.910 08:19:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.910 08:19:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:22.910 08:19:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.910 08:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:22.910 08:19:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:22.910 08:19:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:22.910 08:19:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:22.910 08:19:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:22.910 08:19:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:22.910 08:19:14 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:22.910 08:19:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:22.910 08:19:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:22.910 08:19:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:22.910 08:19:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:22.910 08:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:22.910 08:19:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.910 08:19:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.476 nvme0n1 00:32:23.476 08:19:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.476 08:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:23.476 08:19:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.476 08:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:23.476 08:19:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.476 08:19:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.476 08:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:23.476 08:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:23.476 08:19:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.476 08:19:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.476 08:19:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.476 08:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:23.476 08:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:32:23.476 08:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:23.476 08:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:23.476 08:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:23.476 08:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:23.476 08:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTE5ZDM4MWUwYTA4NWEzN2M2N2NiOTQzODU5NDMzY2Ot+1Qd: 00:32:23.476 08:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGY2N2UyYmU3YmJjZmZhY2EyMWNmOTJiM2Q1YTgzNziM9L+p: 00:32:23.476 08:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:23.476 08:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:23.476 08:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTE5ZDM4MWUwYTA4NWEzN2M2N2NiOTQzODU5NDMzY2Ot+1Qd: 00:32:23.476 08:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGY2N2UyYmU3YmJjZmZhY2EyMWNmOTJiM2Q1YTgzNziM9L+p: ]] 00:32:23.476 08:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGY2N2UyYmU3YmJjZmZhY2EyMWNmOTJiM2Q1YTgzNziM9L+p: 00:32:23.476 08:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:32:23.476 08:19:14 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:23.476 08:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:23.476 08:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:23.476 08:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:23.476 08:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:23.476 08:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:23.476 08:19:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.476 08:19:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:23.476 08:19:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.476 08:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:23.476 08:19:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:23.476 08:19:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:23.476 08:19:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:23.476 08:19:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:23.476 08:19:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:23.476 08:19:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:23.476 08:19:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:23.476 08:19:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:23.476 08:19:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:23.476 08:19:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:23.476 08:19:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:23.476 08:19:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.476 08:19:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.042 nvme0n1 00:32:24.042 08:19:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:24.042 08:19:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:24.042 08:19:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:24.042 08:19:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.042 08:19:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:24.042 08:19:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:24.042 08:19:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:24.042 08:19:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:24.042 08:19:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:24.042 08:19:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.042 08:19:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:24.042 08:19:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:24.042 
08:19:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:32:24.042 08:19:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:24.042 08:19:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:24.042 08:19:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:24.042 08:19:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:24.042 08:19:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTBjNmVjYjk0ODA0MGRhYTI4ZDVjMGFmNTgxMDQ0MGEzYmY3MWQ3ODhlZmI5YWI3sB0YlA==: 00:32:24.042 08:19:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2RkMWNjZDAwYWZmN2MzODkyOTQ3NzI5NmY2YzkwNjAocmBj: 00:32:24.042 08:19:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:24.042 08:19:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:24.042 08:19:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTBjNmVjYjk0ODA0MGRhYTI4ZDVjMGFmNTgxMDQ0MGEzYmY3MWQ3ODhlZmI5YWI3sB0YlA==: 00:32:24.042 08:19:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2RkMWNjZDAwYWZmN2MzODkyOTQ3NzI5NmY2YzkwNjAocmBj: ]] 00:32:24.042 08:19:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2RkMWNjZDAwYWZmN2MzODkyOTQ3NzI5NmY2YzkwNjAocmBj: 00:32:24.042 08:19:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:32:24.042 08:19:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:24.042 08:19:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:24.042 08:19:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:24.042 08:19:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:24.042 08:19:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:24.042 08:19:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:24.042 08:19:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:24.042 08:19:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.042 08:19:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:24.042 08:19:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:24.042 08:19:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:24.043 08:19:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:24.043 08:19:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:24.043 08:19:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:24.043 08:19:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:24.043 08:19:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:24.043 08:19:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:24.043 08:19:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:24.043 08:19:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:24.043 08:19:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:24.043 08:19:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:24.043 08:19:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:24.043 08:19:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.607 nvme0n1 00:32:24.607 08:19:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:24.607 08:19:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:24.607 08:19:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:24.607 08:19:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.607 08:19:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:24.607 08:19:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:24.607 08:19:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:24.607 08:19:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:24.607 08:19:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:24.607 08:19:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:24.607 08:19:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:24.607 08:19:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:24.607 08:19:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:32:24.607 08:19:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:24.607 08:19:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:24.607 08:19:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:24.607 08:19:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:24.607 08:19:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGQzN2Q0MWVmNWY3ZWViZTcxMjJmMjBkY2FmOGMzMmIyODBkZjM4NmRjMTgzNzQ4NDVkODkzNDcwZmJiYWNkMndWE5U=: 00:32:24.607 08:19:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:24.607 08:19:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:24.607 08:19:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:24.607 08:19:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGQzN2Q0MWVmNWY3ZWViZTcxMjJmMjBkY2FmOGMzMmIyODBkZjM4NmRjMTgzNzQ4NDVkODkzNDcwZmJiYWNkMndWE5U=: 00:32:24.607 08:19:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:24.607 08:19:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:32:24.607 08:19:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:24.607 08:19:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:24.607 08:19:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:24.607 08:19:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:24.607 08:19:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:24.607 08:19:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:24.607 08:19:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:24.607 08:19:16 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:32:24.607 08:19:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:24.607 08:19:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:24.607 08:19:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:24.607 08:19:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:24.607 08:19:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:24.607 08:19:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:24.607 08:19:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:24.607 08:19:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:24.607 08:19:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:24.607 08:19:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:24.607 08:19:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:24.607 08:19:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:24.607 08:19:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:24.607 08:19:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:24.607 08:19:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.173 nvme0n1 00:32:25.173 08:19:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:25.173 08:19:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:25.173 08:19:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:25.173 08:19:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:25.173 08:19:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.173 08:19:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:25.173 08:19:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:25.173 08:19:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:25.173 08:19:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:25.173 08:19:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.173 08:19:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:25.173 08:19:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:25.173 08:19:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:25.173 08:19:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:32:25.173 08:19:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:25.173 08:19:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:25.173 08:19:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:25.173 08:19:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:25.173 08:19:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDA1YTc4Y2JiNDAxYjg3NDdhZTdmZGFiZTlhY2QyNjJnMomG: 00:32:25.173 08:19:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:N2I4OTk2Nzc1ZDQ4YWJmODI0YWM5NzQ5M2MyNmYzN2IwOTZmNjcxMzQ2YTliN2JiMzk3NjhlNjU5OTUxMDZjNzPBi8Q=: 00:32:25.173 08:19:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:25.173 08:19:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:25.173 08:19:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDA1YTc4Y2JiNDAxYjg3NDdhZTdmZGFiZTlhY2QyNjJnMomG: 00:32:25.173 08:19:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:N2I4OTk2Nzc1ZDQ4YWJmODI0YWM5NzQ5M2MyNmYzN2IwOTZmNjcxMzQ2YTliN2JiMzk3NjhlNjU5OTUxMDZjNzPBi8Q=: ]] 00:32:25.173 08:19:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:N2I4OTk2Nzc1ZDQ4YWJmODI0YWM5NzQ5M2MyNmYzN2IwOTZmNjcxMzQ2YTliN2JiMzk3NjhlNjU5OTUxMDZjNzPBi8Q=: 00:32:25.173 08:19:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:32:25.173 08:19:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:25.173 08:19:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:25.173 08:19:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:25.173 08:19:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:25.173 08:19:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:25.173 08:19:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:25.173 08:19:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:25.173 08:19:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:25.173 08:19:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:25.173 08:19:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:25.173 08:19:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:25.173 08:19:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:25.173 08:19:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:25.173 08:19:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:25.173 08:19:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:25.173 08:19:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:25.173 08:19:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:25.173 08:19:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:25.173 08:19:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:25.173 08:19:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:25.173 08:19:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:25.173 08:19:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:25.173 08:19:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.105 nvme0n1 00:32:26.105 08:19:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:26.105 08:19:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:26.105 08:19:17 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:26.105 08:19:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:26.105 08:19:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.105 08:19:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:26.105 08:19:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:26.105 08:19:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:26.105 08:19:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:26.105 08:19:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.105 08:19:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:26.105 08:19:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:26.105 08:19:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:32:26.105 08:19:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:26.105 08:19:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:26.105 08:19:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:26.105 08:19:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:26.105 08:19:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2U0ZTdhMzRlZmY5YjEwM2RmMTgxMTg5M2U0ZDI4YjU2YjE2OTY1YmVlNGQyYWE5FW01Tg==: 00:32:26.105 08:19:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDRlNDJhNWNlNTMxMDNlZDk3YTk2NWZlNzQ5OWRjOGE3Y2Q0YTY1MmU4ZDJhZDQ2ecc0+w==: 00:32:26.105 08:19:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:26.105 08:19:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:26.105 08:19:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2U0ZTdhMzRlZmY5YjEwM2RmMTgxMTg5M2U0ZDI4YjU2YjE2OTY1YmVlNGQyYWE5FW01Tg==: 00:32:26.106 08:19:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDRlNDJhNWNlNTMxMDNlZDk3YTk2NWZlNzQ5OWRjOGE3Y2Q0YTY1MmU4ZDJhZDQ2ecc0+w==: ]] 00:32:26.106 08:19:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDRlNDJhNWNlNTMxMDNlZDk3YTk2NWZlNzQ5OWRjOGE3Y2Q0YTY1MmU4ZDJhZDQ2ecc0+w==: 00:32:26.106 08:19:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:32:26.106 08:19:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:26.106 08:19:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:26.106 08:19:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:26.106 08:19:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:26.106 08:19:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:26.106 08:19:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:26.106 08:19:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:26.106 08:19:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:26.106 08:19:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:26.106 08:19:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:26.106 08:19:17 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:32:26.106 08:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:26.106 08:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:26.106 08:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:26.106 08:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:26.106 08:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:26.106 08:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:26.106 08:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:26.106 08:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:26.106 08:19:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:26.106 08:19:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:26.106 08:19:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:26.106 08:19:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.038 nvme0n1 00:32:27.038 08:19:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.038 08:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:27.038 08:19:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.038 08:19:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.038 08:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:27.038 08:19:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.303 08:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:27.303 08:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:27.303 08:19:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.303 08:19:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.303 08:19:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.303 08:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:27.303 08:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:32:27.304 08:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:27.304 08:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:27.304 08:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:27.304 08:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:27.304 08:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTE5ZDM4MWUwYTA4NWEzN2M2N2NiOTQzODU5NDMzY2Ot+1Qd: 00:32:27.304 08:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGY2N2UyYmU3YmJjZmZhY2EyMWNmOTJiM2Q1YTgzNziM9L+p: 00:32:27.304 08:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:27.304 08:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:27.304 08:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:MTE5ZDM4MWUwYTA4NWEzN2M2N2NiOTQzODU5NDMzY2Ot+1Qd: 00:32:27.304 08:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGY2N2UyYmU3YmJjZmZhY2EyMWNmOTJiM2Q1YTgzNziM9L+p: ]] 00:32:27.304 08:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGY2N2UyYmU3YmJjZmZhY2EyMWNmOTJiM2Q1YTgzNziM9L+p: 00:32:27.304 08:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:32:27.304 08:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:27.304 08:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:27.304 08:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:27.304 08:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:27.304 08:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:27.304 08:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:27.304 08:19:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.304 08:19:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:27.304 08:19:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:27.304 08:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:27.304 08:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:27.304 08:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:27.304 08:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:27.304 08:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:27.304 08:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:27.304 08:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:27.304 08:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:27.304 08:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:27.304 08:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:27.304 08:19:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:27.304 08:19:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:27.304 08:19:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:27.304 08:19:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.235 nvme0n1 00:32:28.235 08:19:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:28.235 08:19:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:28.235 08:19:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:28.235 08:19:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.235 08:19:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:28.235 08:19:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:28.235 08:19:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:28.235 
08:19:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:28.235 08:19:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:28.235 08:19:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.235 08:19:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:28.235 08:19:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:28.235 08:19:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:32:28.235 08:19:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:28.235 08:19:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:28.235 08:19:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:28.235 08:19:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:28.235 08:19:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTBjNmVjYjk0ODA0MGRhYTI4ZDVjMGFmNTgxMDQ0MGEzYmY3MWQ3ODhlZmI5YWI3sB0YlA==: 00:32:28.235 08:19:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2RkMWNjZDAwYWZmN2MzODkyOTQ3NzI5NmY2YzkwNjAocmBj: 00:32:28.235 08:19:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:28.235 08:19:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:28.235 08:19:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTBjNmVjYjk0ODA0MGRhYTI4ZDVjMGFmNTgxMDQ0MGEzYmY3MWQ3ODhlZmI5YWI3sB0YlA==: 00:32:28.235 08:19:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2RkMWNjZDAwYWZmN2MzODkyOTQ3NzI5NmY2YzkwNjAocmBj: ]] 00:32:28.235 08:19:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2RkMWNjZDAwYWZmN2MzODkyOTQ3NzI5NmY2YzkwNjAocmBj: 00:32:28.235 08:19:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:32:28.235 08:19:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:28.235 08:19:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:28.235 08:19:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:28.235 08:19:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:28.235 08:19:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:28.235 08:19:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:28.235 08:19:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:28.235 08:19:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:28.235 08:19:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:28.235 08:19:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:28.235 08:19:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:28.235 08:19:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:28.235 08:19:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:28.235 08:19:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:28.235 08:19:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:28.235 08:19:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
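Each pass in this trace has the same shape: nvmet_auth_set_key pushes one DH-HMAC-CHAP key to the kernel target for the host NQN, connect_authenticate pins the SPDK host to a single digest/dhgroup pair and attempts an attach with the matching key, and the controller is verified and detached before the next pass. A minimal standalone sketch of the host-side half, assuming scripts/rpc.py as the RPC client and key names (key3, ckey3) that the test registered earlier in the run; the NQNs, address, and RPC names are taken from the trace itself:

    # One host-side authentication round against a target at 10.0.0.1:4420.
    # key3/ckey3 are assumed to be key names registered before this section.
    scripts/rpc.py bdev_nvme_set_options \
        --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192    # pin one digest/dhgroup pair
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key3 --dhchap-ctrlr-key ckey3             # bidirectional auth
    scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'  # expect: nvme0
    scripts/rpc.py bdev_nvme_detach_controller nvme0             # tear down for the next pass

Detaching between passes matters: bdev_nvme_set_options only affects controllers attached afterwards, which is presumably why the trace removes nvme0 before every new digest/dhgroup combination.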
00:32:28.235 08:19:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:28.235 08:19:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:28.235 08:19:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:28.236 08:19:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:28.236 08:19:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:28.236 08:19:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:28.236 08:19:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.168 nvme0n1 00:32:29.168 08:19:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.168 08:19:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:29.168 08:19:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:29.168 08:19:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.168 08:19:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.168 08:19:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.168 08:19:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:29.168 08:19:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:29.168 08:19:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.168 08:19:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.168 08:19:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.168 08:19:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:29.168 08:19:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:32:29.168 08:19:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:29.168 08:19:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:29.168 08:19:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:29.168 08:19:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:29.168 08:19:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGQzN2Q0MWVmNWY3ZWViZTcxMjJmMjBkY2FmOGMzMmIyODBkZjM4NmRjMTgzNzQ4NDVkODkzNDcwZmJiYWNkMndWE5U=: 00:32:29.168 08:19:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:29.168 08:19:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:29.168 08:19:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:29.168 08:19:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGQzN2Q0MWVmNWY3ZWViZTcxMjJmMjBkY2FmOGMzMmIyODBkZjM4NmRjMTgzNzQ4NDVkODkzNDcwZmJiYWNkMndWE5U=: 00:32:29.168 08:19:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:29.168 08:19:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:32:29.168 08:19:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:29.168 08:19:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:29.168 08:19:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:29.168 
08:19:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:29.168 08:19:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:29.168 08:19:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:29.168 08:19:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.168 08:19:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:29.168 08:19:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:29.168 08:19:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:29.168 08:19:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:29.168 08:19:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:29.168 08:19:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:29.168 08:19:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:29.168 08:19:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:29.168 08:19:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:29.168 08:19:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:29.168 08:19:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:29.168 08:19:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:29.168 08:19:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:29.168 08:19:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:29.168 08:19:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:29.168 08:19:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.542 nvme0n1 00:32:30.542 08:19:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.542 08:19:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:30.542 08:19:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.542 08:19:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.542 08:19:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:30.542 08:19:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.542 08:19:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:30.542 08:19:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:30.542 08:19:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.542 08:19:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.542 08:19:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.542 08:19:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:32:30.542 08:19:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:30.542 08:19:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:30.542 08:19:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:32:30.542 08:19:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:30.542 08:19:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:30.542 08:19:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:30.542 08:19:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:30.542 08:19:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDA1YTc4Y2JiNDAxYjg3NDdhZTdmZGFiZTlhY2QyNjJnMomG: 00:32:30.542 08:19:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:N2I4OTk2Nzc1ZDQ4YWJmODI0YWM5NzQ5M2MyNmYzN2IwOTZmNjcxMzQ2YTliN2JiMzk3NjhlNjU5OTUxMDZjNzPBi8Q=: 00:32:30.542 08:19:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:30.542 08:19:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:30.542 08:19:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDA1YTc4Y2JiNDAxYjg3NDdhZTdmZGFiZTlhY2QyNjJnMomG: 00:32:30.542 08:19:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:N2I4OTk2Nzc1ZDQ4YWJmODI0YWM5NzQ5M2MyNmYzN2IwOTZmNjcxMzQ2YTliN2JiMzk3NjhlNjU5OTUxMDZjNzPBi8Q=: ]] 00:32:30.542 08:19:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:N2I4OTk2Nzc1ZDQ4YWJmODI0YWM5NzQ5M2MyNmYzN2IwOTZmNjcxMzQ2YTliN2JiMzk3NjhlNjU5OTUxMDZjNzPBi8Q=: 00:32:30.543 08:19:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:32:30.543 08:19:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:30.543 08:19:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:30.543 08:19:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:30.543 08:19:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:30.543 08:19:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:30.543 08:19:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:30.543 08:19:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.543 08:19:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.543 08:19:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.543 08:19:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:30.543 08:19:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:30.543 08:19:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:30.543 08:19:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:30.543 08:19:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:30.543 08:19:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:30.543 08:19:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:30.543 08:19:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:30.543 08:19:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:30.543 08:19:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:30.543 08:19:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:30.543 08:19:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:30.543 08:19:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.543 08:19:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.543 nvme0n1 00:32:30.543 08:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.543 08:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:30.543 08:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.543 08:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.543 08:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:30.543 08:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.543 08:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:30.543 08:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:30.543 08:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.543 08:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.543 08:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.543 08:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:30.543 08:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:32:30.543 08:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:30.543 08:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:30.543 08:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:30.543 08:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:30.543 08:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2U0ZTdhMzRlZmY5YjEwM2RmMTgxMTg5M2U0ZDI4YjU2YjE2OTY1YmVlNGQyYWE5FW01Tg==: 00:32:30.543 08:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDRlNDJhNWNlNTMxMDNlZDk3YTk2NWZlNzQ5OWRjOGE3Y2Q0YTY1MmU4ZDJhZDQ2ecc0+w==: 00:32:30.543 08:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:30.543 08:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:30.543 08:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2U0ZTdhMzRlZmY5YjEwM2RmMTgxMTg5M2U0ZDI4YjU2YjE2OTY1YmVlNGQyYWE5FW01Tg==: 00:32:30.543 08:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDRlNDJhNWNlNTMxMDNlZDk3YTk2NWZlNzQ5OWRjOGE3Y2Q0YTY1MmU4ZDJhZDQ2ecc0+w==: ]] 00:32:30.543 08:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDRlNDJhNWNlNTMxMDNlZDk3YTk2NWZlNzQ5OWRjOGE3Y2Q0YTY1MmU4ZDJhZDQ2ecc0+w==: 00:32:30.543 08:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:32:30.543 08:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:30.543 08:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:30.543 08:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:30.543 08:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:30.543 08:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
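The auth.sh@58 line just above is the trick that lets keyid 4 exercise one-way authentication: ${ckeys[keyid]:+...} expands to the --dhchap-ctrlr-key flag only when a controller key exists for that slot, and slot 4's ckey is the empty string throughout this trace. A self-contained illustration of the idiom (the dummy secret stands in for the DHHC-1 strings seen above):

    #!/usr/bin/env bash
    ckeys[1]=dummy-secret   # slots 0-3 carry real DHHC-1 controller keys in the test
    ckeys[4]=               # slot 4 is deliberately empty (no controller key)
    for keyid in 1 4; do
        # Expands to two words when a ctrlr key exists, to zero words when it doesn't.
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        echo "keyid=$keyid -> ${ckey[*]:-host-only (unidirectional) auth}"
    done

As far as the NVMe-oF secret representation goes, the DHHC-1:NN: prefix on the keys themselves indicates how the base64 payload was derived (00 an untransformed secret, 01/02/03 a secret pre-hashed with SHA-256/-384/-512), which is why the key strings above come in several lengths.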
00:32:30.543 08:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:30.543 08:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.543 08:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.543 08:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.543 08:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:30.543 08:19:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:30.543 08:19:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:30.543 08:19:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:30.543 08:19:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:30.543 08:19:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:30.543 08:19:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:30.543 08:19:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:30.543 08:19:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:30.543 08:19:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:30.543 08:19:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:30.543 08:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:30.543 08:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.543 08:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.801 nvme0n1 00:32:30.801 08:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.801 08:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:30.801 08:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.801 08:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.801 08:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:30.801 08:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.801 08:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:30.801 08:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:30.801 08:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.801 08:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.801 08:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.801 08:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:30.801 08:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:32:30.801 08:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:30.801 08:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:30.801 08:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:30.801 08:19:22 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:32:30.801 08:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTE5ZDM4MWUwYTA4NWEzN2M2N2NiOTQzODU5NDMzY2Ot+1Qd: 00:32:30.801 08:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGY2N2UyYmU3YmJjZmZhY2EyMWNmOTJiM2Q1YTgzNziM9L+p: 00:32:30.801 08:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:30.801 08:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:30.801 08:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTE5ZDM4MWUwYTA4NWEzN2M2N2NiOTQzODU5NDMzY2Ot+1Qd: 00:32:30.801 08:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGY2N2UyYmU3YmJjZmZhY2EyMWNmOTJiM2Q1YTgzNziM9L+p: ]] 00:32:30.801 08:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGY2N2UyYmU3YmJjZmZhY2EyMWNmOTJiM2Q1YTgzNziM9L+p: 00:32:30.801 08:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:32:30.801 08:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:30.801 08:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:30.801 08:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:30.801 08:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:30.801 08:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:30.801 08:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:30.801 08:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.801 08:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.802 08:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:30.802 08:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:30.802 08:19:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:30.802 08:19:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:30.802 08:19:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:30.802 08:19:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:30.802 08:19:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:30.802 08:19:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:30.802 08:19:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:30.802 08:19:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:30.802 08:19:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:30.802 08:19:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:30.802 08:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:30.802 08:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:30.802 08:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.060 nvme0n1 00:32:31.060 08:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.060 08:19:22 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:31.060 08:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.060 08:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.060 08:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:31.060 08:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.060 08:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:31.060 08:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:31.060 08:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.060 08:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.060 08:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.060 08:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:31.060 08:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:32:31.060 08:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:31.060 08:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:31.060 08:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:31.060 08:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:31.060 08:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTBjNmVjYjk0ODA0MGRhYTI4ZDVjMGFmNTgxMDQ0MGEzYmY3MWQ3ODhlZmI5YWI3sB0YlA==: 00:32:31.060 08:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2RkMWNjZDAwYWZmN2MzODkyOTQ3NzI5NmY2YzkwNjAocmBj: 00:32:31.060 08:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:31.060 08:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:31.060 08:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTBjNmVjYjk0ODA0MGRhYTI4ZDVjMGFmNTgxMDQ0MGEzYmY3MWQ3ODhlZmI5YWI3sB0YlA==: 00:32:31.060 08:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2RkMWNjZDAwYWZmN2MzODkyOTQ3NzI5NmY2YzkwNjAocmBj: ]] 00:32:31.060 08:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2RkMWNjZDAwYWZmN2MzODkyOTQ3NzI5NmY2YzkwNjAocmBj: 00:32:31.060 08:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:32:31.060 08:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:31.060 08:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:31.060 08:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:31.060 08:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:31.060 08:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:31.060 08:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:31.060 08:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.060 08:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.060 08:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.060 08:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:31.060 08:19:22 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:32:31.060 08:19:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:31.060 08:19:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:31.060 08:19:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:31.060 08:19:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:31.060 08:19:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:31.060 08:19:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:31.060 08:19:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:31.060 08:19:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:31.060 08:19:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:31.060 08:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:31.060 08:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.060 08:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.060 nvme0n1 00:32:31.060 08:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.060 08:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:31.060 08:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.060 08:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.060 08:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:31.060 08:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.321 08:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:31.321 08:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:31.321 08:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.321 08:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.321 08:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.321 08:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:31.321 08:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:32:31.321 08:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:31.321 08:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:31.321 08:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:31.321 08:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:31.321 08:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGQzN2Q0MWVmNWY3ZWViZTcxMjJmMjBkY2FmOGMzMmIyODBkZjM4NmRjMTgzNzQ4NDVkODkzNDcwZmJiYWNkMndWE5U=: 00:32:31.321 08:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:31.321 08:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:31.321 08:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:31.321 08:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:OGQzN2Q0MWVmNWY3ZWViZTcxMjJmMjBkY2FmOGMzMmIyODBkZjM4NmRjMTgzNzQ4NDVkODkzNDcwZmJiYWNkMndWE5U=: 00:32:31.321 08:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:31.321 08:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:32:31.321 08:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:31.321 08:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:31.321 08:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:31.321 08:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:31.321 08:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:31.321 08:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:31.321 08:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.321 08:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.321 08:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.321 08:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:31.321 08:19:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:31.321 08:19:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:31.321 08:19:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:31.321 08:19:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:31.321 08:19:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:31.321 08:19:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:31.321 08:19:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:31.321 08:19:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:31.321 08:19:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:31.321 08:19:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:31.321 08:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:31.321 08:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.321 08:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.321 nvme0n1 00:32:31.321 08:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.321 08:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:31.321 08:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.321 08:19:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:31.321 08:19:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.321 08:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.321 08:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:31.321 08:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:31.321 08:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:32:31.321 08:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.321 08:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.321 08:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:31.321 08:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:31.321 08:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:32:31.321 08:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:31.321 08:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:31.321 08:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:31.321 08:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:31.321 08:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDA1YTc4Y2JiNDAxYjg3NDdhZTdmZGFiZTlhY2QyNjJnMomG: 00:32:31.321 08:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:N2I4OTk2Nzc1ZDQ4YWJmODI0YWM5NzQ5M2MyNmYzN2IwOTZmNjcxMzQ2YTliN2JiMzk3NjhlNjU5OTUxMDZjNzPBi8Q=: 00:32:31.321 08:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:31.321 08:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:31.321 08:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDA1YTc4Y2JiNDAxYjg3NDdhZTdmZGFiZTlhY2QyNjJnMomG: 00:32:31.321 08:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:N2I4OTk2Nzc1ZDQ4YWJmODI0YWM5NzQ5M2MyNmYzN2IwOTZmNjcxMzQ2YTliN2JiMzk3NjhlNjU5OTUxMDZjNzPBi8Q=: ]] 00:32:31.321 08:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:N2I4OTk2Nzc1ZDQ4YWJmODI0YWM5NzQ5M2MyNmYzN2IwOTZmNjcxMzQ2YTliN2JiMzk3NjhlNjU5OTUxMDZjNzPBi8Q=: 00:32:31.321 08:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:32:31.321 08:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:31.321 08:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:31.321 08:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:31.321 08:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:31.321 08:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:31.322 08:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:31.322 08:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.322 08:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.581 08:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.581 08:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:31.581 08:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:31.581 08:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:31.581 08:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:31.581 08:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:31.581 08:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:31.581 08:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
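The nvmf/common.sh@741-755 lines repeated before every attach are a small address-selection helper: it maps the transport under test to the name of the environment variable holding the right IP, then dereferences that name. A hedged reconstruction of its shape, since only the tcp branch and the names in the trace are visible here:

    # Sketch of the helper traced at nvmf/common.sh@741-755. Variable names follow
    # the trace; behaviour for transports other than tcp/rdma is an assumption.
    get_main_ns_ip() {
        local ip
        local -A ip_candidates=(
            [rdma]=NVMF_FIRST_TARGET_IP
            [tcp]=NVMF_INITIATOR_IP
        )
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}   # @748: ip now holds a variable *name*
        [[ -z ${!ip} ]] && return 1            # @750: check the dereferenced value
        echo "${!ip}"                          # @755: prints 10.0.0.1 in this run
    }
    TEST_TRANSPORT=tcp NVMF_INITIATOR_IP=10.0.0.1 get_main_ns_ip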
00:32:31.581 08:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:31.581 08:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:31.581 08:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:31.581 08:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:31.581 08:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:31.581 08:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.581 08:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.581 nvme0n1 00:32:31.581 08:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.581 08:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:31.581 08:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:31.581 08:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.581 08:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.581 08:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.581 08:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:31.581 08:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:31.581 08:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.581 08:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.581 08:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.581 08:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:31.581 08:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:32:31.581 08:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:31.581 08:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:31.581 08:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:31.581 08:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:31.581 08:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2U0ZTdhMzRlZmY5YjEwM2RmMTgxMTg5M2U0ZDI4YjU2YjE2OTY1YmVlNGQyYWE5FW01Tg==: 00:32:31.581 08:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDRlNDJhNWNlNTMxMDNlZDk3YTk2NWZlNzQ5OWRjOGE3Y2Q0YTY1MmU4ZDJhZDQ2ecc0+w==: 00:32:31.581 08:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:31.581 08:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:31.581 08:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2U0ZTdhMzRlZmY5YjEwM2RmMTgxMTg5M2U0ZDI4YjU2YjE2OTY1YmVlNGQyYWE5FW01Tg==: 00:32:31.581 08:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDRlNDJhNWNlNTMxMDNlZDk3YTk2NWZlNzQ5OWRjOGE3Y2Q0YTY1MmU4ZDJhZDQ2ecc0+w==: ]] 00:32:31.581 08:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDRlNDJhNWNlNTMxMDNlZDk3YTk2NWZlNzQ5OWRjOGE3Y2Q0YTY1MmU4ZDJhZDQ2ecc0+w==: 00:32:31.581 08:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
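Stepping back, the @100/@101/@102 markers locate this section inside a three-level sweep: the digest loop has advanced from sha256 to sha384, the dhgroup loop from ffdhe8192 through ffdhe2048 to ffdhe3072, and the key loop walks slots 0-4 inside each combination. The driver plausibly looks like the sketch below; the array contents list only the values visible in this slice of the trace, and the real script may sweep more:

    # Loop structure implied by auth.sh@100-@104, with stubs so it runs standalone.
    keys=(k0 k1 k2 k3 k4)                          # stand-ins for the five DHHC-1 host keys
    nvmet_auth_set_key()   { echo "target: $*"; }  # stub for the kernel-target half
    connect_authenticate() { echo "host:   $*"; }  # stub for the SPDK-host half
    digests=(sha256 sha384)                        # observed values only
    dhgroups=(ffdhe2048 ffdhe3072 ffdhe6144 ffdhe8192)
    for digest in "${digests[@]}"; do              # @100
        for dhgroup in "${dhgroups[@]}"; do        # @101
            for keyid in "${!keys[@]}"; do         # @102: slots 0..4
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # @103
                connect_authenticate "$digest" "$dhgroup" "$keyid"  # @104
            done
        done
    done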
00:32:31.581 08:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:31.581 08:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:31.581 08:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:31.581 08:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:31.581 08:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:31.581 08:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:31.581 08:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.581 08:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.838 08:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.838 08:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:31.838 08:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:31.838 08:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:31.838 08:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:31.838 08:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:31.838 08:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:31.838 08:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:31.838 08:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:31.838 08:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:31.838 08:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:31.838 08:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:31.838 08:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:31.838 08:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.838 08:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.838 nvme0n1 00:32:31.838 08:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.838 08:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:31.838 08:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:31.838 08:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.838 08:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.838 08:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.838 08:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:31.838 08:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:31.838 08:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.838 08:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.838 08:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.838 08:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:32:31.838 08:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:32:31.838 08:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:31.838 08:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:31.838 08:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:31.838 08:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:31.838 08:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTE5ZDM4MWUwYTA4NWEzN2M2N2NiOTQzODU5NDMzY2Ot+1Qd: 00:32:31.838 08:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGY2N2UyYmU3YmJjZmZhY2EyMWNmOTJiM2Q1YTgzNziM9L+p: 00:32:31.838 08:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:31.838 08:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:31.838 08:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTE5ZDM4MWUwYTA4NWEzN2M2N2NiOTQzODU5NDMzY2Ot+1Qd: 00:32:31.838 08:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGY2N2UyYmU3YmJjZmZhY2EyMWNmOTJiM2Q1YTgzNziM9L+p: ]] 00:32:31.838 08:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGY2N2UyYmU3YmJjZmZhY2EyMWNmOTJiM2Q1YTgzNziM9L+p: 00:32:31.838 08:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:32:31.838 08:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:31.838 08:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:31.838 08:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:31.838 08:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:31.838 08:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:31.838 08:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:31.838 08:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:31.838 08:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:31.838 08:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:31.838 08:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:31.838 08:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:31.838 08:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:31.838 08:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:31.838 08:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:31.838 08:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:31.838 08:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:31.838 08:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:31.838 08:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:31.838 08:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:31.838 08:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:32.096 08:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:32.096 08:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.096 08:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.096 nvme0n1 00:32:32.096 08:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.096 08:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:32.096 08:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.096 08:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.096 08:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:32.096 08:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.096 08:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:32.096 08:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:32.096 08:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.096 08:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.096 08:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.096 08:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:32.096 08:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:32:32.096 08:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:32.096 08:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:32.096 08:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:32.096 08:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:32.096 08:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTBjNmVjYjk0ODA0MGRhYTI4ZDVjMGFmNTgxMDQ0MGEzYmY3MWQ3ODhlZmI5YWI3sB0YlA==: 00:32:32.096 08:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2RkMWNjZDAwYWZmN2MzODkyOTQ3NzI5NmY2YzkwNjAocmBj: 00:32:32.096 08:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:32.096 08:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:32.096 08:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTBjNmVjYjk0ODA0MGRhYTI4ZDVjMGFmNTgxMDQ0MGEzYmY3MWQ3ODhlZmI5YWI3sB0YlA==: 00:32:32.096 08:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2RkMWNjZDAwYWZmN2MzODkyOTQ3NzI5NmY2YzkwNjAocmBj: ]] 00:32:32.096 08:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2RkMWNjZDAwYWZmN2MzODkyOTQ3NzI5NmY2YzkwNjAocmBj: 00:32:32.096 08:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:32:32.096 08:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:32.096 08:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:32.096 08:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:32.096 08:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:32.096 08:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:32.096 08:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:32.096 08:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.096 08:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.352 08:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.352 08:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:32.352 08:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:32.352 08:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:32.352 08:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:32.352 08:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:32.352 08:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:32.352 08:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:32.352 08:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:32.352 08:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:32.352 08:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:32.352 08:19:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:32.352 08:19:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:32.353 08:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.353 08:19:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.353 nvme0n1 00:32:32.353 08:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.353 08:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:32.353 08:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.353 08:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.353 08:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:32.353 08:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.353 08:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:32.353 08:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:32.353 08:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.353 08:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.353 08:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.353 08:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:32.353 08:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:32:32.353 08:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:32.353 08:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:32.353 08:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:32.353 08:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:32.353 08:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:OGQzN2Q0MWVmNWY3ZWViZTcxMjJmMjBkY2FmOGMzMmIyODBkZjM4NmRjMTgzNzQ4NDVkODkzNDcwZmJiYWNkMndWE5U=: 00:32:32.353 08:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:32.353 08:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:32.353 08:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:32.353 08:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGQzN2Q0MWVmNWY3ZWViZTcxMjJmMjBkY2FmOGMzMmIyODBkZjM4NmRjMTgzNzQ4NDVkODkzNDcwZmJiYWNkMndWE5U=: 00:32:32.353 08:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:32.353 08:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:32:32.353 08:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:32.353 08:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:32.353 08:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:32.353 08:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:32.353 08:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:32.353 08:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:32.610 08:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.610 08:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.610 08:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.610 08:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:32.610 08:19:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:32.610 08:19:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:32.610 08:19:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:32.610 08:19:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:32.610 08:19:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:32.610 08:19:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:32.610 08:19:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:32.610 08:19:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:32.610 08:19:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:32.610 08:19:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:32.610 08:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:32.610 08:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.610 08:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.610 nvme0n1 00:32:32.610 08:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.610 08:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:32.610 08:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.610 08:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.610 08:19:24 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:32.610 08:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.610 08:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:32.610 08:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:32.610 08:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.610 08:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.868 08:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.868 08:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:32.868 08:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:32.868 08:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:32:32.868 08:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:32.868 08:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:32.868 08:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:32.868 08:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:32.868 08:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDA1YTc4Y2JiNDAxYjg3NDdhZTdmZGFiZTlhY2QyNjJnMomG: 00:32:32.868 08:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:N2I4OTk2Nzc1ZDQ4YWJmODI0YWM5NzQ5M2MyNmYzN2IwOTZmNjcxMzQ2YTliN2JiMzk3NjhlNjU5OTUxMDZjNzPBi8Q=: 00:32:32.868 08:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:32.868 08:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:32.868 08:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDA1YTc4Y2JiNDAxYjg3NDdhZTdmZGFiZTlhY2QyNjJnMomG: 00:32:32.868 08:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:N2I4OTk2Nzc1ZDQ4YWJmODI0YWM5NzQ5M2MyNmYzN2IwOTZmNjcxMzQ2YTliN2JiMzk3NjhlNjU5OTUxMDZjNzPBi8Q=: ]] 00:32:32.868 08:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:N2I4OTk2Nzc1ZDQ4YWJmODI0YWM5NzQ5M2MyNmYzN2IwOTZmNjcxMzQ2YTliN2JiMzk3NjhlNjU5OTUxMDZjNzPBi8Q=: 00:32:32.868 08:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:32:32.868 08:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:32.868 08:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:32.868 08:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:32.868 08:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:32.868 08:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:32.868 08:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:32.868 08:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.868 08:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:32.868 08:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:32.868 08:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:32.868 08:19:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:32.868 08:19:24 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:32:32.868 08:19:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:32.868 08:19:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:32.868 08:19:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:32.868 08:19:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:32.868 08:19:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:32.868 08:19:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:32.868 08:19:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:32.868 08:19:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:32.868 08:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:32.868 08:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:32.868 08:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.127 nvme0n1 00:32:33.127 08:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.127 08:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:33.127 08:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:33.127 08:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.127 08:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.127 08:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.127 08:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:33.127 08:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:33.127 08:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.127 08:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.127 08:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.127 08:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:33.127 08:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:32:33.127 08:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:33.127 08:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:33.127 08:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:33.127 08:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:33.127 08:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2U0ZTdhMzRlZmY5YjEwM2RmMTgxMTg5M2U0ZDI4YjU2YjE2OTY1YmVlNGQyYWE5FW01Tg==: 00:32:33.127 08:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDRlNDJhNWNlNTMxMDNlZDk3YTk2NWZlNzQ5OWRjOGE3Y2Q0YTY1MmU4ZDJhZDQ2ecc0+w==: 00:32:33.127 08:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:33.127 08:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:33.127 08:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:Y2U0ZTdhMzRlZmY5YjEwM2RmMTgxMTg5M2U0ZDI4YjU2YjE2OTY1YmVlNGQyYWE5FW01Tg==: 00:32:33.127 08:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDRlNDJhNWNlNTMxMDNlZDk3YTk2NWZlNzQ5OWRjOGE3Y2Q0YTY1MmU4ZDJhZDQ2ecc0+w==: ]] 00:32:33.127 08:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDRlNDJhNWNlNTMxMDNlZDk3YTk2NWZlNzQ5OWRjOGE3Y2Q0YTY1MmU4ZDJhZDQ2ecc0+w==: 00:32:33.127 08:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:32:33.127 08:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:33.127 08:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:33.127 08:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:33.127 08:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:33.127 08:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:33.127 08:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:33.127 08:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.127 08:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.127 08:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.127 08:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:33.127 08:19:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:33.127 08:19:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:33.127 08:19:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:33.127 08:19:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:33.127 08:19:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:33.127 08:19:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:33.127 08:19:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:33.127 08:19:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:33.127 08:19:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:33.127 08:19:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:33.127 08:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:33.127 08:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.127 08:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.385 nvme0n1 00:32:33.385 08:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.386 08:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:33.386 08:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.386 08:19:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:33.386 08:19:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.386 08:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.386 08:19:25 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:33.386 08:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:33.386 08:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.386 08:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.386 08:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.386 08:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:33.386 08:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:32:33.386 08:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:33.386 08:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:33.386 08:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:33.386 08:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:33.386 08:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTE5ZDM4MWUwYTA4NWEzN2M2N2NiOTQzODU5NDMzY2Ot+1Qd: 00:32:33.386 08:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGY2N2UyYmU3YmJjZmZhY2EyMWNmOTJiM2Q1YTgzNziM9L+p: 00:32:33.386 08:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:33.386 08:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:33.386 08:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTE5ZDM4MWUwYTA4NWEzN2M2N2NiOTQzODU5NDMzY2Ot+1Qd: 00:32:33.386 08:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGY2N2UyYmU3YmJjZmZhY2EyMWNmOTJiM2Q1YTgzNziM9L+p: ]] 00:32:33.386 08:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGY2N2UyYmU3YmJjZmZhY2EyMWNmOTJiM2Q1YTgzNziM9L+p: 00:32:33.386 08:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:32:33.386 08:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:33.386 08:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:33.386 08:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:33.386 08:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:33.386 08:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:33.386 08:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:33.386 08:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.386 08:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.386 08:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.386 08:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:33.386 08:19:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:33.386 08:19:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:33.386 08:19:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:33.386 08:19:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:33.386 08:19:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:33.386 08:19:25 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:33.386 08:19:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:33.386 08:19:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:33.386 08:19:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:33.386 08:19:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:33.386 08:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:33.386 08:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.386 08:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.644 nvme0n1 00:32:33.644 08:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.644 08:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:33.644 08:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.644 08:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:33.644 08:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.644 08:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.902 08:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:33.902 08:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:33.902 08:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.903 08:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.903 08:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.903 08:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:33.903 08:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:32:33.903 08:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:33.903 08:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:33.903 08:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:33.903 08:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:33.903 08:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTBjNmVjYjk0ODA0MGRhYTI4ZDVjMGFmNTgxMDQ0MGEzYmY3MWQ3ODhlZmI5YWI3sB0YlA==: 00:32:33.903 08:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2RkMWNjZDAwYWZmN2MzODkyOTQ3NzI5NmY2YzkwNjAocmBj: 00:32:33.903 08:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:33.903 08:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:33.903 08:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTBjNmVjYjk0ODA0MGRhYTI4ZDVjMGFmNTgxMDQ0MGEzYmY3MWQ3ODhlZmI5YWI3sB0YlA==: 00:32:33.903 08:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2RkMWNjZDAwYWZmN2MzODkyOTQ3NzI5NmY2YzkwNjAocmBj: ]] 00:32:33.903 08:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2RkMWNjZDAwYWZmN2MzODkyOTQ3NzI5NmY2YzkwNjAocmBj: 00:32:33.903 08:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:32:33.903 08:19:25 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:33.903 08:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:33.903 08:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:33.903 08:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:33.903 08:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:33.903 08:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:33.903 08:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.903 08:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:33.903 08:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:33.903 08:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:33.903 08:19:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:33.903 08:19:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:33.903 08:19:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:33.903 08:19:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:33.903 08:19:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:33.903 08:19:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:33.903 08:19:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:33.903 08:19:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:33.903 08:19:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:33.903 08:19:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:33.903 08:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:33.903 08:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:33.903 08:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.162 nvme0n1 00:32:34.162 08:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.162 08:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:34.162 08:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.162 08:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:34.162 08:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.162 08:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.162 08:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:34.162 08:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:34.162 08:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.162 08:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.162 08:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.162 08:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:32:34.162 08:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:32:34.162 08:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:34.162 08:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:34.162 08:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:34.162 08:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:34.162 08:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGQzN2Q0MWVmNWY3ZWViZTcxMjJmMjBkY2FmOGMzMmIyODBkZjM4NmRjMTgzNzQ4NDVkODkzNDcwZmJiYWNkMndWE5U=: 00:32:34.162 08:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:34.162 08:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:34.162 08:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:34.162 08:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGQzN2Q0MWVmNWY3ZWViZTcxMjJmMjBkY2FmOGMzMmIyODBkZjM4NmRjMTgzNzQ4NDVkODkzNDcwZmJiYWNkMndWE5U=: 00:32:34.162 08:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:34.162 08:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:32:34.162 08:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:34.162 08:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:34.162 08:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:34.162 08:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:34.162 08:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:34.162 08:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:34.162 08:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.162 08:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.162 08:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.162 08:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:34.162 08:19:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:34.162 08:19:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:34.162 08:19:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:34.162 08:19:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:34.162 08:19:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:34.162 08:19:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:34.162 08:19:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:34.162 08:19:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:34.162 08:19:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:34.162 08:19:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:34.162 08:19:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:34.162 08:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:32:34.162 08:19:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.420 nvme0n1 00:32:34.420 08:19:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.420 08:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:34.420 08:19:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.420 08:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:34.420 08:19:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.420 08:19:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.420 08:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:34.420 08:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:34.420 08:19:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.420 08:19:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.420 08:19:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.420 08:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:34.420 08:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:34.420 08:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:32:34.420 08:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:34.420 08:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:34.420 08:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:34.420 08:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:34.420 08:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDA1YTc4Y2JiNDAxYjg3NDdhZTdmZGFiZTlhY2QyNjJnMomG: 00:32:34.420 08:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:N2I4OTk2Nzc1ZDQ4YWJmODI0YWM5NzQ5M2MyNmYzN2IwOTZmNjcxMzQ2YTliN2JiMzk3NjhlNjU5OTUxMDZjNzPBi8Q=: 00:32:34.420 08:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:34.420 08:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:34.421 08:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDA1YTc4Y2JiNDAxYjg3NDdhZTdmZGFiZTlhY2QyNjJnMomG: 00:32:34.421 08:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:N2I4OTk2Nzc1ZDQ4YWJmODI0YWM5NzQ5M2MyNmYzN2IwOTZmNjcxMzQ2YTliN2JiMzk3NjhlNjU5OTUxMDZjNzPBi8Q=: ]] 00:32:34.421 08:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:N2I4OTk2Nzc1ZDQ4YWJmODI0YWM5NzQ5M2MyNmYzN2IwOTZmNjcxMzQ2YTliN2JiMzk3NjhlNjU5OTUxMDZjNzPBi8Q=: 00:32:34.421 08:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:32:34.421 08:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:34.421 08:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:34.421 08:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:34.421 08:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:34.421 08:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:34.421 08:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:32:34.421 08:19:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.421 08:19:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.679 08:19:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:34.679 08:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:34.679 08:19:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:34.679 08:19:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:34.679 08:19:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:34.679 08:19:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:34.679 08:19:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:34.679 08:19:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:34.679 08:19:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:34.679 08:19:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:34.679 08:19:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:34.679 08:19:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:34.679 08:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:34.679 08:19:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:34.679 08:19:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.245 nvme0n1 00:32:35.245 08:19:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.245 08:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:35.245 08:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:35.245 08:19:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.245 08:19:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.245 08:19:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.245 08:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:35.245 08:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:35.245 08:19:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.245 08:19:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.245 08:19:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.245 08:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:35.245 08:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:32:35.245 08:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:35.245 08:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:35.245 08:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:35.245 08:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:35.245 08:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:Y2U0ZTdhMzRlZmY5YjEwM2RmMTgxMTg5M2U0ZDI4YjU2YjE2OTY1YmVlNGQyYWE5FW01Tg==: 00:32:35.245 08:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDRlNDJhNWNlNTMxMDNlZDk3YTk2NWZlNzQ5OWRjOGE3Y2Q0YTY1MmU4ZDJhZDQ2ecc0+w==: 00:32:35.245 08:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:35.245 08:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:35.245 08:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2U0ZTdhMzRlZmY5YjEwM2RmMTgxMTg5M2U0ZDI4YjU2YjE2OTY1YmVlNGQyYWE5FW01Tg==: 00:32:35.245 08:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDRlNDJhNWNlNTMxMDNlZDk3YTk2NWZlNzQ5OWRjOGE3Y2Q0YTY1MmU4ZDJhZDQ2ecc0+w==: ]] 00:32:35.245 08:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDRlNDJhNWNlNTMxMDNlZDk3YTk2NWZlNzQ5OWRjOGE3Y2Q0YTY1MmU4ZDJhZDQ2ecc0+w==: 00:32:35.245 08:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:32:35.245 08:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:35.245 08:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:35.245 08:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:35.245 08:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:35.245 08:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:35.245 08:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:35.245 08:19:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.245 08:19:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.245 08:19:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.245 08:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:35.245 08:19:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:35.245 08:19:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:35.245 08:19:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:35.245 08:19:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:35.245 08:19:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:35.245 08:19:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:35.245 08:19:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:35.245 08:19:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:35.246 08:19:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:35.246 08:19:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:35.246 08:19:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:35.246 08:19:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.246 08:19:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.813 nvme0n1 00:32:35.813 08:19:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.813 08:19:27 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:35.813 08:19:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.813 08:19:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.813 08:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:35.813 08:19:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.813 08:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:35.813 08:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:35.813 08:19:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.813 08:19:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.813 08:19:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.813 08:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:35.813 08:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:32:35.813 08:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:35.813 08:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:35.813 08:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:35.813 08:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:35.813 08:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTE5ZDM4MWUwYTA4NWEzN2M2N2NiOTQzODU5NDMzY2Ot+1Qd: 00:32:35.813 08:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGY2N2UyYmU3YmJjZmZhY2EyMWNmOTJiM2Q1YTgzNziM9L+p: 00:32:35.813 08:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:35.813 08:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:35.813 08:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTE5ZDM4MWUwYTA4NWEzN2M2N2NiOTQzODU5NDMzY2Ot+1Qd: 00:32:35.813 08:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGY2N2UyYmU3YmJjZmZhY2EyMWNmOTJiM2Q1YTgzNziM9L+p: ]] 00:32:35.813 08:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGY2N2UyYmU3YmJjZmZhY2EyMWNmOTJiM2Q1YTgzNziM9L+p: 00:32:35.813 08:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:32:35.813 08:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:35.813 08:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:35.813 08:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:35.813 08:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:35.813 08:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:35.813 08:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:35.813 08:19:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.813 08:19:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:35.813 08:19:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:35.813 08:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:35.813 08:19:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:32:35.813 08:19:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:35.813 08:19:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:35.813 08:19:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:35.813 08:19:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:35.813 08:19:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:35.813 08:19:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:35.813 08:19:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:35.813 08:19:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:35.813 08:19:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:35.813 08:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:35.813 08:19:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:35.814 08:19:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.379 nvme0n1 00:32:36.379 08:19:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.379 08:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:36.379 08:19:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.379 08:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:36.379 08:19:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.379 08:19:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.379 08:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:36.380 08:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:36.380 08:19:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.380 08:19:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.380 08:19:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.380 08:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:36.380 08:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:32:36.380 08:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:36.380 08:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:36.380 08:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:36.380 08:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:36.380 08:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTBjNmVjYjk0ODA0MGRhYTI4ZDVjMGFmNTgxMDQ0MGEzYmY3MWQ3ODhlZmI5YWI3sB0YlA==: 00:32:36.380 08:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2RkMWNjZDAwYWZmN2MzODkyOTQ3NzI5NmY2YzkwNjAocmBj: 00:32:36.380 08:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:36.380 08:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:36.380 08:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:OTBjNmVjYjk0ODA0MGRhYTI4ZDVjMGFmNTgxMDQ0MGEzYmY3MWQ3ODhlZmI5YWI3sB0YlA==: 00:32:36.380 08:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2RkMWNjZDAwYWZmN2MzODkyOTQ3NzI5NmY2YzkwNjAocmBj: ]] 00:32:36.380 08:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2RkMWNjZDAwYWZmN2MzODkyOTQ3NzI5NmY2YzkwNjAocmBj: 00:32:36.380 08:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:32:36.380 08:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:36.380 08:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:36.380 08:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:36.380 08:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:36.380 08:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:36.380 08:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:36.380 08:19:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.380 08:19:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.380 08:19:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.380 08:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:36.380 08:19:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:36.380 08:19:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:36.380 08:19:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:36.380 08:19:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:36.380 08:19:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:36.380 08:19:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:36.380 08:19:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:36.380 08:19:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:36.380 08:19:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:36.380 08:19:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:36.380 08:19:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:36.380 08:19:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.380 08:19:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.946 nvme0n1 00:32:36.946 08:19:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.946 08:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:36.946 08:19:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.946 08:19:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.946 08:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:36.946 08:19:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.946 08:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:32:36.946 08:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:36.946 08:19:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.946 08:19:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.946 08:19:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.946 08:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:36.946 08:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:32:36.946 08:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:36.946 08:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:36.946 08:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:36.946 08:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:36.946 08:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGQzN2Q0MWVmNWY3ZWViZTcxMjJmMjBkY2FmOGMzMmIyODBkZjM4NmRjMTgzNzQ4NDVkODkzNDcwZmJiYWNkMndWE5U=: 00:32:36.946 08:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:36.946 08:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:36.946 08:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:36.946 08:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGQzN2Q0MWVmNWY3ZWViZTcxMjJmMjBkY2FmOGMzMmIyODBkZjM4NmRjMTgzNzQ4NDVkODkzNDcwZmJiYWNkMndWE5U=: 00:32:36.946 08:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:36.946 08:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:32:36.946 08:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:36.946 08:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:36.946 08:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:36.946 08:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:36.946 08:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:36.946 08:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:36.946 08:19:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.946 08:19:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:36.946 08:19:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:36.946 08:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:36.946 08:19:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:36.946 08:19:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:36.946 08:19:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:36.946 08:19:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:36.946 08:19:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:36.946 08:19:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:36.946 08:19:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:36.946 08:19:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
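Annotation: the @42-@51 nvmet_auth_set_key traces above are the target-side half of each cycle; they install the per-keyid DHHC-1 secret for the host NQN before the initiator dials in. A minimal sketch of what those echoes amount to, assuming the usual kernel-nvmet configfs layout for the host entry (the trace records only the echo commands, not their redirection targets, so the paths below are assumptions):

nvmet_auth_set_key() {
    local digest=$1 dhgroup=$2 keyid=$3
    local key=${keys[keyid]} ckey=${ckeys[keyid]}
    # hostdir is assumed to be /sys/kernel/config/nvmet/hosts/<hostnqn>
    echo "hmac($digest)" > "$hostdir/dhchap_hash"     # e.g. hmac(sha384)
    echo "$dhgroup"      > "$hostdir/dhchap_dhgroup"  # e.g. ffdhe6144
    echo "$key"          > "$hostdir/dhchap_key"      # DHHC-1:0N:...
    # keyid 4 has no controller key, so bidirectional auth is skipped there
    [[ -z $ckey ]] || echo "$ckey" > "$hostdir/dhchap_ctrl_key"
}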
00:32:36.946 08:19:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:36.946 08:19:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:36.946 08:19:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:36.946 08:19:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:36.946 08:19:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.514 nvme0n1 00:32:37.514 08:19:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.514 08:19:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:37.514 08:19:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:37.514 08:19:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.514 08:19:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.514 08:19:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.514 08:19:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:37.514 08:19:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:37.514 08:19:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.514 08:19:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.514 08:19:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.514 08:19:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:37.514 08:19:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:37.514 08:19:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:32:37.514 08:19:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:37.514 08:19:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:37.514 08:19:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:37.514 08:19:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:37.514 08:19:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDA1YTc4Y2JiNDAxYjg3NDdhZTdmZGFiZTlhY2QyNjJnMomG: 00:32:37.514 08:19:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:N2I4OTk2Nzc1ZDQ4YWJmODI0YWM5NzQ5M2MyNmYzN2IwOTZmNjcxMzQ2YTliN2JiMzk3NjhlNjU5OTUxMDZjNzPBi8Q=: 00:32:37.514 08:19:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:37.514 08:19:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:37.514 08:19:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDA1YTc4Y2JiNDAxYjg3NDdhZTdmZGFiZTlhY2QyNjJnMomG: 00:32:37.514 08:19:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:N2I4OTk2Nzc1ZDQ4YWJmODI0YWM5NzQ5M2MyNmYzN2IwOTZmNjcxMzQ2YTliN2JiMzk3NjhlNjU5OTUxMDZjNzPBi8Q=: ]] 00:32:37.514 08:19:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:N2I4OTk2Nzc1ZDQ4YWJmODI0YWM5NzQ5M2MyNmYzN2IwOTZmNjcxMzQ2YTliN2JiMzk3NjhlNjU5OTUxMDZjNzPBi8Q=: 00:32:37.514 08:19:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:32:37.514 08:19:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
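Annotation: the initiator side of each cycle is driven entirely through SPDK RPCs, visible at host/auth.sh@60-@65 in the trace. Condensed into straight-line form, using only the commands and arguments the log itself shows (rpc_cmd is the harness wrapper around SPDK's rpc.py; the digest, dhgroup, and key values are the ones from the ffdhe8192 keyid-0 pass that starts above):

# restrict the host to exactly one digest/DH-group pair, then authenticate
rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
ip=$(get_main_ns_ip)   # tcp transport resolves NVMF_INITIATOR_IP, 10.0.0.1 here
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a "$ip" -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0
# a successful DH-HMAC-CHAP handshake leaves exactly one controller behind
[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
rpc_cmd bdev_nvme_detach_controller nvme0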
00:32:37.514 08:19:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:37.514 08:19:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:37.514 08:19:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:37.514 08:19:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:37.514 08:19:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:37.514 08:19:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.514 08:19:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:37.514 08:19:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.514 08:19:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:37.514 08:19:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:37.514 08:19:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:37.514 08:19:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:37.514 08:19:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:37.514 08:19:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:37.514 08:19:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:37.514 08:19:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:37.514 08:19:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:37.514 08:19:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:37.514 08:19:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:37.514 08:19:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:37.514 08:19:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.514 08:19:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.888 nvme0n1 00:32:38.888 08:19:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.888 08:19:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:38.888 08:19:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.888 08:19:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:38.888 08:19:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.888 08:19:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.888 08:19:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:38.888 08:19:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:38.888 08:19:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.888 08:19:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.888 08:19:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.888 08:19:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:38.888 08:19:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:32:38.888 08:19:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:38.888 08:19:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:38.889 08:19:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:38.889 08:19:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:38.889 08:19:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2U0ZTdhMzRlZmY5YjEwM2RmMTgxMTg5M2U0ZDI4YjU2YjE2OTY1YmVlNGQyYWE5FW01Tg==: 00:32:38.889 08:19:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDRlNDJhNWNlNTMxMDNlZDk3YTk2NWZlNzQ5OWRjOGE3Y2Q0YTY1MmU4ZDJhZDQ2ecc0+w==: 00:32:38.889 08:19:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:38.889 08:19:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:38.889 08:19:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2U0ZTdhMzRlZmY5YjEwM2RmMTgxMTg5M2U0ZDI4YjU2YjE2OTY1YmVlNGQyYWE5FW01Tg==: 00:32:38.889 08:19:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDRlNDJhNWNlNTMxMDNlZDk3YTk2NWZlNzQ5OWRjOGE3Y2Q0YTY1MmU4ZDJhZDQ2ecc0+w==: ]] 00:32:38.889 08:19:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDRlNDJhNWNlNTMxMDNlZDk3YTk2NWZlNzQ5OWRjOGE3Y2Q0YTY1MmU4ZDJhZDQ2ecc0+w==: 00:32:38.889 08:19:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:32:38.889 08:19:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:38.889 08:19:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:38.889 08:19:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:38.889 08:19:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:38.889 08:19:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:38.889 08:19:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:38.889 08:19:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.889 08:19:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.889 08:19:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:38.889 08:19:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:38.889 08:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:38.889 08:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:38.889 08:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:38.889 08:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:38.889 08:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:38.889 08:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:38.889 08:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:38.889 08:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:38.889 08:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:38.889 08:19:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:38.889 08:19:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:38.889 08:19:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:38.889 08:19:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.825 nvme0n1 00:32:39.825 08:19:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.825 08:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:39.825 08:19:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.825 08:19:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.825 08:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:39.825 08:19:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.825 08:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:39.825 08:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:39.825 08:19:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.825 08:19:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.825 08:19:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.825 08:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:39.825 08:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:32:39.825 08:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:39.825 08:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:39.825 08:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:39.825 08:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:39.825 08:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTE5ZDM4MWUwYTA4NWEzN2M2N2NiOTQzODU5NDMzY2Ot+1Qd: 00:32:39.825 08:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGY2N2UyYmU3YmJjZmZhY2EyMWNmOTJiM2Q1YTgzNziM9L+p: 00:32:39.825 08:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:39.825 08:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:39.825 08:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTE5ZDM4MWUwYTA4NWEzN2M2N2NiOTQzODU5NDMzY2Ot+1Qd: 00:32:39.825 08:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGY2N2UyYmU3YmJjZmZhY2EyMWNmOTJiM2Q1YTgzNziM9L+p: ]] 00:32:39.825 08:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGY2N2UyYmU3YmJjZmZhY2EyMWNmOTJiM2Q1YTgzNziM9L+p: 00:32:39.825 08:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:32:39.825 08:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:39.825 08:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:39.825 08:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:39.825 08:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:39.825 08:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:39.825 08:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:32:39.825 08:19:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.825 08:19:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.825 08:19:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.825 08:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:39.825 08:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:39.825 08:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:39.825 08:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:39.826 08:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:39.826 08:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:39.826 08:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:39.826 08:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:39.826 08:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:39.826 08:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:39.826 08:19:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:39.826 08:19:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:39.826 08:19:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.826 08:19:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.777 nvme0n1 00:32:40.777 08:19:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.777 08:19:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:40.777 08:19:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.777 08:19:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:40.777 08:19:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.777 08:19:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.777 08:19:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:40.777 08:19:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:40.777 08:19:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.777 08:19:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.777 08:19:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.777 08:19:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:40.777 08:19:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:32:40.777 08:19:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:40.777 08:19:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:40.777 08:19:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:40.777 08:19:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:40.777 08:19:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:OTBjNmVjYjk0ODA0MGRhYTI4ZDVjMGFmNTgxMDQ0MGEzYmY3MWQ3ODhlZmI5YWI3sB0YlA==: 00:32:40.777 08:19:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2RkMWNjZDAwYWZmN2MzODkyOTQ3NzI5NmY2YzkwNjAocmBj: 00:32:40.777 08:19:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:40.777 08:19:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:40.777 08:19:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTBjNmVjYjk0ODA0MGRhYTI4ZDVjMGFmNTgxMDQ0MGEzYmY3MWQ3ODhlZmI5YWI3sB0YlA==: 00:32:40.778 08:19:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2RkMWNjZDAwYWZmN2MzODkyOTQ3NzI5NmY2YzkwNjAocmBj: ]] 00:32:40.778 08:19:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2RkMWNjZDAwYWZmN2MzODkyOTQ3NzI5NmY2YzkwNjAocmBj: 00:32:40.778 08:19:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:32:40.778 08:19:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:40.778 08:19:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:40.778 08:19:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:40.778 08:19:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:40.778 08:19:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:40.778 08:19:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:40.778 08:19:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.778 08:19:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:40.778 08:19:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:40.778 08:19:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:40.778 08:19:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:40.778 08:19:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:40.778 08:19:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:40.778 08:19:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:40.778 08:19:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:40.778 08:19:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:40.778 08:19:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:40.778 08:19:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:40.778 08:19:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:40.778 08:19:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:40.778 08:19:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:40.778 08:19:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:40.778 08:19:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.710 nvme0n1 00:32:41.710 08:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.710 08:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:32:41.710 08:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.710 08:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.710 08:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:41.710 08:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.710 08:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:41.710 08:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:41.710 08:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.710 08:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.710 08:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.710 08:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:41.710 08:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:32:41.710 08:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:41.710 08:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:41.710 08:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:41.710 08:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:41.710 08:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGQzN2Q0MWVmNWY3ZWViZTcxMjJmMjBkY2FmOGMzMmIyODBkZjM4NmRjMTgzNzQ4NDVkODkzNDcwZmJiYWNkMndWE5U=: 00:32:41.710 08:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:41.710 08:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:41.710 08:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:41.710 08:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGQzN2Q0MWVmNWY3ZWViZTcxMjJmMjBkY2FmOGMzMmIyODBkZjM4NmRjMTgzNzQ4NDVkODkzNDcwZmJiYWNkMndWE5U=: 00:32:41.710 08:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:41.710 08:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:32:41.710 08:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:41.710 08:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:41.710 08:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:41.710 08:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:41.710 08:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:41.710 08:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:32:41.710 08:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.710 08:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:41.710 08:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:41.710 08:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:41.710 08:19:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:41.710 08:19:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:41.710 08:19:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:41.710 08:19:33 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:41.710 08:19:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:41.710 08:19:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:41.710 08:19:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:41.710 08:19:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:41.710 08:19:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:41.710 08:19:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:41.710 08:19:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:41.710 08:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:41.710 08:19:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.661 nvme0n1 00:32:42.661 08:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.661 08:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:42.661 08:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:42.661 08:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.661 08:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.661 08:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.661 08:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:42.661 08:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:42.661 08:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.661 08:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.661 08:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.661 08:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:32:42.661 08:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:42.661 08:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:42.661 08:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:32:42.661 08:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:42.661 08:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:42.661 08:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:42.661 08:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:42.661 08:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDA1YTc4Y2JiNDAxYjg3NDdhZTdmZGFiZTlhY2QyNjJnMomG: 00:32:42.661 08:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:N2I4OTk2Nzc1ZDQ4YWJmODI0YWM5NzQ5M2MyNmYzN2IwOTZmNjcxMzQ2YTliN2JiMzk3NjhlNjU5OTUxMDZjNzPBi8Q=: 00:32:42.661 08:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:42.661 08:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:42.661 08:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MDA1YTc4Y2JiNDAxYjg3NDdhZTdmZGFiZTlhY2QyNjJnMomG: 00:32:42.661 08:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:N2I4OTk2Nzc1ZDQ4YWJmODI0YWM5NzQ5M2MyNmYzN2IwOTZmNjcxMzQ2YTliN2JiMzk3NjhlNjU5OTUxMDZjNzPBi8Q=: ]] 00:32:42.661 08:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:N2I4OTk2Nzc1ZDQ4YWJmODI0YWM5NzQ5M2MyNmYzN2IwOTZmNjcxMzQ2YTliN2JiMzk3NjhlNjU5OTUxMDZjNzPBi8Q=: 00:32:42.661 08:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:32:42.661 08:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:42.661 08:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:42.661 08:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:42.661 08:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:42.661 08:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:42.661 08:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:42.661 08:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.661 08:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.919 08:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.919 08:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:42.919 08:19:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:42.919 08:19:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:42.919 08:19:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:42.919 08:19:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:42.919 08:19:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:42.919 08:19:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:42.919 08:19:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:42.920 08:19:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:42.920 08:19:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:42.920 08:19:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:42.920 08:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:42.920 08:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.920 08:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.920 nvme0n1 00:32:42.920 08:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.920 08:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:42.920 08:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.920 08:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:42.920 08:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.920 08:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.920 08:19:34 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:42.920 08:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:42.920 08:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.920 08:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.920 08:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.920 08:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:42.920 08:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:32:42.920 08:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:42.920 08:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:42.920 08:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:42.920 08:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:42.920 08:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2U0ZTdhMzRlZmY5YjEwM2RmMTgxMTg5M2U0ZDI4YjU2YjE2OTY1YmVlNGQyYWE5FW01Tg==: 00:32:42.920 08:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDRlNDJhNWNlNTMxMDNlZDk3YTk2NWZlNzQ5OWRjOGE3Y2Q0YTY1MmU4ZDJhZDQ2ecc0+w==: 00:32:42.920 08:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:42.920 08:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:42.920 08:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2U0ZTdhMzRlZmY5YjEwM2RmMTgxMTg5M2U0ZDI4YjU2YjE2OTY1YmVlNGQyYWE5FW01Tg==: 00:32:42.920 08:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDRlNDJhNWNlNTMxMDNlZDk3YTk2NWZlNzQ5OWRjOGE3Y2Q0YTY1MmU4ZDJhZDQ2ecc0+w==: ]] 00:32:42.920 08:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDRlNDJhNWNlNTMxMDNlZDk3YTk2NWZlNzQ5OWRjOGE3Y2Q0YTY1MmU4ZDJhZDQ2ecc0+w==: 00:32:42.920 08:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:32:42.920 08:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:42.920 08:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:42.920 08:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:42.920 08:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:42.920 08:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:42.920 08:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:42.920 08:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.920 08:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.920 08:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.920 08:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:42.920 08:19:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:42.920 08:19:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:42.920 08:19:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:42.920 08:19:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:42.920 08:19:34 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:42.920 08:19:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:42.920 08:19:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:42.920 08:19:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:42.920 08:19:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:42.920 08:19:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:42.920 08:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:42.920 08:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.920 08:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.178 nvme0n1 00:32:43.178 08:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.178 08:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:43.178 08:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:43.178 08:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.178 08:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.178 08:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.178 08:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:43.178 08:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:43.178 08:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.178 08:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.178 08:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.178 08:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:43.178 08:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:32:43.178 08:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:43.178 08:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:43.178 08:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:43.178 08:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:43.178 08:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTE5ZDM4MWUwYTA4NWEzN2M2N2NiOTQzODU5NDMzY2Ot+1Qd: 00:32:43.178 08:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGY2N2UyYmU3YmJjZmZhY2EyMWNmOTJiM2Q1YTgzNziM9L+p: 00:32:43.178 08:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:43.178 08:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:43.178 08:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTE5ZDM4MWUwYTA4NWEzN2M2N2NiOTQzODU5NDMzY2Ot+1Qd: 00:32:43.178 08:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGY2N2UyYmU3YmJjZmZhY2EyMWNmOTJiM2Q1YTgzNziM9L+p: ]] 00:32:43.178 08:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGY2N2UyYmU3YmJjZmZhY2EyMWNmOTJiM2Q1YTgzNziM9L+p: 00:32:43.178 08:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:32:43.178 08:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:43.178 08:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:43.178 08:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:43.178 08:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:43.178 08:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:43.178 08:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:43.178 08:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.178 08:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.178 08:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.178 08:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:43.178 08:19:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:43.178 08:19:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:43.178 08:19:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:43.178 08:19:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:43.178 08:19:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:43.178 08:19:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:43.178 08:19:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:43.178 08:19:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:43.178 08:19:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:43.178 08:19:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:43.178 08:19:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:43.178 08:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.178 08:19:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.437 nvme0n1 00:32:43.437 08:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.437 08:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:43.437 08:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.437 08:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.437 08:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:43.437 08:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.437 08:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:43.437 08:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:43.437 08:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.437 08:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.437 08:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.437 08:19:35 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:43.437 08:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:32:43.437 08:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:43.437 08:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:43.437 08:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:43.437 08:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:43.437 08:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTBjNmVjYjk0ODA0MGRhYTI4ZDVjMGFmNTgxMDQ0MGEzYmY3MWQ3ODhlZmI5YWI3sB0YlA==: 00:32:43.437 08:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2RkMWNjZDAwYWZmN2MzODkyOTQ3NzI5NmY2YzkwNjAocmBj: 00:32:43.437 08:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:43.437 08:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:43.437 08:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTBjNmVjYjk0ODA0MGRhYTI4ZDVjMGFmNTgxMDQ0MGEzYmY3MWQ3ODhlZmI5YWI3sB0YlA==: 00:32:43.437 08:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2RkMWNjZDAwYWZmN2MzODkyOTQ3NzI5NmY2YzkwNjAocmBj: ]] 00:32:43.437 08:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2RkMWNjZDAwYWZmN2MzODkyOTQ3NzI5NmY2YzkwNjAocmBj: 00:32:43.437 08:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:32:43.437 08:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:43.437 08:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:43.437 08:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:43.437 08:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:43.437 08:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:43.437 08:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:43.437 08:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.437 08:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.437 08:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.437 08:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:43.437 08:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:43.437 08:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:43.437 08:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:43.437 08:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:43.437 08:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:43.437 08:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:43.437 08:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:43.437 08:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:43.437 08:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:43.437 08:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:43.437 08:19:35 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:43.437 08:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.437 08:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.695 nvme0n1 00:32:43.695 08:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.695 08:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:43.695 08:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.695 08:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.695 08:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:43.695 08:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.695 08:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:43.695 08:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:43.695 08:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.695 08:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.695 08:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.695 08:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:43.695 08:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:32:43.695 08:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:43.695 08:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:43.695 08:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:43.695 08:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:43.695 08:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGQzN2Q0MWVmNWY3ZWViZTcxMjJmMjBkY2FmOGMzMmIyODBkZjM4NmRjMTgzNzQ4NDVkODkzNDcwZmJiYWNkMndWE5U=: 00:32:43.695 08:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:43.695 08:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:43.695 08:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:43.695 08:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGQzN2Q0MWVmNWY3ZWViZTcxMjJmMjBkY2FmOGMzMmIyODBkZjM4NmRjMTgzNzQ4NDVkODkzNDcwZmJiYWNkMndWE5U=: 00:32:43.695 08:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:43.695 08:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:32:43.695 08:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:43.695 08:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:43.695 08:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:43.695 08:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:43.695 08:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:43.695 08:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:32:43.695 08:19:35 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.696 08:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.696 08:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.696 08:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:43.696 08:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:43.696 08:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:43.696 08:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:43.696 08:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:43.696 08:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:43.696 08:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:43.696 08:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:43.696 08:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:43.696 08:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:43.696 08:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:43.696 08:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:43.696 08:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.696 08:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.954 nvme0n1 00:32:43.954 08:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.954 08:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:43.954 08:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.954 08:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.954 08:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:43.954 08:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.954 08:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:43.954 08:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:43.954 08:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.954 08:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.954 08:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.954 08:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:43.954 08:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:43.954 08:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:32:43.954 08:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:43.954 08:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:43.954 08:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:43.954 08:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:43.954 08:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MDA1YTc4Y2JiNDAxYjg3NDdhZTdmZGFiZTlhY2QyNjJnMomG: 00:32:43.954 08:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:N2I4OTk2Nzc1ZDQ4YWJmODI0YWM5NzQ5M2MyNmYzN2IwOTZmNjcxMzQ2YTliN2JiMzk3NjhlNjU5OTUxMDZjNzPBi8Q=: 00:32:43.954 08:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:43.954 08:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:43.954 08:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDA1YTc4Y2JiNDAxYjg3NDdhZTdmZGFiZTlhY2QyNjJnMomG: 00:32:43.954 08:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:N2I4OTk2Nzc1ZDQ4YWJmODI0YWM5NzQ5M2MyNmYzN2IwOTZmNjcxMzQ2YTliN2JiMzk3NjhlNjU5OTUxMDZjNzPBi8Q=: ]] 00:32:43.954 08:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:N2I4OTk2Nzc1ZDQ4YWJmODI0YWM5NzQ5M2MyNmYzN2IwOTZmNjcxMzQ2YTliN2JiMzk3NjhlNjU5OTUxMDZjNzPBi8Q=: 00:32:43.954 08:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:32:43.954 08:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:43.954 08:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:43.954 08:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:43.954 08:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:43.954 08:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:43.954 08:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:43.954 08:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.954 08:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.954 08:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.954 08:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:43.954 08:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:43.954 08:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:43.954 08:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:43.954 08:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:43.955 08:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:43.955 08:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:43.955 08:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:43.955 08:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:43.955 08:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:43.955 08:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:43.955 08:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:43.955 08:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.955 08:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.212 nvme0n1 00:32:44.212 08:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.212 
08:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:44.212 08:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:44.212 08:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.212 08:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.212 08:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.212 08:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:44.212 08:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:44.212 08:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.212 08:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.212 08:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.212 08:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:44.212 08:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:32:44.212 08:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:44.212 08:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:44.212 08:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:44.212 08:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:44.212 08:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2U0ZTdhMzRlZmY5YjEwM2RmMTgxMTg5M2U0ZDI4YjU2YjE2OTY1YmVlNGQyYWE5FW01Tg==: 00:32:44.212 08:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDRlNDJhNWNlNTMxMDNlZDk3YTk2NWZlNzQ5OWRjOGE3Y2Q0YTY1MmU4ZDJhZDQ2ecc0+w==: 00:32:44.212 08:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:44.212 08:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:44.212 08:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2U0ZTdhMzRlZmY5YjEwM2RmMTgxMTg5M2U0ZDI4YjU2YjE2OTY1YmVlNGQyYWE5FW01Tg==: 00:32:44.212 08:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDRlNDJhNWNlNTMxMDNlZDk3YTk2NWZlNzQ5OWRjOGE3Y2Q0YTY1MmU4ZDJhZDQ2ecc0+w==: ]] 00:32:44.212 08:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDRlNDJhNWNlNTMxMDNlZDk3YTk2NWZlNzQ5OWRjOGE3Y2Q0YTY1MmU4ZDJhZDQ2ecc0+w==: 00:32:44.212 08:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:32:44.212 08:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:44.212 08:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:44.212 08:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:44.212 08:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:44.212 08:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:44.212 08:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:44.212 08:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.212 08:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.212 08:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.212 08:19:35 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:44.212 08:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:44.212 08:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:44.212 08:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:44.212 08:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:44.212 08:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:44.212 08:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:44.212 08:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:44.212 08:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:44.212 08:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:44.212 08:19:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:44.212 08:19:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:44.212 08:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.212 08:19:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.470 nvme0n1 00:32:44.470 08:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.470 08:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:44.470 08:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.470 08:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:44.470 08:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.470 08:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.470 08:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:44.470 08:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:44.470 08:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.470 08:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.470 08:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.470 08:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:44.470 08:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:32:44.470 08:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:44.470 08:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:44.470 08:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:44.470 08:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:44.470 08:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTE5ZDM4MWUwYTA4NWEzN2M2N2NiOTQzODU5NDMzY2Ot+1Qd: 00:32:44.470 08:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGY2N2UyYmU3YmJjZmZhY2EyMWNmOTJiM2Q1YTgzNziM9L+p: 00:32:44.470 08:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:44.470 08:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
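[editor's note] The echo calls traced at auth.sh@48-50 are where nvmet_auth_set_key programs the kernel nvmet target for the current (digest, dhgroup, keyid) tuple. A minimal sketch of the equivalent manual steps, assuming the stock Linux nvmet configfs layout; the directory and attribute names below are an assumption based on that layout, not taken from this trace:
# program DH-HMAC-CHAP parameters for one host entry on the nvmet target side
hostnqn=nqn.2024-02.io.spdk:host0
host=/sys/kernel/config/nvmet/hosts/$hostnqn
echo 'hmac(sha512)' > "$host/dhchap_hash"     # digest echoed at auth.sh@48
echo ffdhe3072 > "$host/dhchap_dhgroup"       # DH group echoed at auth.sh@49
echo 'DHHC-1:01:MTE5ZDM4MWUwYTA4NWEzN2M2N2NiOTQzODU5NDMzY2Ot+1Qd:' > "$host/dhchap_key"
# A controller (bidirectional) secret is only written when ckey is non-empty,
# mirroring the [[ -z $ckey ]] guard traced at auth.sh@51.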
00:32:44.470 08:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTE5ZDM4MWUwYTA4NWEzN2M2N2NiOTQzODU5NDMzY2Ot+1Qd: 00:32:44.470 08:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGY2N2UyYmU3YmJjZmZhY2EyMWNmOTJiM2Q1YTgzNziM9L+p: ]] 00:32:44.470 08:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGY2N2UyYmU3YmJjZmZhY2EyMWNmOTJiM2Q1YTgzNziM9L+p: 00:32:44.470 08:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:32:44.470 08:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:44.470 08:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:44.470 08:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:44.470 08:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:44.470 08:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:44.470 08:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:44.470 08:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.470 08:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.470 08:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.470 08:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:44.470 08:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:44.470 08:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:44.470 08:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:44.470 08:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:44.470 08:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:44.470 08:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:44.470 08:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:44.470 08:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:44.470 08:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:44.470 08:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:44.470 08:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:44.470 08:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.470 08:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.728 nvme0n1 00:32:44.728 08:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.728 08:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:44.728 08:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.728 08:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:44.728 08:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.728 08:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.728 08:19:36 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:44.728 08:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:44.728 08:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.728 08:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.728 08:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.728 08:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:44.728 08:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:32:44.728 08:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:44.728 08:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:44.728 08:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:44.728 08:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:44.728 08:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTBjNmVjYjk0ODA0MGRhYTI4ZDVjMGFmNTgxMDQ0MGEzYmY3MWQ3ODhlZmI5YWI3sB0YlA==: 00:32:44.728 08:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2RkMWNjZDAwYWZmN2MzODkyOTQ3NzI5NmY2YzkwNjAocmBj: 00:32:44.728 08:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:44.728 08:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:44.728 08:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTBjNmVjYjk0ODA0MGRhYTI4ZDVjMGFmNTgxMDQ0MGEzYmY3MWQ3ODhlZmI5YWI3sB0YlA==: 00:32:44.728 08:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2RkMWNjZDAwYWZmN2MzODkyOTQ3NzI5NmY2YzkwNjAocmBj: ]] 00:32:44.728 08:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2RkMWNjZDAwYWZmN2MzODkyOTQ3NzI5NmY2YzkwNjAocmBj: 00:32:44.728 08:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:32:44.728 08:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:44.728 08:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:44.728 08:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:44.728 08:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:44.728 08:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:44.728 08:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:44.728 08:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.728 08:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.728 08:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.728 08:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:44.728 08:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:44.728 08:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:44.728 08:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:44.728 08:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:44.728 08:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
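[editor's note] The nvmf/common.sh@741-755 lines traced around this point implement get_main_ns_ip: map the transport to the name of the environment variable holding the address, then print that variable's value via indirect expansion. A condensed reconstruction from the trace; the real function in nvmf/common.sh may differ in detail:
get_main_ns_ip() {
    local ip
    local -A ip_candidates
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP
    ip=${ip_candidates[$TEST_TRANSPORT]}  # tcp -> NVMF_INITIATOR_IP (common.sh@745/748)
    [[ -z ${!ip} ]] && return 1           # guard at common.sh@750: address must be set
    echo "${!ip}"                         # resolves to 10.0.0.1 in this run (common.sh@755)
}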
00:32:44.728 08:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:44.728 08:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:44.728 08:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:44.728 08:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:44.728 08:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:44.728 08:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:44.728 08:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.728 08:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.986 nvme0n1 00:32:44.986 08:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.986 08:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:44.986 08:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.986 08:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:44.986 08:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.986 08:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.986 08:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:44.986 08:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:44.986 08:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.986 08:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.986 08:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.986 08:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:44.986 08:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:32:44.986 08:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:44.986 08:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:44.986 08:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:44.986 08:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:44.986 08:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGQzN2Q0MWVmNWY3ZWViZTcxMjJmMjBkY2FmOGMzMmIyODBkZjM4NmRjMTgzNzQ4NDVkODkzNDcwZmJiYWNkMndWE5U=: 00:32:44.986 08:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:44.986 08:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:44.986 08:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:44.986 08:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGQzN2Q0MWVmNWY3ZWViZTcxMjJmMjBkY2FmOGMzMmIyODBkZjM4NmRjMTgzNzQ4NDVkODkzNDcwZmJiYWNkMndWE5U=: 00:32:44.986 08:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:44.986 08:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:32:44.986 08:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:44.986 08:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:44.986 
08:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:44.986 08:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:44.986 08:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:44.986 08:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:32:44.986 08:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.986 08:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.986 08:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.986 08:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:44.986 08:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:44.986 08:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:44.986 08:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:44.986 08:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:44.986 08:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:44.986 08:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:44.986 08:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:44.986 08:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:44.987 08:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:44.987 08:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:44.987 08:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:44.987 08:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.987 08:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.244 nvme0n1 00:32:45.244 08:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.244 08:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:45.244 08:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.244 08:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.244 08:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:45.244 08:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.244 08:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:45.244 08:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:45.244 08:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.244 08:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.244 08:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.244 08:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:45.244 08:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:45.244 08:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:32:45.244 08:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:45.244 08:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:45.244 08:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:45.244 08:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:45.244 08:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDA1YTc4Y2JiNDAxYjg3NDdhZTdmZGFiZTlhY2QyNjJnMomG: 00:32:45.244 08:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:N2I4OTk2Nzc1ZDQ4YWJmODI0YWM5NzQ5M2MyNmYzN2IwOTZmNjcxMzQ2YTliN2JiMzk3NjhlNjU5OTUxMDZjNzPBi8Q=: 00:32:45.244 08:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:45.244 08:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:45.244 08:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDA1YTc4Y2JiNDAxYjg3NDdhZTdmZGFiZTlhY2QyNjJnMomG: 00:32:45.244 08:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:N2I4OTk2Nzc1ZDQ4YWJmODI0YWM5NzQ5M2MyNmYzN2IwOTZmNjcxMzQ2YTliN2JiMzk3NjhlNjU5OTUxMDZjNzPBi8Q=: ]] 00:32:45.244 08:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:N2I4OTk2Nzc1ZDQ4YWJmODI0YWM5NzQ5M2MyNmYzN2IwOTZmNjcxMzQ2YTliN2JiMzk3NjhlNjU5OTUxMDZjNzPBi8Q=: 00:32:45.244 08:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:32:45.244 08:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:45.244 08:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:45.244 08:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:45.244 08:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:45.244 08:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:45.244 08:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:45.244 08:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.244 08:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.244 08:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.244 08:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:45.244 08:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:45.244 08:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:45.244 08:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:45.244 08:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:45.244 08:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:45.244 08:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:45.244 08:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:45.244 08:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:45.244 08:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:45.244 08:19:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:45.244 08:19:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:45.244 08:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.244 08:19:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.502 nvme0n1 00:32:45.502 08:19:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.502 08:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:45.502 08:19:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.502 08:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:45.502 08:19:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.502 08:19:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.502 08:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:45.502 08:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:45.502 08:19:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.502 08:19:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.502 08:19:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.502 08:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:45.502 08:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:32:45.502 08:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:45.502 08:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:45.502 08:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:45.502 08:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:45.502 08:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2U0ZTdhMzRlZmY5YjEwM2RmMTgxMTg5M2U0ZDI4YjU2YjE2OTY1YmVlNGQyYWE5FW01Tg==: 00:32:45.502 08:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDRlNDJhNWNlNTMxMDNlZDk3YTk2NWZlNzQ5OWRjOGE3Y2Q0YTY1MmU4ZDJhZDQ2ecc0+w==: 00:32:45.502 08:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:45.502 08:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:45.502 08:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2U0ZTdhMzRlZmY5YjEwM2RmMTgxMTg5M2U0ZDI4YjU2YjE2OTY1YmVlNGQyYWE5FW01Tg==: 00:32:45.502 08:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDRlNDJhNWNlNTMxMDNlZDk3YTk2NWZlNzQ5OWRjOGE3Y2Q0YTY1MmU4ZDJhZDQ2ecc0+w==: ]] 00:32:45.502 08:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDRlNDJhNWNlNTMxMDNlZDk3YTk2NWZlNzQ5OWRjOGE3Y2Q0YTY1MmU4ZDJhZDQ2ecc0+w==: 00:32:45.502 08:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:32:45.502 08:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:45.502 08:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:45.502 08:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:45.502 08:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:45.502 08:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:45.502 08:19:37 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:45.502 08:19:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.502 08:19:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.502 08:19:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.502 08:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:45.502 08:19:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:45.502 08:19:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:45.502 08:19:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:45.502 08:19:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:45.502 08:19:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:45.502 08:19:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:45.502 08:19:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:45.502 08:19:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:45.502 08:19:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:45.502 08:19:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:45.502 08:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:45.502 08:19:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.502 08:19:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.067 nvme0n1 00:32:46.067 08:19:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.067 08:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:46.067 08:19:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.067 08:19:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.067 08:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:46.067 08:19:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.067 08:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:46.068 08:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:46.068 08:19:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.068 08:19:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.068 08:19:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.068 08:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:46.068 08:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:32:46.068 08:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:46.068 08:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:46.068 08:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:46.068 08:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
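[editor's note] Every secret in this run uses the NVMe DH-HMAC-CHAP representation DHHC-1:<t>:<base64 of key plus CRC>:, where the two-digit <t> field selects the key size and transform. A sketch that decodes that field; the 00-03 mapping is from the NVMe in-band authentication spec and nvme-cli conventions, not from this log:
classify_dhchap_key() {
    local t=${1#DHHC-1:}   # strip the fixed prefix
    t=${t%%:*}             # keep the two-digit transform field
    case $t in
        00) echo "raw secret, used as-is" ;;
        01) echo "32-byte secret (SHA-256 transform)" ;;
        02) echo "48-byte secret (SHA-384 transform)" ;;
        03) echo "64-byte secret (SHA-512 transform)" ;;
        *)  echo "unknown transform '$t'" >&2; return 1 ;;
    esac
}
classify_dhchap_key 'DHHC-1:01:MTE5ZDM4MWUwYTA4NWEzN2M2N2NiOTQzODU5NDMzY2Ot+1Qd:'
# -> 32-byte secret (SHA-256 transform)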
00:32:46.068 08:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTE5ZDM4MWUwYTA4NWEzN2M2N2NiOTQzODU5NDMzY2Ot+1Qd: 00:32:46.068 08:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGY2N2UyYmU3YmJjZmZhY2EyMWNmOTJiM2Q1YTgzNziM9L+p: 00:32:46.068 08:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:46.068 08:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:46.068 08:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTE5ZDM4MWUwYTA4NWEzN2M2N2NiOTQzODU5NDMzY2Ot+1Qd: 00:32:46.068 08:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGY2N2UyYmU3YmJjZmZhY2EyMWNmOTJiM2Q1YTgzNziM9L+p: ]] 00:32:46.068 08:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGY2N2UyYmU3YmJjZmZhY2EyMWNmOTJiM2Q1YTgzNziM9L+p: 00:32:46.068 08:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:32:46.068 08:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:46.068 08:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:46.068 08:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:46.068 08:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:46.068 08:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:46.068 08:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:46.068 08:19:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.068 08:19:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.068 08:19:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.068 08:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:46.068 08:19:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:46.068 08:19:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:46.068 08:19:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:46.068 08:19:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:46.068 08:19:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:46.068 08:19:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:46.068 08:19:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:46.068 08:19:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:46.068 08:19:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:46.068 08:19:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:46.068 08:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:46.068 08:19:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.068 08:19:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.326 nvme0n1 00:32:46.326 08:19:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.326 08:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:32:46.326 08:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:46.326 08:19:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.326 08:19:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.326 08:19:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.326 08:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:46.326 08:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:46.326 08:19:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.326 08:19:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.326 08:19:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.326 08:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:46.326 08:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:32:46.326 08:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:46.326 08:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:46.326 08:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:46.326 08:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:46.327 08:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTBjNmVjYjk0ODA0MGRhYTI4ZDVjMGFmNTgxMDQ0MGEzYmY3MWQ3ODhlZmI5YWI3sB0YlA==: 00:32:46.327 08:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2RkMWNjZDAwYWZmN2MzODkyOTQ3NzI5NmY2YzkwNjAocmBj: 00:32:46.327 08:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:46.327 08:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:46.327 08:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTBjNmVjYjk0ODA0MGRhYTI4ZDVjMGFmNTgxMDQ0MGEzYmY3MWQ3ODhlZmI5YWI3sB0YlA==: 00:32:46.327 08:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2RkMWNjZDAwYWZmN2MzODkyOTQ3NzI5NmY2YzkwNjAocmBj: ]] 00:32:46.327 08:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2RkMWNjZDAwYWZmN2MzODkyOTQ3NzI5NmY2YzkwNjAocmBj: 00:32:46.327 08:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:32:46.327 08:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:46.327 08:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:46.327 08:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:46.327 08:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:46.327 08:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:46.327 08:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:46.327 08:19:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.327 08:19:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.327 08:19:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.327 08:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:46.327 08:19:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:32:46.327 08:19:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:46.327 08:19:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:46.327 08:19:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:46.327 08:19:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:46.327 08:19:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:46.327 08:19:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:46.327 08:19:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:46.327 08:19:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:46.327 08:19:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:46.327 08:19:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:46.327 08:19:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.327 08:19:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.585 nvme0n1 00:32:46.585 08:19:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.585 08:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:46.585 08:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:46.585 08:19:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.585 08:19:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.585 08:19:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.585 08:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:46.585 08:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:46.585 08:19:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.585 08:19:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.585 08:19:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.585 08:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:46.585 08:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:32:46.585 08:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:46.585 08:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:46.585 08:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:46.585 08:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:46.585 08:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGQzN2Q0MWVmNWY3ZWViZTcxMjJmMjBkY2FmOGMzMmIyODBkZjM4NmRjMTgzNzQ4NDVkODkzNDcwZmJiYWNkMndWE5U=: 00:32:46.585 08:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:46.585 08:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:46.585 08:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:46.585 08:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:OGQzN2Q0MWVmNWY3ZWViZTcxMjJmMjBkY2FmOGMzMmIyODBkZjM4NmRjMTgzNzQ4NDVkODkzNDcwZmJiYWNkMndWE5U=: 00:32:46.585 08:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:46.585 08:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:32:46.585 08:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:46.585 08:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:46.585 08:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:46.585 08:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:46.585 08:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:46.585 08:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:32:46.585 08:19:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.585 08:19:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.585 08:19:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.585 08:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:46.585 08:19:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:46.585 08:19:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:46.585 08:19:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:46.585 08:19:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:46.585 08:19:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:46.585 08:19:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:46.585 08:19:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:46.585 08:19:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:46.585 08:19:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:46.585 08:19:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:46.585 08:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:46.585 08:19:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.585 08:19:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.842 nvme0n1 00:32:46.842 08:19:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.842 08:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:46.842 08:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:46.842 08:19:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.842 08:19:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.100 08:19:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.100 08:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:47.100 08:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:47.100 08:19:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:32:47.100 08:19:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.100 08:19:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.100 08:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:47.100 08:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:47.100 08:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:32:47.100 08:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:47.100 08:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:47.100 08:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:47.100 08:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:47.100 08:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDA1YTc4Y2JiNDAxYjg3NDdhZTdmZGFiZTlhY2QyNjJnMomG: 00:32:47.100 08:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:N2I4OTk2Nzc1ZDQ4YWJmODI0YWM5NzQ5M2MyNmYzN2IwOTZmNjcxMzQ2YTliN2JiMzk3NjhlNjU5OTUxMDZjNzPBi8Q=: 00:32:47.100 08:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:47.100 08:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:47.100 08:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDA1YTc4Y2JiNDAxYjg3NDdhZTdmZGFiZTlhY2QyNjJnMomG: 00:32:47.100 08:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:N2I4OTk2Nzc1ZDQ4YWJmODI0YWM5NzQ5M2MyNmYzN2IwOTZmNjcxMzQ2YTliN2JiMzk3NjhlNjU5OTUxMDZjNzPBi8Q=: ]] 00:32:47.100 08:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:N2I4OTk2Nzc1ZDQ4YWJmODI0YWM5NzQ5M2MyNmYzN2IwOTZmNjcxMzQ2YTliN2JiMzk3NjhlNjU5OTUxMDZjNzPBi8Q=: 00:32:47.100 08:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:32:47.100 08:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:47.100 08:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:47.100 08:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:47.100 08:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:47.100 08:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:47.100 08:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:32:47.100 08:19:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.100 08:19:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.100 08:19:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.100 08:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:47.100 08:19:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:47.100 08:19:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:47.100 08:19:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:47.100 08:19:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:47.100 08:19:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:47.100 08:19:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
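[editor's note] The auth.sh@101-104 markers repeating through this trace show the shape of the driver: an outer loop over DH groups and an inner loop over key IDs, each iteration programming the target and then authenticating from the host. Reconstructed from those markers as an approximation of the script, not a verbatim copy (digest comes from an enclosing loop):
for dhgroup in "${dhgroups[@]}"; do      # ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ... (auth.sh@101)
    for keyid in "${!keys[@]}"; do       # 0 1 2 3 4 (auth.sh@102)
        nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # target side (auth.sh@103)
        connect_authenticate "$digest" "$dhgroup" "$keyid"  # host side (auth.sh@104)
    done
done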
00:32:47.100 08:19:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:47.100 08:19:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:47.100 08:19:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:47.100 08:19:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:47.100 08:19:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:47.100 08:19:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.100 08:19:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.666 nvme0n1 00:32:47.666 08:19:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.666 08:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:47.666 08:19:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.666 08:19:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.666 08:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:47.666 08:19:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.666 08:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:47.666 08:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:47.666 08:19:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.666 08:19:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.666 08:19:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.666 08:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:47.666 08:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:32:47.666 08:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:47.666 08:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:47.666 08:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:47.666 08:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:47.666 08:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2U0ZTdhMzRlZmY5YjEwM2RmMTgxMTg5M2U0ZDI4YjU2YjE2OTY1YmVlNGQyYWE5FW01Tg==: 00:32:47.666 08:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDRlNDJhNWNlNTMxMDNlZDk3YTk2NWZlNzQ5OWRjOGE3Y2Q0YTY1MmU4ZDJhZDQ2ecc0+w==: 00:32:47.666 08:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:47.666 08:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:47.666 08:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2U0ZTdhMzRlZmY5YjEwM2RmMTgxMTg5M2U0ZDI4YjU2YjE2OTY1YmVlNGQyYWE5FW01Tg==: 00:32:47.666 08:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDRlNDJhNWNlNTMxMDNlZDk3YTk2NWZlNzQ5OWRjOGE3Y2Q0YTY1MmU4ZDJhZDQ2ecc0+w==: ]] 00:32:47.666 08:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDRlNDJhNWNlNTMxMDNlZDk3YTk2NWZlNzQ5OWRjOGE3Y2Q0YTY1MmU4ZDJhZDQ2ecc0+w==: 00:32:47.666 08:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
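[editor's note] connect_authenticate (auth.sh@55-65, traced below) is the host half of each iteration: restrict the allowed digest and DH group, attach with that keyid's DH-HMAC-CHAP secrets, check that the controller came up, then detach. The same four steps issued directly with SPDK's scripts/rpc.py, using this iteration's parameters; rpc_cmd in the trace is the autotest wrapper around rpc.py, and key1/ckey1 are the key names as registered earlier in this test run:
scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1
# --dhchap-ctrlr-key is omitted when ckey is empty (keyid 4 in this run)
scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0
scripts/rpc.py bdev_nvme_detach_controller nvme0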
00:32:47.666 08:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:47.666 08:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:47.666 08:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:47.666 08:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:47.666 08:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:47.666 08:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:32:47.666 08:19:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.666 08:19:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.666 08:19:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.666 08:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:47.666 08:19:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:47.666 08:19:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:47.666 08:19:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:47.666 08:19:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:47.666 08:19:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:47.666 08:19:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:47.666 08:19:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:47.666 08:19:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:47.666 08:19:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:47.666 08:19:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:47.666 08:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:47.666 08:19:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.666 08:19:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.232 nvme0n1 00:32:48.232 08:19:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.232 08:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:48.232 08:19:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.232 08:19:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.232 08:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:48.232 08:19:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.232 08:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:48.232 08:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:48.232 08:19:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.232 08:19:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.232 08:19:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.232 08:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:32:48.232 08:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:32:48.232 08:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:48.232 08:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:48.232 08:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:48.232 08:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:48.232 08:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTE5ZDM4MWUwYTA4NWEzN2M2N2NiOTQzODU5NDMzY2Ot+1Qd: 00:32:48.232 08:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGY2N2UyYmU3YmJjZmZhY2EyMWNmOTJiM2Q1YTgzNziM9L+p: 00:32:48.232 08:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:48.232 08:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:48.232 08:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTE5ZDM4MWUwYTA4NWEzN2M2N2NiOTQzODU5NDMzY2Ot+1Qd: 00:32:48.232 08:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGY2N2UyYmU3YmJjZmZhY2EyMWNmOTJiM2Q1YTgzNziM9L+p: ]] 00:32:48.232 08:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGY2N2UyYmU3YmJjZmZhY2EyMWNmOTJiM2Q1YTgzNziM9L+p: 00:32:48.232 08:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:32:48.232 08:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:48.232 08:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:48.232 08:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:48.232 08:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:48.232 08:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:48.232 08:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:32:48.232 08:19:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.232 08:19:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.232 08:19:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.232 08:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:48.232 08:19:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:48.232 08:19:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:48.232 08:19:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:48.232 08:19:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:48.232 08:19:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:48.232 08:19:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:48.232 08:19:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:48.232 08:19:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:48.232 08:19:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:48.232 08:19:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:48.232 08:19:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:48.232 08:19:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.232 08:19:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.798 nvme0n1 00:32:48.798 08:19:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.798 08:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:48.798 08:19:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.798 08:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:48.798 08:19:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.798 08:19:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.798 08:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:48.798 08:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:48.798 08:19:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.798 08:19:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.798 08:19:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.798 08:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:48.798 08:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:32:48.798 08:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:48.798 08:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:48.798 08:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:48.798 08:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:48.798 08:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTBjNmVjYjk0ODA0MGRhYTI4ZDVjMGFmNTgxMDQ0MGEzYmY3MWQ3ODhlZmI5YWI3sB0YlA==: 00:32:48.798 08:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2RkMWNjZDAwYWZmN2MzODkyOTQ3NzI5NmY2YzkwNjAocmBj: 00:32:48.798 08:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:48.798 08:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:48.798 08:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTBjNmVjYjk0ODA0MGRhYTI4ZDVjMGFmNTgxMDQ0MGEzYmY3MWQ3ODhlZmI5YWI3sB0YlA==: 00:32:48.798 08:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2RkMWNjZDAwYWZmN2MzODkyOTQ3NzI5NmY2YzkwNjAocmBj: ]] 00:32:48.798 08:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2RkMWNjZDAwYWZmN2MzODkyOTQ3NzI5NmY2YzkwNjAocmBj: 00:32:48.798 08:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:32:48.799 08:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:48.799 08:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:48.799 08:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:48.799 08:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:48.799 08:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:48.799 08:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:32:48.799 08:19:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.799 08:19:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.799 08:19:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.799 08:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:48.799 08:19:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:48.799 08:19:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:48.799 08:19:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:48.799 08:19:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:48.799 08:19:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:48.799 08:19:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:48.799 08:19:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:48.799 08:19:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:48.799 08:19:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:48.799 08:19:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:48.799 08:19:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:48.799 08:19:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.799 08:19:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.363 nvme0n1 00:32:49.364 08:19:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.364 08:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:49.364 08:19:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.364 08:19:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.364 08:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:49.364 08:19:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.364 08:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:49.364 08:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:49.364 08:19:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.364 08:19:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.364 08:19:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.364 08:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:49.364 08:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:32:49.364 08:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:49.364 08:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:49.364 08:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:49.364 08:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:49.364 08:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:OGQzN2Q0MWVmNWY3ZWViZTcxMjJmMjBkY2FmOGMzMmIyODBkZjM4NmRjMTgzNzQ4NDVkODkzNDcwZmJiYWNkMndWE5U=: 00:32:49.364 08:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:49.364 08:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:49.364 08:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:49.364 08:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGQzN2Q0MWVmNWY3ZWViZTcxMjJmMjBkY2FmOGMzMmIyODBkZjM4NmRjMTgzNzQ4NDVkODkzNDcwZmJiYWNkMndWE5U=: 00:32:49.364 08:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:49.364 08:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:32:49.364 08:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:49.364 08:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:49.364 08:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:49.364 08:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:49.364 08:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:49.364 08:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:32:49.364 08:19:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.364 08:19:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.364 08:19:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.364 08:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:49.364 08:19:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:49.364 08:19:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:49.364 08:19:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:49.364 08:19:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:49.364 08:19:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:49.364 08:19:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:49.364 08:19:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:49.364 08:19:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:49.364 08:19:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:49.364 08:19:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:49.364 08:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:49.364 08:19:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.364 08:19:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.929 nvme0n1 00:32:49.929 08:19:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.929 08:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:49.929 08:19:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.929 08:19:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.929 08:19:41 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:49.929 08:19:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.187 08:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:50.187 08:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:50.187 08:19:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.187 08:19:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.187 08:19:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.187 08:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:50.187 08:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:50.187 08:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:32:50.187 08:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:50.187 08:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:50.187 08:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:50.187 08:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:50.187 08:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDA1YTc4Y2JiNDAxYjg3NDdhZTdmZGFiZTlhY2QyNjJnMomG: 00:32:50.187 08:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:N2I4OTk2Nzc1ZDQ4YWJmODI0YWM5NzQ5M2MyNmYzN2IwOTZmNjcxMzQ2YTliN2JiMzk3NjhlNjU5OTUxMDZjNzPBi8Q=: 00:32:50.187 08:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:50.187 08:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:50.187 08:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDA1YTc4Y2JiNDAxYjg3NDdhZTdmZGFiZTlhY2QyNjJnMomG: 00:32:50.187 08:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:N2I4OTk2Nzc1ZDQ4YWJmODI0YWM5NzQ5M2MyNmYzN2IwOTZmNjcxMzQ2YTliN2JiMzk3NjhlNjU5OTUxMDZjNzPBi8Q=: ]] 00:32:50.187 08:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:N2I4OTk2Nzc1ZDQ4YWJmODI0YWM5NzQ5M2MyNmYzN2IwOTZmNjcxMzQ2YTliN2JiMzk3NjhlNjU5OTUxMDZjNzPBi8Q=: 00:32:50.187 08:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:32:50.187 08:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:50.187 08:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:50.187 08:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:50.187 08:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:50.187 08:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:50.187 08:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:32:50.187 08:19:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.187 08:19:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.187 08:19:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.187 08:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:50.187 08:19:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:50.187 08:19:41 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:32:50.187 08:19:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:50.187 08:19:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:50.187 08:19:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:50.187 08:19:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:50.187 08:19:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:50.187 08:19:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:50.187 08:19:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:50.187 08:19:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:50.187 08:19:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:50.187 08:19:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.187 08:19:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.121 nvme0n1 00:32:51.121 08:19:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.121 08:19:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:51.121 08:19:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:51.121 08:19:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.121 08:19:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.121 08:19:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.121 08:19:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:51.121 08:19:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:51.121 08:19:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.121 08:19:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.121 08:19:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.121 08:19:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:51.122 08:19:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:32:51.122 08:19:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:51.122 08:19:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:51.122 08:19:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:51.122 08:19:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:51.122 08:19:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2U0ZTdhMzRlZmY5YjEwM2RmMTgxMTg5M2U0ZDI4YjU2YjE2OTY1YmVlNGQyYWE5FW01Tg==: 00:32:51.122 08:19:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDRlNDJhNWNlNTMxMDNlZDk3YTk2NWZlNzQ5OWRjOGE3Y2Q0YTY1MmU4ZDJhZDQ2ecc0+w==: 00:32:51.122 08:19:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:51.122 08:19:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:51.122 08:19:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:Y2U0ZTdhMzRlZmY5YjEwM2RmMTgxMTg5M2U0ZDI4YjU2YjE2OTY1YmVlNGQyYWE5FW01Tg==: 00:32:51.122 08:19:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDRlNDJhNWNlNTMxMDNlZDk3YTk2NWZlNzQ5OWRjOGE3Y2Q0YTY1MmU4ZDJhZDQ2ecc0+w==: ]] 00:32:51.122 08:19:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDRlNDJhNWNlNTMxMDNlZDk3YTk2NWZlNzQ5OWRjOGE3Y2Q0YTY1MmU4ZDJhZDQ2ecc0+w==: 00:32:51.122 08:19:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:32:51.122 08:19:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:51.122 08:19:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:51.122 08:19:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:51.122 08:19:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:51.122 08:19:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:51.122 08:19:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:32:51.122 08:19:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.122 08:19:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.122 08:19:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.122 08:19:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:51.122 08:19:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:51.122 08:19:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:51.122 08:19:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:51.122 08:19:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:51.122 08:19:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:51.122 08:19:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:51.122 08:19:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:51.122 08:19:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:51.122 08:19:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:51.122 08:19:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:51.122 08:19:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:51.122 08:19:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.122 08:19:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.054 nvme0n1 00:32:52.054 08:19:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.054 08:19:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:52.054 08:19:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.054 08:19:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.054 08:19:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:52.054 08:19:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.054 08:19:43 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:52.054 08:19:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:52.054 08:19:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.054 08:19:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.054 08:19:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.054 08:19:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:52.054 08:19:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:32:52.054 08:19:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:52.054 08:19:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:52.054 08:19:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:52.054 08:19:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:52.054 08:19:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MTE5ZDM4MWUwYTA4NWEzN2M2N2NiOTQzODU5NDMzY2Ot+1Qd: 00:32:52.054 08:19:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:OGY2N2UyYmU3YmJjZmZhY2EyMWNmOTJiM2Q1YTgzNziM9L+p: 00:32:52.054 08:19:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:52.054 08:19:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:52.054 08:19:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MTE5ZDM4MWUwYTA4NWEzN2M2N2NiOTQzODU5NDMzY2Ot+1Qd: 00:32:52.054 08:19:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:OGY2N2UyYmU3YmJjZmZhY2EyMWNmOTJiM2Q1YTgzNziM9L+p: ]] 00:32:52.054 08:19:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:OGY2N2UyYmU3YmJjZmZhY2EyMWNmOTJiM2Q1YTgzNziM9L+p: 00:32:52.054 08:19:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:32:52.054 08:19:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:52.054 08:19:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:52.054 08:19:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:52.054 08:19:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:52.054 08:19:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:52.054 08:19:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:32:52.054 08:19:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.054 08:19:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.054 08:19:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.054 08:19:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:52.054 08:19:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:52.054 08:19:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:52.054 08:19:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:52.054 08:19:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:52.054 08:19:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:52.054 08:19:43 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:52.054 08:19:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:52.054 08:19:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:52.054 08:19:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:52.054 08:19:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:52.054 08:19:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:52.054 08:19:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.054 08:19:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.990 nvme0n1 00:32:52.990 08:19:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.990 08:19:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:52.990 08:19:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:52.990 08:19:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.990 08:19:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.990 08:19:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.990 08:19:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:52.990 08:19:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:52.990 08:19:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.990 08:19:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.990 08:19:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.990 08:19:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:52.990 08:19:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:32:52.990 08:19:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:52.990 08:19:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:52.990 08:19:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:52.990 08:19:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:52.990 08:19:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:OTBjNmVjYjk0ODA0MGRhYTI4ZDVjMGFmNTgxMDQ0MGEzYmY3MWQ3ODhlZmI5YWI3sB0YlA==: 00:32:52.990 08:19:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2RkMWNjZDAwYWZmN2MzODkyOTQ3NzI5NmY2YzkwNjAocmBj: 00:32:52.990 08:19:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:52.990 08:19:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:52.990 08:19:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:OTBjNmVjYjk0ODA0MGRhYTI4ZDVjMGFmNTgxMDQ0MGEzYmY3MWQ3ODhlZmI5YWI3sB0YlA==: 00:32:52.990 08:19:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2RkMWNjZDAwYWZmN2MzODkyOTQ3NzI5NmY2YzkwNjAocmBj: ]] 00:32:52.990 08:19:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2RkMWNjZDAwYWZmN2MzODkyOTQ3NzI5NmY2YzkwNjAocmBj: 00:32:52.990 08:19:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:32:52.990 08:19:44 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:52.990 08:19:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:52.990 08:19:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:52.990 08:19:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:52.990 08:19:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:52.990 08:19:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:32:52.990 08:19:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.990 08:19:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.990 08:19:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.990 08:19:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:52.990 08:19:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:52.990 08:19:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:52.990 08:19:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:52.990 08:19:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:52.990 08:19:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:52.990 08:19:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:52.990 08:19:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:52.990 08:19:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:52.990 08:19:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:52.990 08:19:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:52.990 08:19:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:52.990 08:19:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.990 08:19:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.921 nvme0n1 00:32:53.921 08:19:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.921 08:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:53.921 08:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:53.921 08:19:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.921 08:19:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.178 08:19:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.178 08:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:54.179 08:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:54.179 08:19:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.179 08:19:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.179 08:19:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.179 08:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:32:54.179 08:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:32:54.179 08:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:54.179 08:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:32:54.179 08:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:54.179 08:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:54.179 08:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGQzN2Q0MWVmNWY3ZWViZTcxMjJmMjBkY2FmOGMzMmIyODBkZjM4NmRjMTgzNzQ4NDVkODkzNDcwZmJiYWNkMndWE5U=: 00:32:54.179 08:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:54.179 08:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:32:54.179 08:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:54.179 08:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGQzN2Q0MWVmNWY3ZWViZTcxMjJmMjBkY2FmOGMzMmIyODBkZjM4NmRjMTgzNzQ4NDVkODkzNDcwZmJiYWNkMndWE5U=: 00:32:54.179 08:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:54.179 08:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:32:54.179 08:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:54.179 08:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:32:54.179 08:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:54.179 08:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:54.179 08:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:54.179 08:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:32:54.179 08:19:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.179 08:19:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.179 08:19:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.179 08:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:54.179 08:19:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:54.179 08:19:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:54.179 08:19:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:54.179 08:19:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:54.179 08:19:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:54.179 08:19:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:54.179 08:19:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:54.179 08:19:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:54.179 08:19:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:54.179 08:19:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:54.179 08:19:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:54.179 08:19:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:32:54.179 08:19:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.112 nvme0n1 00:32:55.112 08:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.112 08:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:55.112 08:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.112 08:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:55.112 08:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.112 08:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.112 08:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:55.112 08:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:55.112 08:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.112 08:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.112 08:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.112 08:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:32:55.112 08:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:55.112 08:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:55.112 08:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:55.112 08:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:55.112 08:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2U0ZTdhMzRlZmY5YjEwM2RmMTgxMTg5M2U0ZDI4YjU2YjE2OTY1YmVlNGQyYWE5FW01Tg==: 00:32:55.112 08:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDRlNDJhNWNlNTMxMDNlZDk3YTk2NWZlNzQ5OWRjOGE3Y2Q0YTY1MmU4ZDJhZDQ2ecc0+w==: 00:32:55.112 08:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:55.112 08:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:55.112 08:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2U0ZTdhMzRlZmY5YjEwM2RmMTgxMTg5M2U0ZDI4YjU2YjE2OTY1YmVlNGQyYWE5FW01Tg==: 00:32:55.112 08:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDRlNDJhNWNlNTMxMDNlZDk3YTk2NWZlNzQ5OWRjOGE3Y2Q0YTY1MmU4ZDJhZDQ2ecc0+w==: ]] 00:32:55.112 08:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDRlNDJhNWNlNTMxMDNlZDk3YTk2NWZlNzQ5OWRjOGE3Y2Q0YTY1MmU4ZDJhZDQ2ecc0+w==: 00:32:55.112 08:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:55.112 08:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.112 08:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.112 08:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.112 08:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:32:55.112 08:19:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:55.112 08:19:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:55.112 08:19:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:55.112 08:19:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:55.112 
08:19:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:55.112 08:19:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:55.112 08:19:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:55.112 08:19:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:55.112 08:19:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:55.112 08:19:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:55.112 08:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:32:55.112 08:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:32:55.113 08:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:32:55.113 08:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:32:55.113 08:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:55.113 08:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:32:55.113 08:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:55.113 08:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:32:55.113 08:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.113 08:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.113 request: 00:32:55.113 { 00:32:55.113 "name": "nvme0", 00:32:55.113 "trtype": "tcp", 00:32:55.113 "traddr": "10.0.0.1", 00:32:55.113 "adrfam": "ipv4", 00:32:55.113 "trsvcid": "4420", 00:32:55.113 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:32:55.113 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:32:55.113 "prchk_reftag": false, 00:32:55.113 "prchk_guard": false, 00:32:55.113 "hdgst": false, 00:32:55.113 "ddgst": false, 00:32:55.113 "method": "bdev_nvme_attach_controller", 00:32:55.113 "req_id": 1 00:32:55.113 } 00:32:55.113 Got JSON-RPC error response 00:32:55.113 response: 00:32:55.113 { 00:32:55.113 "code": -5, 00:32:55.113 "message": "Input/output error" 00:32:55.113 } 00:32:55.113 08:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:32:55.113 08:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:32:55.113 08:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:55.113 08:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:55.113 08:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:55.113 08:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:32:55.113 08:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:32:55.113 08:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.113 08:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.113 08:19:46 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.371 08:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:32:55.371 08:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:32:55.371 08:19:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:55.371 08:19:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:55.371 08:19:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:55.371 08:19:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:55.371 08:19:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:55.371 08:19:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:55.371 08:19:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:55.371 08:19:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:55.371 08:19:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:55.371 08:19:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:55.371 08:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:32:55.371 08:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:32:55.371 08:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:32:55.371 08:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:32:55.371 08:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:55.371 08:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:32:55.371 08:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:55.371 08:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:32:55.371 08:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.371 08:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.371 request: 00:32:55.371 { 00:32:55.371 "name": "nvme0", 00:32:55.371 "trtype": "tcp", 00:32:55.371 "traddr": "10.0.0.1", 00:32:55.371 "adrfam": "ipv4", 00:32:55.371 "trsvcid": "4420", 00:32:55.371 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:32:55.371 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:32:55.371 "prchk_reftag": false, 00:32:55.371 "prchk_guard": false, 00:32:55.371 "hdgst": false, 00:32:55.371 "ddgst": false, 00:32:55.371 "dhchap_key": "key2", 00:32:55.371 "method": "bdev_nvme_attach_controller", 00:32:55.371 "req_id": 1 00:32:55.371 } 00:32:55.371 Got JSON-RPC error response 00:32:55.371 response: 00:32:55.371 { 00:32:55.371 "code": -5, 00:32:55.371 "message": "Input/output error" 00:32:55.371 } 00:32:55.371 08:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:32:55.371 08:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:32:55.371 08:19:46 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:55.371 08:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:55.371 08:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:55.371 08:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:32:55.371 08:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.371 08:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.371 08:19:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:32:55.371 08:19:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.371 08:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:32:55.371 08:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:32:55.371 08:19:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:55.371 08:19:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:55.371 08:19:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:55.371 08:19:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:55.371 08:19:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:55.372 08:19:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:55.372 08:19:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:55.372 08:19:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:55.372 08:19:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:55.372 08:19:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:55.372 08:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:32:55.372 08:19:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:32:55.372 08:19:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:32:55.372 08:19:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:32:55.372 08:19:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:55.372 08:19:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:32:55.372 08:19:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:32:55.372 08:19:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:32:55.372 08:19:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.372 08:19:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.372 request: 00:32:55.372 { 00:32:55.372 "name": "nvme0", 00:32:55.372 "trtype": "tcp", 00:32:55.372 "traddr": "10.0.0.1", 00:32:55.372 "adrfam": "ipv4", 
00:32:55.372 "trsvcid": "4420", 00:32:55.372 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:32:55.372 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:32:55.372 "prchk_reftag": false, 00:32:55.372 "prchk_guard": false, 00:32:55.372 "hdgst": false, 00:32:55.372 "ddgst": false, 00:32:55.372 "dhchap_key": "key1", 00:32:55.372 "dhchap_ctrlr_key": "ckey2", 00:32:55.372 "method": "bdev_nvme_attach_controller", 00:32:55.372 "req_id": 1 00:32:55.372 } 00:32:55.372 Got JSON-RPC error response 00:32:55.372 response: 00:32:55.372 { 00:32:55.372 "code": -5, 00:32:55.372 "message": "Input/output error" 00:32:55.372 } 00:32:55.372 08:19:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:32:55.372 08:19:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:32:55.372 08:19:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:32:55.372 08:19:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:32:55.372 08:19:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:32:55.372 08:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:32:55.372 08:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:32:55.372 08:19:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:32:55.372 08:19:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:55.372 08:19:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:32:55.372 08:19:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:55.372 08:19:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:32:55.372 08:19:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:55.372 08:19:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:55.372 rmmod nvme_tcp 00:32:55.631 rmmod nvme_fabrics 00:32:55.631 08:19:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:55.631 08:19:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:32:55.631 08:19:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:32:55.631 08:19:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 2094860 ']' 00:32:55.631 08:19:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 2094860 00:32:55.631 08:19:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 2094860 ']' 00:32:55.631 08:19:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 2094860 00:32:55.631 08:19:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:32:55.631 08:19:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:55.631 08:19:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2094860 00:32:55.631 08:19:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:55.631 08:19:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:55.631 08:19:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2094860' 00:32:55.631 killing process with pid 2094860 00:32:55.631 08:19:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 2094860 00:32:55.631 08:19:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 2094860 00:32:55.631 08:19:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso 
']' 00:32:55.631 08:19:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:55.631 08:19:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:55.631 08:19:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:55.631 08:19:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:55.631 08:19:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:55.631 08:19:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:55.631 08:19:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:58.164 08:19:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:58.164 08:19:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:32:58.164 08:19:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:32:58.164 08:19:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:32:58.164 08:19:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:32:58.164 08:19:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:32:58.164 08:19:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:58.164 08:19:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:32:58.164 08:19:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:32:58.164 08:19:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:58.164 08:19:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:32:58.164 08:19:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:32:58.164 08:19:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:59.099 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:32:59.099 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:32:59.099 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:32:59.099 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:32:59.099 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:32:59.099 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:32:59.099 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:32:59.099 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:32:59.099 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:32:59.099 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:32:59.099 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:32:59.099 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:32:59.099 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:32:59.099 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:32:59.099 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:32:59.099 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:33:00.034 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:33:00.034 08:19:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.Te6 /tmp/spdk.key-null.JT9 /tmp/spdk.key-sha256.QnV /tmp/spdk.key-sha384.O20 /tmp/spdk.key-sha512.fGh 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:33:00.034 08:19:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:01.409 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:33:01.409 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:33:01.409 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:33:01.409 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:33:01.409 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:33:01.409 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:33:01.409 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:33:01.409 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:33:01.409 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:33:01.409 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:33:01.409 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:33:01.409 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:33:01.409 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:33:01.409 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:33:01.409 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:33:01.409 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:33:01.409 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:33:01.409 00:33:01.409 real 0m49.809s 00:33:01.409 user 0m47.699s 00:33:01.409 sys 0m5.754s 00:33:01.409 08:19:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:01.409 08:19:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.409 ************************************ 00:33:01.409 END TEST nvmf_auth_host 00:33:01.409 ************************************ 00:33:01.409 08:19:53 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:33:01.409 08:19:53 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:33:01.409 08:19:53 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:33:01.409 08:19:53 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:33:01.409 08:19:53 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:01.409 08:19:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:01.409 ************************************ 00:33:01.409 START TEST nvmf_digest 00:33:01.409 ************************************ 00:33:01.409 08:19:53 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:33:01.409 * Looking for test storage... 
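Before the digest suite output continues: the auth-host teardown traced above dismantles the kernel nvmet target through configfs strictly children-before-parents, and only unloads the modules once the tree is empty. A minimal sketch of the same sequence using this run's NQNs (the target of the traced 'echo 0' is assumed to be the namespace enable attribute; treat it as illustrative):

    CFG=/sys/kernel/config/nvmet
    SUBSYS=nqn.2024-02.io.spdk:cnode0
    HOSTNQN=nqn.2024-02.io.spdk:host0
    rm "$CFG/subsystems/$SUBSYS/allowed_hosts/$HOSTNQN"     # auth.sh@25: drop the host ACL link
    rmdir "$CFG/hosts/$HOSTNQN"                             # auth.sh@26
    echo 0 > "$CFG/subsystems/$SUBSYS/namespaces/1/enable"  # assumed target of the traced 'echo 0'
    rm -f "$CFG/ports/1/subsystems/$SUBSYS"                 # unbind the subsystem from port 1
    rmdir "$CFG/subsystems/$SUBSYS/namespaces/1"            # children before parents
    rmdir "$CFG/ports/1"
    rmdir "$CFG/subsystems/$SUBSYS"
    modprobe -r nvmet_tcp nvmet                             # safe once the configfs tree is empty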
00:33:01.409 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:01.409 08:19:53 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:01.409 08:19:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:33:01.409 08:19:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:01.409 08:19:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:01.409 08:19:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:01.409 08:19:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:01.409 08:19:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:01.409 08:19:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:01.409 08:19:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:01.409 08:19:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:01.409 08:19:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:01.409 08:19:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:01.409 08:19:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:01.409 08:19:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:01.410 08:19:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:01.410 08:19:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:01.410 08:19:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:01.410 08:19:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:01.410 08:19:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:01.410 08:19:53 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:01.410 08:19:53 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:01.410 08:19:53 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:01.410 08:19:53 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:01.410 08:19:53 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:01.410 08:19:53 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:01.410 08:19:53 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:33:01.410 08:19:53 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:01.410 08:19:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:33:01.410 08:19:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:01.410 08:19:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:01.410 08:19:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:01.410 08:19:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:01.410 08:19:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:01.410 08:19:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:01.410 08:19:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:01.410 08:19:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:01.410 08:19:53 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:33:01.410 08:19:53 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:33:01.410 08:19:53 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:33:01.410 08:19:53 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:33:01.410 08:19:53 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:33:01.410 08:19:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:01.410 08:19:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:01.410 08:19:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:01.410 08:19:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:01.410 08:19:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:01.410 08:19:53 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:01.410 08:19:53 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:01.410 08:19:53 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:01.410 08:19:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:01.410 08:19:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:01.410 08:19:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:33:01.410 08:19:53 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:33:03.310 08:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:03.310 08:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:33:03.310 08:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:03.310 08:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:03.310 08:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:03.310 08:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:03.310 08:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:03.310 08:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:33:03.310 08:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:03.310 08:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:33:03.310 08:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:33:03.310 08:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:33:03.310 08:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:33:03.310 08:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:33:03.310 08:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:33:03.310 08:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:03.310 08:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:03.310 08:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:03.310 08:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:03.310 08:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:03.310 08:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:03.310 08:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:03.310 08:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:03.310 08:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:03.310 08:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:03.310 08:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:03.310 08:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:03.310 08:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:03.310 08:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:03.310 08:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ 
e810 == e810 ]] 00:33:03.310 08:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:03.310 08:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:03.310 08:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:03.310 08:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:03.310 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:03.310 08:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:03.310 08:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:03.310 08:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:03.310 08:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:03.310 08:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:03.310 08:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:03.310 08:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:03.310 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:03.310 08:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:03.310 08:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:03.310 08:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:03.310 08:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:03.310 08:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:03.310 08:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:03.310 08:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:03.310 08:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:03.310 08:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:03.310 08:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:03.311 08:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:03.311 08:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:03.311 08:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:03.311 08:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:03.311 08:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:03.311 08:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:03.311 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:03.311 08:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:03.311 08:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:03.311 08:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:03.311 08:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:03.311 08:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:03.311 08:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:03.311 08:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:03.311 08:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:03.311 08:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:03.311 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:03.311 08:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:03.311 08:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:03.311 08:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:33:03.311 08:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:03.311 08:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:03.311 08:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:03.311 08:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:03.311 08:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:03.311 08:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:03.311 08:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:03.311 08:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:03.311 08:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:03.311 08:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:03.311 08:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:03.311 08:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:03.311 08:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:03.311 08:19:54 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:03.311 08:19:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:03.311 08:19:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:03.569 08:19:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:03.569 08:19:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:03.569 08:19:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:03.569 08:19:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:03.569 08:19:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:03.569 08:19:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:03.569 08:19:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:03.569 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:03.569 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.130 ms 00:33:03.569 00:33:03.569 --- 10.0.0.2 ping statistics --- 00:33:03.569 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:03.569 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:33:03.569 08:19:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:03.569 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:03.569 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.099 ms 00:33:03.569 00:33:03.569 --- 10.0.0.1 ping statistics --- 00:33:03.569 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:03.569 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:33:03.569 08:19:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:03.569 08:19:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:33:03.569 08:19:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:03.569 08:19:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:03.569 08:19:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:03.569 08:19:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:03.569 08:19:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:03.569 08:19:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:03.569 08:19:55 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:03.569 08:19:55 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:33:03.569 08:19:55 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:33:03.569 08:19:55 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:33:03.569 08:19:55 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:33:03.569 08:19:55 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:03.569 08:19:55 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:33:03.569 ************************************ 00:33:03.569 START TEST nvmf_digest_clean 00:33:03.569 ************************************ 00:33:03.569 08:19:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:33:03.569 08:19:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:33:03.569 08:19:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:33:03.569 08:19:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:33:03.569 08:19:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:33:03.569 08:19:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:33:03.569 08:19:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:03.569 08:19:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:33:03.569 08:19:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:03.569 08:19:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=2104402 00:33:03.569 08:19:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:33:03.569 08:19:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 2104402 00:33:03.569 08:19:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 2104402 ']' 00:33:03.569 08:19:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:03.569 
08:19:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:03.569 08:19:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:03.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:03.569 08:19:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:03.569 08:19:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:03.569 [2024-07-13 08:19:55.214430] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:33:03.569 [2024-07-13 08:19:55.214517] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:03.569 EAL: No free 2048 kB hugepages reported on node 1 00:33:03.569 [2024-07-13 08:19:55.278932] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:03.826 [2024-07-13 08:19:55.364155] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:03.826 [2024-07-13 08:19:55.364205] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:03.826 [2024-07-13 08:19:55.364220] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:03.826 [2024-07-13 08:19:55.364232] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:03.826 [2024-07-13 08:19:55.364243] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
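Every run below goes over the two-port loop that nvmf_tcp_init assembled above: the target port cvl_0_0 lives inside the cvl_0_0_ns_spdk namespace as 10.0.0.2, while the initiator keeps cvl_0_1 in the root namespace as 10.0.0.1. Condensed from the trace (interface names and addresses are what this run detected, not fixed values):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # target port moves into the ns
    ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # open the NVMe/TCP port
    # nvmfappstart then wraps the target so its listener binds inside the namespace:
    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc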
00:33:03.826 [2024-07-13 08:19:55.364274] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:03.826 08:19:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:03.826 08:19:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:33:03.826 08:19:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:03.826 08:19:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:03.826 08:19:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:03.826 08:19:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:03.826 08:19:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:33:03.826 08:19:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:33:03.826 08:19:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:33:03.826 08:19:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.826 08:19:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:04.084 null0 00:33:04.084 [2024-07-13 08:19:55.604357] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:04.084 [2024-07-13 08:19:55.628551] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:04.084 08:19:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.084 08:19:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:33:04.084 08:19:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:33:04.084 08:19:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:04.084 08:19:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:33:04.084 08:19:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:33:04.084 08:19:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:33:04.084 08:19:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:33:04.084 08:19:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2104429 00:33:04.084 08:19:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2104429 /var/tmp/bperf.sock 00:33:04.084 08:19:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 2104429 ']' 00:33:04.084 08:19:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:04.084 08:19:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:04.084 08:19:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:04.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
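The 'null0', transport-init, and listener notices just above are common_target_config (digest.sh@43) driving the freshly started target over /var/tmp/spdk.sock. The trace collapses the rpc_cmd batch, so the reconstruction below is an assumption pieced together from the visible notices and the defaults set earlier (serial SPDKISFASTANDAWESOME, NVMF_TRANSPORT_OPTS='-t tcp -o'); the null bdev's size and block size are placeholders:

    rpc() { scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }
    rpc framework_start_init                                 # target was started --wait-for-rpc
    rpc bdev_null_create null0 1000 512                      # size/block size assumed
    rpc nvmf_create_transport -t tcp -o                      # '*** TCP Transport Init ***'
    rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
    rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420                           # 'Listening on 10.0.0.2 port 4420'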
00:33:04.084 08:19:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:33:04.084 08:19:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:04.084 08:19:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:04.084 [2024-07-13 08:19:55.675436] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:33:04.084 [2024-07-13 08:19:55.675508] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2104429 ] 00:33:04.084 EAL: No free 2048 kB hugepages reported on node 1 00:33:04.084 [2024-07-13 08:19:55.733781] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:04.342 [2024-07-13 08:19:55.820548] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:04.342 08:19:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:04.342 08:19:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:33:04.342 08:19:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:33:04.342 08:19:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:04.342 08:19:55 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:04.599 08:19:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:04.599 08:19:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:05.163 nvme0n1 00:33:05.163 08:19:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:05.163 08:19:56 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:05.163 Running I/O for 2 seconds... 
00:33:07.691 00:33:07.691 Latency(us) 00:33:07.691 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:07.691 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:33:07.691 nvme0n1 : 2.00 18540.74 72.42 0.00 0.00 6894.93 3835.07 21262.79 00:33:07.691 =================================================================================================================== 00:33:07.691 Total : 18540.74 72.42 0.00 0.00 6894.93 3835.07 21262.79 00:33:07.691 0 00:33:07.691 08:19:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:33:07.691 08:19:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:33:07.691 08:19:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:33:07.691 08:19:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:33:07.691 | select(.opcode=="crc32c") 00:33:07.691 | "\(.module_name) \(.executed)"' 00:33:07.691 08:19:58 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:07.691 08:19:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:33:07.691 08:19:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:33:07.691 08:19:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:33:07.691 08:19:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:07.691 08:19:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2104429 00:33:07.691 08:19:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 2104429 ']' 00:33:07.691 08:19:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 2104429 00:33:07.691 08:19:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:33:07.691 08:19:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:07.691 08:19:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2104429 00:33:07.691 08:19:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:33:07.691 08:19:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:33:07.691 08:19:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2104429' 00:33:07.691 killing process with pid 2104429 00:33:07.691 08:19:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 2104429 00:33:07.691 Received shutdown signal, test time was about 2.000000 seconds 00:33:07.691 00:33:07.691 Latency(us) 00:33:07.691 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:07.691 =================================================================================================================== 00:33:07.691 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:07.691 08:19:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 2104429 00:33:07.691 08:19:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:33:07.691 08:19:59 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:33:07.691 08:19:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:07.691 08:19:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:33:07.691 08:19:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:33:07.691 08:19:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:33:07.691 08:19:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:33:07.691 08:19:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2104833 00:33:07.691 08:19:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:33:07.691 08:19:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2104833 /var/tmp/bperf.sock 00:33:07.691 08:19:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 2104833 ']' 00:33:07.691 08:19:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:07.691 08:19:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:07.691 08:19:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:07.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:07.691 08:19:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:07.691 08:19:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:07.691 [2024-07-13 08:19:59.374652] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:33:07.691 [2024-07-13 08:19:59.374732] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2104833 ] 00:33:07.691 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:07.691 Zero copy mechanism will not be used. 
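After each bperf run, the suite checks that crc32c digest work actually executed, and in the expected accel module; with scan_dsa=false that module is 'software'. A sketch of the check, with the jq filter copied verbatim from the trace:

    read -r acc_module acc_executed < <(
      scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats |
        jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
    )
    exp_module=software                 # would differ if a DSA initiator were requested
    ((acc_executed > 0))                # some digest work must actually have run
    [[ $acc_module == "$exp_module" ]]  # and it must have run in the expected module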
00:33:07.691 EAL: No free 2048 kB hugepages reported on node 1 00:33:07.957 [2024-07-13 08:19:59.436591] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:07.957 [2024-07-13 08:19:59.523376] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:07.957 08:19:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:07.957 08:19:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:33:07.957 08:19:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:33:07.957 08:19:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:07.957 08:19:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:08.216 08:19:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:08.216 08:19:59 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:08.781 nvme0n1 00:33:08.781 08:20:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:08.781 08:20:00 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:08.781 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:08.781 Zero copy mechanism will not be used. 00:33:08.781 Running I/O for 2 seconds... 
00:33:10.679 00:33:10.679 Latency(us) 00:33:10.679 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:10.679 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:33:10.679 nvme0n1 : 2.00 3405.23 425.65 0.00 0.00 4693.73 1462.42 7573.05 00:33:10.679 =================================================================================================================== 00:33:10.679 Total : 3405.23 425.65 0.00 0.00 4693.73 1462.42 7573.05 00:33:10.679 0 00:33:10.679 08:20:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:33:10.679 08:20:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:33:10.679 08:20:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:33:10.679 08:20:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:10.679 08:20:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:33:10.679 | select(.opcode=="crc32c") 00:33:10.679 | "\(.module_name) \(.executed)"' 00:33:10.937 08:20:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:33:10.937 08:20:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:33:10.937 08:20:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:33:10.937 08:20:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:10.937 08:20:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2104833 00:33:10.937 08:20:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 2104833 ']' 00:33:10.937 08:20:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 2104833 00:33:10.937 08:20:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:33:10.937 08:20:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:10.937 08:20:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2104833 00:33:10.937 08:20:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:33:10.937 08:20:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:33:10.937 08:20:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2104833' 00:33:10.937 killing process with pid 2104833 00:33:10.937 08:20:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 2104833 00:33:10.937 Received shutdown signal, test time was about 2.000000 seconds 00:33:10.937 00:33:10.937 Latency(us) 00:33:10.937 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:10.937 =================================================================================================================== 00:33:10.937 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:10.937 08:20:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 2104833 00:33:11.194 08:20:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:33:11.194 08:20:02 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:33:11.194 08:20:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:11.194 08:20:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:33:11.194 08:20:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:33:11.194 08:20:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:33:11.194 08:20:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:33:11.194 08:20:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2105240 00:33:11.194 08:20:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2105240 /var/tmp/bperf.sock 00:33:11.194 08:20:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:33:11.194 08:20:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 2105240 ']' 00:33:11.194 08:20:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:11.194 08:20:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:11.194 08:20:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:11.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:11.194 08:20:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:11.194 08:20:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:11.194 [2024-07-13 08:20:02.896747] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
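All four digest-clean workloads follow the same client-side pattern, varying only the workload, I/O size, and queue depth (randread and randwrite, at 4096 bytes/qd 128 and 131072 bytes/qd 16). A hedged condensation of run_bperf, with flags copied from the traces and error handling omitted:

    run_bperf() {
      local rw=$1 bs=$2 qd=$3
      build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
        -w "$rw" -o "$bs" -q "$qd" -t 2 -z --wait-for-rpc &   # -z: idle until perform_tests
      local bperfpid=$!
      scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
      scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
      examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
      kill "$bperfpid"
    }
    # e.g. the run starting above corresponds to: run_bperf randwrite 4096 128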
00:33:11.194 [2024-07-13 08:20:02.896832] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2105240 ] 00:33:11.194 EAL: No free 2048 kB hugepages reported on node 1 00:33:11.452 [2024-07-13 08:20:02.962767] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:11.452 [2024-07-13 08:20:03.058171] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:11.452 08:20:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:11.452 08:20:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:33:11.452 08:20:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:33:11.452 08:20:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:11.452 08:20:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:12.017 08:20:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:12.017 08:20:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:12.294 nvme0n1 00:33:12.294 08:20:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:12.294 08:20:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:12.551 Running I/O for 2 seconds... 
00:33:14.445 00:33:14.445 Latency(us) 00:33:14.445 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:14.445 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:14.445 nvme0n1 : 2.00 20925.45 81.74 0.00 0.00 6106.66 2754.94 15922.82 00:33:14.445 =================================================================================================================== 00:33:14.445 Total : 20925.45 81.74 0.00 0.00 6106.66 2754.94 15922.82 00:33:14.445 0 00:33:14.445 08:20:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:33:14.445 08:20:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:33:14.445 08:20:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:33:14.445 08:20:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:33:14.445 | select(.opcode=="crc32c") 00:33:14.445 | "\(.module_name) \(.executed)"' 00:33:14.445 08:20:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:14.702 08:20:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:33:14.702 08:20:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:33:14.702 08:20:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:33:14.702 08:20:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:14.702 08:20:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2105240 00:33:14.702 08:20:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 2105240 ']' 00:33:14.702 08:20:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 2105240 00:33:14.702 08:20:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:33:14.702 08:20:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:14.702 08:20:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2105240 00:33:14.702 08:20:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:33:14.703 08:20:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:33:14.703 08:20:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2105240' 00:33:14.703 killing process with pid 2105240 00:33:14.703 08:20:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 2105240 00:33:14.703 Received shutdown signal, test time was about 2.000000 seconds 00:33:14.703 00:33:14.703 Latency(us) 00:33:14.703 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:14.703 =================================================================================================================== 00:33:14.703 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:14.703 08:20:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 2105240 00:33:14.960 08:20:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:33:14.960 08:20:06 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:33:14.960 08:20:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:14.960 08:20:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:33:14.960 08:20:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:33:14.960 08:20:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:33:14.960 08:20:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:33:14.960 08:20:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2105769 00:33:14.960 08:20:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2105769 /var/tmp/bperf.sock 00:33:14.960 08:20:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:33:14.960 08:20:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 2105769 ']' 00:33:14.960 08:20:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:14.960 08:20:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:14.960 08:20:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:14.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:14.960 08:20:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:14.960 08:20:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:14.960 [2024-07-13 08:20:06.620679] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:33:14.960 [2024-07-13 08:20:06.620758] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2105769 ] 00:33:14.960 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:14.960 Zero copy mechanism will not be used. 
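(The zero-copy notices around the large-block runs simply record that a 131072-byte I/O exceeds the 65536-byte zero-copy threshold, so zero-copy sends are skipped.) The MiB/s column in each latency table is just IOPS scaled by the I/O size; checking it against the 131072-byte randread table above:

    awk 'BEGIN {
      iops = 3405.23; bs = 131072                  # values from the randread 131072/qd16 run
      printf "%.2f MiB/s\n", iops * bs / 1048576   # prints 425.65, matching the table
    }'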
00:33:14.960 EAL: No free 2048 kB hugepages reported on node 1 00:33:14.960 [2024-07-13 08:20:06.679259] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:15.218 [2024-07-13 08:20:06.765079] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:15.218 08:20:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:15.218 08:20:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:33:15.218 08:20:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:33:15.218 08:20:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:15.218 08:20:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:15.476 08:20:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:15.476 08:20:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:16.041 nvme0n1 00:33:16.041 08:20:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:16.041 08:20:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:16.041 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:16.041 Zero copy mechanism will not be used. 00:33:16.041 Running I/O for 2 seconds... 
00:33:18.591 00:33:18.591 Latency(us) 00:33:18.591 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:18.591 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:33:18.591 nvme0n1 : 2.01 2925.36 365.67 0.00 0.00 5456.79 3665.16 14660.65 00:33:18.591 =================================================================================================================== 00:33:18.591 Total : 2925.36 365.67 0.00 0.00 5456.79 3665.16 14660.65 00:33:18.591 0 00:33:18.591 08:20:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:33:18.591 08:20:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:33:18.591 08:20:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:33:18.591 08:20:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:18.591 08:20:09 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:33:18.591 | select(.opcode=="crc32c") 00:33:18.591 | "\(.module_name) \(.executed)"' 00:33:18.591 08:20:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:33:18.591 08:20:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:33:18.591 08:20:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:33:18.591 08:20:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:18.591 08:20:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2105769 00:33:18.591 08:20:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 2105769 ']' 00:33:18.591 08:20:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 2105769 00:33:18.591 08:20:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:33:18.591 08:20:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:18.591 08:20:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2105769 00:33:18.591 08:20:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:33:18.591 08:20:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:33:18.591 08:20:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2105769' 00:33:18.591 killing process with pid 2105769 00:33:18.591 08:20:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 2105769 00:33:18.591 Received shutdown signal, test time was about 2.000000 seconds 00:33:18.591 00:33:18.591 Latency(us) 00:33:18.591 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:18.591 =================================================================================================================== 00:33:18.591 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:18.591 08:20:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 2105769 00:33:18.591 08:20:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 2104402 00:33:18.591 08:20:10 
00:33:18.591 08:20:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2105769
00:33:18.591 08:20:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 2105769 ']'
00:33:18.591 08:20:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 2105769
00:33:18.591 08:20:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname
00:33:18.591 08:20:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:33:18.591 08:20:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2105769
00:33:18.591 08:20:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:33:18.591 08:20:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:33:18.591 08:20:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2105769'
killing process with pid 2105769
08:20:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 2105769
Received shutdown signal, test time was about 2.000000 seconds
00:33:18.591
00:33:18.591 Latency(us)
00:33:18.591 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:18.591 ===================================================================================================================
00:33:18.591 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:33:18.591 08:20:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 2105769
00:33:18.591 08:20:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 2104402
08:20:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 2104402 ']'
08:20:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 2104402
08:20:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname
08:20:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
08:20:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2104402
08:20:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0
08:20:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
08:20:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2104402'
killing process with pid 2104402
08:20:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 2104402
08:20:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 2104402
00:33:18.850
00:33:18.850 real 0m15.356s
00:33:18.850 user 0m30.009s
00:33:18.850 sys 0m4.288s
00:33:18.850 08:20:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable
00:33:18.850 08:20:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:33:18.850 ************************************
00:33:18.850 END TEST nvmf_digest_clean
00:33:18.850 ************************************
00:33:18.850 08:20:10 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0
00:33:18.850 08:20:10 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error
00:33:18.850 08:20:10 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:33:18.850 08:20:10 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable
00:33:18.850 08:20:10 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x
00:33:18.850 ************************************
00:33:18.850 START TEST nvmf_digest_error
00:33:18.850 ************************************
00:33:18.850 08:20:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error
00:33:18.850 08:20:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc
00:33:18.850 08:20:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:33:18.850 08:20:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable
00:33:18.850 08:20:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:33:18.850 08:20:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=2106206
00:33:18.850 08:20:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc
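nvmf_digest_clean passes, and the error variant begins. It has to reconfigure the accel layer before the target finishes initializing, which is why nvmfappstart launches nvmf_tgt with --wait-for-rpc: the app opens /var/tmp/spdk.sock and holds off framework init until told to proceed. A sketch of that startup using the namespace and binary path from the log (the concluding framework_start_init is sent by the harness later, outside this excerpt's xtrace):
  ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
  # pre-init configuration (e.g. the accel_assign_opc call shown below) goes here, then:
  scripts/rpc.py framework_start_init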
00:33:18.850 08:20:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 2106206
00:33:18.850 08:20:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 2106206 ']'
00:33:18.850 08:20:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:33:18.850 08:20:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:33:18.850 08:20:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:33:18.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:33:18.851 08:20:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:33:18.851 08:20:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:33:19.109 [2024-07-13 08:20:10.622187] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization...
00:33:19.109 [2024-07-13 08:20:10.622266] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:33:19.109 EAL: No free 2048 kB hugepages reported on node 1
00:33:19.109 [2024-07-13 08:20:10.687600] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:19.109 [2024-07-13 08:20:10.776225] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:33:19.109 [2024-07-13 08:20:10.776290] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:33:19.109 [2024-07-13 08:20:10.776306] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:33:19.109 [2024-07-13 08:20:10.776320] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:33:19.109 [2024-07-13 08:20:10.776332] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
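From here the target routes the crc32c opcode to the accel 'error' module, a null bdev and a TCP listener on 10.0.0.2:4420 are created, and a second bdevperf (socket /var/tmp/bperf.sock, 4 KiB random reads at queue depth 128 for 2 seconds) is attached to it; the xtrace that follows then arms the injector. The relevant RPCs, reconstructed from that trace (rpc_cmd with no -s flag talks to the target's default /var/tmp/spdk.sock):
  # route crc32c through the error-injection module; must happen pre-init, hence --wait-for-rpc
  scripts/rpc.py accel_assign_opc -o crc32c -m error
  # keep injection disabled while the controller attaches cleanly ...
  scripts/rpc.py accel_error_inject_error -o crc32c -t disable
  # ... then corrupt the next 256 crc32c computations
  scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256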
00:33:19.109 [2024-07-13 08:20:10.776370] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:19.109 08:20:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:19.109 08:20:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:33:19.109 08:20:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:19.109 08:20:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:19.109 08:20:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:19.109 08:20:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:19.109 08:20:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:33:19.109 08:20:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.109 08:20:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:19.368 [2024-07-13 08:20:10.844965] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:33:19.368 08:20:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.368 08:20:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:33:19.368 08:20:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:33:19.368 08:20:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.368 08:20:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:19.368 null0 00:33:19.368 [2024-07-13 08:20:10.959475] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:19.368 [2024-07-13 08:20:10.983687] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:19.368 08:20:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.368 08:20:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:33:19.368 08:20:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:33:19.368 08:20:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:33:19.368 08:20:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:33:19.368 08:20:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:33:19.368 08:20:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2106342 00:33:19.368 08:20:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:33:19.368 08:20:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2106342 /var/tmp/bperf.sock 00:33:19.368 08:20:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 2106342 ']' 00:33:19.368 08:20:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:19.368 08:20:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:33:19.368 08:20:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:19.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:19.368 08:20:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:19.369 08:20:10 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:19.369 [2024-07-13 08:20:11.026945] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:33:19.369 [2024-07-13 08:20:11.027027] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2106342 ] 00:33:19.369 EAL: No free 2048 kB hugepages reported on node 1 00:33:19.369 [2024-07-13 08:20:11.086995] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:19.627 [2024-07-13 08:20:11.172001] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:19.627 08:20:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:19.627 08:20:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:33:19.627 08:20:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:19.627 08:20:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:19.885 08:20:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:33:19.885 08:20:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.885 08:20:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:19.885 08:20:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.885 08:20:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:19.885 08:20:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:20.143 nvme0n1 00:33:20.143 08:20:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:33:20.143 08:20:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:20.143 08:20:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:20.143 08:20:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:20.143 08:20:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:33:20.143 08:20:11 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:20.403 Running I/O for 2 seconds... 00:33:20.403 [2024-07-13 08:20:11.960098] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b39d0) 00:33:20.403 [2024-07-13 08:20:11.960164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12062 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.403 [2024-07-13 08:20:11.960186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:20.403 [2024-07-13 08:20:11.976120] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b39d0) 00:33:20.403 [2024-07-13 08:20:11.976151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24198 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.403 [2024-07-13 08:20:11.976184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:20.403 [2024-07-13 08:20:11.990966] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b39d0) 00:33:20.403 [2024-07-13 08:20:11.990997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:20822 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.403 [2024-07-13 08:20:11.991029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:20.403 [2024-07-13 08:20:12.002315] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b39d0) 00:33:20.403 [2024-07-13 08:20:12.002346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:25301 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.403 [2024-07-13 08:20:12.002386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:20.403 [2024-07-13 08:20:12.018285] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b39d0) 00:33:20.403 [2024-07-13 08:20:12.018328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:6849 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.403 [2024-07-13 08:20:12.018344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:20.403 [2024-07-13 08:20:12.032189] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b39d0) 00:33:20.403 [2024-07-13 08:20:12.032218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:22437 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.403 [2024-07-13 08:20:12.032249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:20.403 [2024-07-13 08:20:12.046223] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b39d0) 00:33:20.403 [2024-07-13 08:20:12.046258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:932 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:33:20.403 [2024-07-13 08:20:12.046278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:20.403 [2024-07-13 08:20:12.061714] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b39d0) 00:33:20.403 [2024-07-13 08:20:12.061744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:13989 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.403 [2024-07-13 08:20:12.061775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:20.403 [2024-07-13 08:20:12.073498] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b39d0) 00:33:20.403 [2024-07-13 08:20:12.073529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:958 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.403 [2024-07-13 08:20:12.073560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:20.403 [2024-07-13 08:20:12.087812] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b39d0) 00:33:20.404 [2024-07-13 08:20:12.087842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:12983 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.404 [2024-07-13 08:20:12.087885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:20.404 [2024-07-13 08:20:12.099504] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b39d0) 00:33:20.404 [2024-07-13 08:20:12.099532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:5508 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.404 [2024-07-13 08:20:12.099563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:20.404 [2024-07-13 08:20:12.113904] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b39d0) 00:33:20.404 [2024-07-13 08:20:12.113949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14503 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.404 [2024-07-13 08:20:12.113965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:20.404 [2024-07-13 08:20:12.128191] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b39d0) 00:33:20.404 [2024-07-13 08:20:12.128221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:21573 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.404 [2024-07-13 08:20:12.128252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:20.661 [2024-07-13 08:20:12.140373] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b39d0) 00:33:20.661 [2024-07-13 08:20:12.140404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 
nsid:1 lba:22233 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.661 [2024-07-13 08:20:12.140437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:20.661 [2024-07-13 08:20:12.153369] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b39d0) 00:33:20.661 [2024-07-13 08:20:12.153404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:19810 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.661 [2024-07-13 08:20:12.153422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:20.661 [2024-07-13 08:20:12.166507] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b39d0) 00:33:20.661 [2024-07-13 08:20:12.166542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:9205 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.661 [2024-07-13 08:20:12.166561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:20.661 [2024-07-13 08:20:12.179790] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b39d0) 00:33:20.661 [2024-07-13 08:20:12.179824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:23674 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.661 [2024-07-13 08:20:12.179842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:20.661 [2024-07-13 08:20:12.192727] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b39d0) 00:33:20.661 [2024-07-13 08:20:12.192761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:3226 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.661 [2024-07-13 08:20:12.192780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:20.661 [2024-07-13 08:20:12.206135] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b39d0) 00:33:20.661 [2024-07-13 08:20:12.206176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:23192 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.661 [2024-07-13 08:20:12.206191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:20.661 [2024-07-13 08:20:12.220289] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b39d0) 00:33:20.661 [2024-07-13 08:20:12.220318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:7184 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.661 [2024-07-13 08:20:12.220349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:20.661 [2024-07-13 08:20:12.234205] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b39d0) 00:33:20.661 [2024-07-13 08:20:12.234235] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:2155 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.661 [2024-07-13 08:20:12.234272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:20.661 [2024-07-13 08:20:12.246003] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b39d0) 00:33:20.661 [2024-07-13 08:20:12.246034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:18422 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.661 [2024-07-13 08:20:12.246051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:20.661 [2024-07-13 08:20:12.259465] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b39d0) 00:33:20.661 [2024-07-13 08:20:12.259499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:2926 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.661 [2024-07-13 08:20:12.259518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:20.661 [2024-07-13 08:20:12.273834] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b39d0) 00:33:20.661 [2024-07-13 08:20:12.273875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17137 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.661 [2024-07-13 08:20:12.273896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:20.661 [2024-07-13 08:20:12.286959] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b39d0) 00:33:20.661 [2024-07-13 08:20:12.286989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:24093 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.661 [2024-07-13 08:20:12.287006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:20.661 [2024-07-13 08:20:12.298986] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b39d0) 00:33:20.661 [2024-07-13 08:20:12.299014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:8796 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.661 [2024-07-13 08:20:12.299044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:20.661 [2024-07-13 08:20:12.313157] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b39d0) 00:33:20.661 [2024-07-13 08:20:12.313191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1823 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.661 [2024-07-13 08:20:12.313209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:20.661 [2024-07-13 08:20:12.327069] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b39d0) 00:33:20.661 
[2024-07-13 08:20:12.327097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:21243 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.661 [2024-07-13 08:20:12.327128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:20.661 [2024-07-13 08:20:12.344603] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b39d0) 00:33:20.661 [2024-07-13 08:20:12.344636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:3966 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.661 [2024-07-13 08:20:12.344654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:20.661 [2024-07-13 08:20:12.360715] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b39d0) 00:33:20.661 [2024-07-13 08:20:12.360756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22696 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.661 [2024-07-13 08:20:12.360775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:20.661 [2024-07-13 08:20:12.372262] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b39d0) 00:33:20.661 [2024-07-13 08:20:12.372291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:25555 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.661 [2024-07-13 08:20:12.372323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:20.661 [2024-07-13 08:20:12.385186] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b39d0) 00:33:20.661 [2024-07-13 08:20:12.385215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:1284 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.661 [2024-07-13 08:20:12.385248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:20.920 [2024-07-13 08:20:12.399108] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b39d0) 00:33:20.920 [2024-07-13 08:20:12.399155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:21619 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.920 [2024-07-13 08:20:12.399173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:20.920 [2024-07-13 08:20:12.414489] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b39d0) 00:33:20.920 [2024-07-13 08:20:12.414524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:14633 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.920 [2024-07-13 08:20:12.414543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:20.920 [2024-07-13 08:20:12.428803] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x6b39d0) 00:33:20.920 [2024-07-13 08:20:12.428836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:18596 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.920 [2024-07-13 08:20:12.428855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:20.920 [2024-07-13 08:20:12.443627] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b39d0) 00:33:20.920 [2024-07-13 08:20:12.443661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:5715 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.920 [2024-07-13 08:20:12.443680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:20.920 [2024-07-13 08:20:12.455465] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b39d0) 00:33:20.920 [2024-07-13 08:20:12.455499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:4299 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.920 [2024-07-13 08:20:12.455517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:20.920 [2024-07-13 08:20:12.470833] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b39d0) 00:33:20.920 [2024-07-13 08:20:12.470876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:11674 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.920 [2024-07-13 08:20:12.470897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:20.920 [2024-07-13 08:20:12.486029] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b39d0) 00:33:20.920 [2024-07-13 08:20:12.486059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:24476 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.920 [2024-07-13 08:20:12.486091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:20.920 [2024-07-13 08:20:12.498841] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b39d0) 00:33:20.920 [2024-07-13 08:20:12.498883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:9884 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.920 [2024-07-13 08:20:12.498903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:20.920 [2024-07-13 08:20:12.511456] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b39d0) 00:33:20.920 [2024-07-13 08:20:12.511486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:7753 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.920 [2024-07-13 08:20:12.511519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:20.920 [2024-07-13 08:20:12.526889] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b39d0) 00:33:20.920 [2024-07-13 08:20:12.526920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:10540 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.920 [2024-07-13 08:20:12.526937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:20.920 [2024-07-13 08:20:12.539406] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b39d0) 00:33:20.920 [2024-07-13 08:20:12.539440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8115 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.920 [2024-07-13 08:20:12.539459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:20.920 [2024-07-13 08:20:12.556221] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b39d0) 00:33:20.920 [2024-07-13 08:20:12.556256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:13829 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.920 [2024-07-13 08:20:12.556275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:20.920 [2024-07-13 08:20:12.570132] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b39d0) 00:33:20.920 [2024-07-13 08:20:12.570176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:7837 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.920 [2024-07-13 08:20:12.570192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:20.920 [2024-07-13 08:20:12.585809] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b39d0) 00:33:20.920 [2024-07-13 08:20:12.585843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:10874 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.920 [2024-07-13 08:20:12.585862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:20.920 [2024-07-13 08:20:12.597605] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b39d0) 00:33:20.920 [2024-07-13 08:20:12.597640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:5845 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.920 [2024-07-13 08:20:12.597666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:20.920 [2024-07-13 08:20:12.611104] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b39d0) 00:33:20.920 [2024-07-13 08:20:12.611134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:22308 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.920 [2024-07-13 08:20:12.611166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
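Every injected corruption in this stretch follows the same pattern: nvme_tcp reports 'data digest error on tqpair', the affected READ command is printed, and its completion carries status COMMAND TRANSIENT TRANSPORT ERROR (00/22). Because the controller was attached after bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1, each failed read is retried by the bdev layer rather than surfaced to bdevperf, so the 2-second run keeps going. One quick way to tally the injections from a saved copy of this output (file name illustrative):
  grep -c 'data digest error on tqpair' bperf.log   # should approach the injected -i 256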
00:33:20.920 [2024-07-13 08:20:12.624475] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b39d0) 00:33:20.920 [2024-07-13 08:20:12.624508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:14697 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.920 [2024-07-13 08:20:12.624526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:20.920 [2024-07-13 08:20:12.638855] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b39d0) 00:33:20.920 [2024-07-13 08:20:12.638891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:6547 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.920 [2024-07-13 08:20:12.638923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:20.920 [2024-07-13 08:20:12.651355] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b39d0) 00:33:20.921 [2024-07-13 08:20:12.651386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:24538 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:20.921 [2024-07-13 08:20:12.651419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:21.180 [2024-07-13 08:20:12.668716] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b39d0) 00:33:21.180 [2024-07-13 08:20:12.668752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:10493 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.180 [2024-07-13 08:20:12.668772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:21.180 [2024-07-13 08:20:12.686048] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b39d0) 00:33:21.180 [2024-07-13 08:20:12.686077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:23105 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.180 [2024-07-13 08:20:12.686108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:21.180 [2024-07-13 08:20:12.700817] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b39d0) 00:33:21.180 [2024-07-13 08:20:12.700849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:4595 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.180 [2024-07-13 08:20:12.700889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:21.180 [2024-07-13 08:20:12.712079] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b39d0) 00:33:21.180 [2024-07-13 08:20:12.712108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:4234 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.180 [2024-07-13 08:20:12.712139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:21.180 [2024-07-13 08:20:12.727344] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b39d0) 00:33:21.180 [2024-07-13 08:20:12.727386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:1288 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.180 [2024-07-13 08:20:12.727405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:21.180 [2024-07-13 08:20:12.742191] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b39d0) 00:33:21.180 [2024-07-13 08:20:12.742222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:15011 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.180 [2024-07-13 08:20:12.742238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:21.180 [2024-07-13 08:20:12.753230] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b39d0) 00:33:21.180 [2024-07-13 08:20:12.753265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:5256 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.180 [2024-07-13 08:20:12.753283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:21.180 [2024-07-13 08:20:12.767872] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b39d0) 00:33:21.180 [2024-07-13 08:20:12.767905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:6049 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.180 [2024-07-13 08:20:12.767938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:21.180 [2024-07-13 08:20:12.782321] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b39d0) 00:33:21.180 [2024-07-13 08:20:12.782350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:7463 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.180 [2024-07-13 08:20:12.782383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:21.180 [2024-07-13 08:20:12.795530] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b39d0) 00:33:21.180 [2024-07-13 08:20:12.795563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:18433 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.180 [2024-07-13 08:20:12.795582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:21.180 [2024-07-13 08:20:12.808794] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b39d0) 00:33:21.180 [2024-07-13 08:20:12.808827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:16816 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.180 [2024-07-13 08:20:12.808847] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:21.180 [2024-07-13 08:20:12.822848] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b39d0) 00:33:21.180 [2024-07-13 08:20:12.822888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:18120 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.180 [2024-07-13 08:20:12.822908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:21.180 [2024-07-13 08:20:12.837986] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b39d0) 00:33:21.180 [2024-07-13 08:20:12.838014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1460 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.180 [2024-07-13 08:20:12.838046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:21.180 [2024-07-13 08:20:12.849782] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b39d0) 00:33:21.180 [2024-07-13 08:20:12.849815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:4542 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.180 [2024-07-13 08:20:12.849834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:21.180 [2024-07-13 08:20:12.866088] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b39d0) 00:33:21.180 [2024-07-13 08:20:12.866117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:2873 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.180 [2024-07-13 08:20:12.866133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:21.180 [2024-07-13 08:20:12.878561] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b39d0) 00:33:21.180 [2024-07-13 08:20:12.878594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:16536 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.180 [2024-07-13 08:20:12.878613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:21.180 [2024-07-13 08:20:12.892130] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b39d0) 00:33:21.180 [2024-07-13 08:20:12.892160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:17264 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.180 [2024-07-13 08:20:12.892176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:21.180 [2024-07-13 08:20:12.907925] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b39d0) 00:33:21.180 [2024-07-13 08:20:12.907967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:1835 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.180 [2024-07-13 08:20:12.907985] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:21.440 [2024-07-13 08:20:12.923653] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b39d0) 00:33:21.440 [2024-07-13 08:20:12.923685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:20063 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.440 [2024-07-13 08:20:12.923719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:21.440 [2024-07-13 08:20:12.939041] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b39d0) 00:33:21.440 [2024-07-13 08:20:12.939070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:10906 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.440 [2024-07-13 08:20:12.939102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:21.440 [2024-07-13 08:20:12.950855] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b39d0) 00:33:21.440 [2024-07-13 08:20:12.950898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:14739 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.440 [2024-07-13 08:20:12.950918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:21.440 [2024-07-13 08:20:12.967707] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b39d0) 00:33:21.440 [2024-07-13 08:20:12.967742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:7561 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.440 [2024-07-13 08:20:12.967769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:21.440 [2024-07-13 08:20:12.980423] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b39d0) 00:33:21.440 [2024-07-13 08:20:12.980456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:13777 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.440 [2024-07-13 08:20:12.980475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:21.440 [2024-07-13 08:20:12.994351] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b39d0) 00:33:21.440 [2024-07-13 08:20:12.994395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:17225 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:21.440 [2024-07-13 08:20:12.994412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:21.440 [2024-07-13 08:20:13.006333] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x6b39d0) 00:33:21.440 [2024-07-13 08:20:13.006364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17346 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0
00:33:21.440 [2024-07-13 08:20:13.006381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[2024-07-13 08:20:13.019885 .. 08:20:13.942941: the same three-line record repeats roughly 70 more times on tqpair=(0x6b39d0) — nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done flags a data digest error, nvme_qpair.c: 243:nvme_io_qpair_print_command prints the failed READ (sqid:1, len:1, cid and lba varying), and nvme_qpair.c: 474:spdk_nvme_print_completion completes it with COMMAND TRANSIENT TRANSPORT ERROR (00/22) sqhd:0001 p:0 m:0 dnr:0]
00:33:22.481
00:33:22.481 Latency(us)
00:33:22.481 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:22.481 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:33:22.481 nvme0n1 : 2.05 18112.41 70.75 0.00 0.00 6918.86 3713.71 46991.74
00:33:22.481 ===================================================================================================================
00:33:22.481 Total : 18112.41 70.75 0.00 0.00 6918.86 3713.71 46991.74
00:33:22.481 0
00:33:22.481 08:20:13 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:33:22.481 08:20:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:33:22.481 08:20:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:33:22.481 08:20:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:33:22.481 | .driver_specific
00:33:22.481 | .nvme_error
00:33:22.481 | .status_code
00:33:22.481 | .command_transient_transport_error'
00:33:22.738 08:20:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 145 > 0 ))
00:33:22.738 08:20:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2106342
00:33:22.738 08:20:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 2106342 ']'
00:33:22.738 08:20:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 2106342
00:33:22.738 08:20:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:33:22.738 08:20:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:33:22.738 08:20:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2106342
00:33:22.738 08:20:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:33:22.738 08:20:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:33:22.738 08:20:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2106342'
killing process with pid 2106342
00:33:22.738 08:20:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 2106342
Received shutdown signal, test time was about 2.000000 seconds
00:33:22.738
00:33:22.738 Latency(us)
00:33:22.738 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:22.738 ===================================================================================================================
00:33:22.738 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:33:22.738 08:20:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 2106342
00:33:22.738 08:20:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
08:20:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
08:20:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
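The get_transient_errcount step above is worth unpacking: it is a single bdev_get_iostat RPC whose JSON is filtered down to the per-bdev NVMe error counter, and the 145 compared against 0 is the number of COMMAND TRANSIENT TRANSPORT ERROR completions accumulated during the run. A minimal standalone sketch of the same query, using the socket and bdev name from the trace (the errcount variable and the compact jq path are illustrative and equivalent to the piped filter shown above):

  # Fetch iostat for nvme0n1 over the bperf RPC socket; the nvme_error
  # counters exist because bdevperf was configured with --nvme-error-stat.
  errcount=$(scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
  # The digest-error test asserts that at least one injected CRC error
  # surfaced as a transient transport error completion.
  (( errcount > 0 ))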
08:20:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
08:20:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
08:20:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2106746
08:20:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
08:20:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2106746 /var/tmp/bperf.sock
08:20:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 2106746 ']'
08:20:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
08:20:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
08:20:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
08:20:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
08:20:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:33:22.995 [2024-07-13 08:20:14.540429] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization...
00:33:22.995 [2024-07-13 08:20:14.540520] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2106746 ]
00:33:22.995 I/O size of 131072 is greater than zero copy threshold (65536).
00:33:22.995 Zero copy mechanism will not be used.
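Note the launch pattern here: -z starts bdevperf idle so it can be configured over /var/tmp/bperf.sock before any I/O is issued, and waitforlisten polls that socket until the application answers. A rough equivalent of the wait loop, under the assumption that any cheap RPC such as rpc_get_methods serves as the liveness probe (the real waitforlisten helper lives in common/autotest_common.sh and its internals are not shown in this trace):

  # Start bdevperf idle (-z) on a private RPC socket, remember its pid.
  build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z &
  bperfpid=$!
  # Poll the UNIX-domain RPC socket until bdevperf is ready to accept RPCs.
  until scripts/rpc.py -s /var/tmp/bperf.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
  done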
00:33:22.995 EAL: No free 2048 kB hugepages reported on node 1
00:33:22.995 [2024-07-13 08:20:14.605583] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:22.995 [2024-07-13 08:20:14.691815] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:33:23.253 08:20:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
08:20:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
08:20:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
08:20:14 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:33:23.509 08:20:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
08:20:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
08:20:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
08:20:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
08:20:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
08:20:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:33:23.766 nvme0n1
08:20:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
08:20:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
08:20:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
08:20:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
08:20:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
08:20:15 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:33:23.766 I/O size of 131072 is greater than zero copy threshold (65536).
00:33:23.766 Zero copy mechanism will not be used.
00:33:23.766 Running I/O for 2 seconds...
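Condensed, the setup traced above does five things: enable per-status NVMe error counters with unlimited bdev retries, leave crc32c intact while the controller connects, attach over TCP with data digest (--ddgst) enabled, arm crc32c corruption for the run (-t corrupt -i 32, as traced), and only then kick off the workload. The same sequence as plain commands (all RPC names and arguments are copied from the trace; the long /var/jenkins/... prefixes are shortened to SPDK-tree-relative paths):

  scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  scripts/rpc.py -s /var/tmp/bperf.sock accel_error_inject_error -o crc32c -t disable
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 \
    -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  scripts/rpc.py -s /var/tmp/bperf.sock accel_error_inject_error -o crc32c -t corrupt -i 32
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests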
00:33:24.024 [2024-07-13 08:20:15.503757] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0)
00:33:24.024 [2024-07-13 08:20:15.503814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:24.024 [2024-07-13 08:20:15.503835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[2024-07-13 08:20:15.513493 .. 08:20:15.984769: the same three-line record repeats roughly 50 more times on tqpair=(0x1f7e3d0) — each injected crc32c corruption is reported as a data digest error on the READ with cid:15 (sqid:1, len:32, lba varying), and each command completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22) while sqhd steps through 0001/0021/0041/0061]
00:33:24.286 [2024-07-13 08:20:15.993483] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0)
00:33:24.286 [2024-07-13 08:20:15.993515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:24.286 [2024-07-13 08:20:15.993533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0
m:0 dnr:0 00:33:24.286 [2024-07-13 08:20:16.002420] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:24.286 [2024-07-13 08:20:16.002463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.286 [2024-07-13 08:20:16.002479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:24.286 [2024-07-13 08:20:16.011256] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:24.286 [2024-07-13 08:20:16.011298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.286 [2024-07-13 08:20:16.011314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:24.545 [2024-07-13 08:20:16.020153] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:24.545 [2024-07-13 08:20:16.020183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.545 [2024-07-13 08:20:16.020221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:24.545 [2024-07-13 08:20:16.029117] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:24.545 [2024-07-13 08:20:16.029147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.545 [2024-07-13 08:20:16.029180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:24.545 [2024-07-13 08:20:16.037911] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:24.545 [2024-07-13 08:20:16.037940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.545 [2024-07-13 08:20:16.037957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:24.545 [2024-07-13 08:20:16.046823] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:24.545 [2024-07-13 08:20:16.046855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.545 [2024-07-13 08:20:16.046883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:24.545 [2024-07-13 08:20:16.055739] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:24.545 [2024-07-13 08:20:16.055771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.545 [2024-07-13 08:20:16.055789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:24.545 [2024-07-13 08:20:16.064677] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:24.545 [2024-07-13 08:20:16.064708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.545 [2024-07-13 08:20:16.064726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:24.545 [2024-07-13 08:20:16.073592] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:24.545 [2024-07-13 08:20:16.073625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.545 [2024-07-13 08:20:16.073644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:24.545 [2024-07-13 08:20:16.082220] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:24.545 [2024-07-13 08:20:16.082253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.545 [2024-07-13 08:20:16.082271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:24.545 [2024-07-13 08:20:16.091071] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:24.545 [2024-07-13 08:20:16.091104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.545 [2024-07-13 08:20:16.091122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:24.545 [2024-07-13 08:20:16.099930] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:24.545 [2024-07-13 08:20:16.099959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.545 [2024-07-13 08:20:16.099974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:24.545 [2024-07-13 08:20:16.108734] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:24.545 [2024-07-13 08:20:16.108762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.545 [2024-07-13 08:20:16.108793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:24.545 [2024-07-13 08:20:16.117605] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:24.545 [2024-07-13 08:20:16.117638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.545 [2024-07-13 08:20:16.117656] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:24.545 [2024-07-13 08:20:16.126431] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:24.545 [2024-07-13 08:20:16.126463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.545 [2024-07-13 08:20:16.126481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:24.545 [2024-07-13 08:20:16.135293] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:24.545 [2024-07-13 08:20:16.135320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.545 [2024-07-13 08:20:16.135351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:24.545 [2024-07-13 08:20:16.144158] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:24.545 [2024-07-13 08:20:16.144185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.545 [2024-07-13 08:20:16.144216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:24.545 [2024-07-13 08:20:16.153006] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:24.546 [2024-07-13 08:20:16.153034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.546 [2024-07-13 08:20:16.153065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:24.546 [2024-07-13 08:20:16.161842] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:24.546 [2024-07-13 08:20:16.161883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.546 [2024-07-13 08:20:16.161904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:24.546 [2024-07-13 08:20:16.171130] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:24.546 [2024-07-13 08:20:16.171163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.546 [2024-07-13 08:20:16.171187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:24.546 [2024-07-13 08:20:16.180046] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:24.546 [2024-07-13 08:20:16.180086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:24.546 [2024-07-13 08:20:16.180104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:24.546 [2024-07-13 08:20:16.189504] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:24.546 [2024-07-13 08:20:16.189533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.546 [2024-07-13 08:20:16.189564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:24.546 [2024-07-13 08:20:16.198497] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:24.546 [2024-07-13 08:20:16.198525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.546 [2024-07-13 08:20:16.198541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:24.546 [2024-07-13 08:20:16.207445] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:24.546 [2024-07-13 08:20:16.207487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.546 [2024-07-13 08:20:16.207503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:24.546 [2024-07-13 08:20:16.216309] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:24.546 [2024-07-13 08:20:16.216336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.546 [2024-07-13 08:20:16.216367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:24.546 [2024-07-13 08:20:16.225244] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:24.546 [2024-07-13 08:20:16.225275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.546 [2024-07-13 08:20:16.225293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:24.546 [2024-07-13 08:20:16.234041] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:24.546 [2024-07-13 08:20:16.234069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.546 [2024-07-13 08:20:16.234100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:24.546 [2024-07-13 08:20:16.242960] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:24.546 [2024-07-13 08:20:16.242989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.546 [2024-07-13 08:20:16.243006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:24.546 [2024-07-13 08:20:16.251688] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:24.546 [2024-07-13 08:20:16.251725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.546 [2024-07-13 08:20:16.251744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:24.546 [2024-07-13 08:20:16.260534] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:24.546 [2024-07-13 08:20:16.260562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.546 [2024-07-13 08:20:16.260594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:24.546 [2024-07-13 08:20:16.269399] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:24.546 [2024-07-13 08:20:16.269429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.546 [2024-07-13 08:20:16.269461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:24.805 [2024-07-13 08:20:16.278188] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:24.805 [2024-07-13 08:20:16.278219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.805 [2024-07-13 08:20:16.278251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:24.805 [2024-07-13 08:20:16.287693] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:24.805 [2024-07-13 08:20:16.287739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.805 [2024-07-13 08:20:16.287756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:24.805 [2024-07-13 08:20:16.296655] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:24.805 [2024-07-13 08:20:16.296683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.805 [2024-07-13 08:20:16.296716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:24.805 [2024-07-13 08:20:16.305392] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:24.805 [2024-07-13 08:20:16.305418] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.805 [2024-07-13 08:20:16.305449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:24.805 [2024-07-13 08:20:16.314167] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:24.805 [2024-07-13 08:20:16.314195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.805 [2024-07-13 08:20:16.314227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:24.805 [2024-07-13 08:20:16.323023] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:24.805 [2024-07-13 08:20:16.323051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.805 [2024-07-13 08:20:16.323082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:24.805 [2024-07-13 08:20:16.331967] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:24.805 [2024-07-13 08:20:16.331996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.805 [2024-07-13 08:20:16.332012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:24.805 [2024-07-13 08:20:16.340622] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:24.805 [2024-07-13 08:20:16.340650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.805 [2024-07-13 08:20:16.340681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:24.805 [2024-07-13 08:20:16.349498] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:24.805 [2024-07-13 08:20:16.349541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.805 [2024-07-13 08:20:16.349557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:24.805 [2024-07-13 08:20:16.358269] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:24.805 [2024-07-13 08:20:16.358297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.805 [2024-07-13 08:20:16.358328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:24.805 [2024-07-13 08:20:16.367020] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 
00:33:24.805 [2024-07-13 08:20:16.367063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.805 [2024-07-13 08:20:16.367079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:24.805 [2024-07-13 08:20:16.375960] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:24.805 [2024-07-13 08:20:16.375988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.805 [2024-07-13 08:20:16.376019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:24.805 [2024-07-13 08:20:16.384827] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:24.805 [2024-07-13 08:20:16.384855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.805 [2024-07-13 08:20:16.384897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:24.805 [2024-07-13 08:20:16.393650] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:24.805 [2024-07-13 08:20:16.393693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.805 [2024-07-13 08:20:16.393709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:24.805 [2024-07-13 08:20:16.402470] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:24.805 [2024-07-13 08:20:16.402502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.805 [2024-07-13 08:20:16.402525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:24.805 [2024-07-13 08:20:16.411427] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:24.805 [2024-07-13 08:20:16.411460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.805 [2024-07-13 08:20:16.411478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:24.805 [2024-07-13 08:20:16.420223] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:24.805 [2024-07-13 08:20:16.420251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.805 [2024-07-13 08:20:16.420282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:24.805 [2024-07-13 08:20:16.428997] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:24.805 [2024-07-13 08:20:16.429039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.805 [2024-07-13 08:20:16.429054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:24.806 [2024-07-13 08:20:16.437827] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:24.806 [2024-07-13 08:20:16.437859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.806 [2024-07-13 08:20:16.437886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:24.806 [2024-07-13 08:20:16.446690] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:24.806 [2024-07-13 08:20:16.446721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.806 [2024-07-13 08:20:16.446739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:24.806 [2024-07-13 08:20:16.455536] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:24.806 [2024-07-13 08:20:16.455568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.806 [2024-07-13 08:20:16.455585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:24.806 [2024-07-13 08:20:16.464493] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:24.806 [2024-07-13 08:20:16.464521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.806 [2024-07-13 08:20:16.464553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:24.806 [2024-07-13 08:20:16.473306] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:24.806 [2024-07-13 08:20:16.473334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.806 [2024-07-13 08:20:16.473364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:24.806 [2024-07-13 08:20:16.482319] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:24.806 [2024-07-13 08:20:16.482347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.806 [2024-07-13 08:20:16.482379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:33:24.806 [2024-07-13 08:20:16.491320] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:24.806 [2024-07-13 08:20:16.491348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.806 [2024-07-13 08:20:16.491363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:24.806 [2024-07-13 08:20:16.500140] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:24.806 [2024-07-13 08:20:16.500168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.806 [2024-07-13 08:20:16.500199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:24.806 [2024-07-13 08:20:16.509344] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:24.806 [2024-07-13 08:20:16.509372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.806 [2024-07-13 08:20:16.509405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:24.806 [2024-07-13 08:20:16.518137] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:24.806 [2024-07-13 08:20:16.518183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.806 [2024-07-13 08:20:16.518200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:24.806 [2024-07-13 08:20:16.526968] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:24.806 [2024-07-13 08:20:16.526995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.806 [2024-07-13 08:20:16.527026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:24.806 [2024-07-13 08:20:16.535720] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:24.806 [2024-07-13 08:20:16.535753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:24.806 [2024-07-13 08:20:16.535786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:25.065 [2024-07-13 08:20:16.544720] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:25.065 [2024-07-13 08:20:16.544751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.065 [2024-07-13 08:20:16.544783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:25.065 [2024-07-13 08:20:16.553491] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:25.065 [2024-07-13 08:20:16.553521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.065 [2024-07-13 08:20:16.553542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:25.065 [2024-07-13 08:20:16.562426] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:25.065 [2024-07-13 08:20:16.562460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.065 [2024-07-13 08:20:16.562478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:25.065 [2024-07-13 08:20:16.571698] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:25.065 [2024-07-13 08:20:16.571732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.065 [2024-07-13 08:20:16.571752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:25.065 [2024-07-13 08:20:16.580854] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:25.065 [2024-07-13 08:20:16.580918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.065 [2024-07-13 08:20:16.580935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:25.065 [2024-07-13 08:20:16.589829] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:25.065 [2024-07-13 08:20:16.589860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.065 [2024-07-13 08:20:16.589889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:25.065 [2024-07-13 08:20:16.598563] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:25.065 [2024-07-13 08:20:16.598594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.065 [2024-07-13 08:20:16.598611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:25.065 [2024-07-13 08:20:16.607424] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:25.065 [2024-07-13 08:20:16.607468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.065 [2024-07-13 08:20:16.607483] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:25.065 [2024-07-13 08:20:16.616876] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:25.065 [2024-07-13 08:20:16.616923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.065 [2024-07-13 08:20:16.616940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:25.065 [2024-07-13 08:20:16.625848] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:25.065 [2024-07-13 08:20:16.625890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.065 [2024-07-13 08:20:16.625908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:25.065 [2024-07-13 08:20:16.634906] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:25.065 [2024-07-13 08:20:16.634940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.065 [2024-07-13 08:20:16.634958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:25.065 [2024-07-13 08:20:16.643833] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:25.065 [2024-07-13 08:20:16.643885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.065 [2024-07-13 08:20:16.643903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:25.065 [2024-07-13 08:20:16.652725] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:25.065 [2024-07-13 08:20:16.652758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.065 [2024-07-13 08:20:16.652776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:25.065 [2024-07-13 08:20:16.661529] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:25.065 [2024-07-13 08:20:16.661559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.065 [2024-07-13 08:20:16.661575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:25.065 [2024-07-13 08:20:16.670141] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:25.065 [2024-07-13 08:20:16.670194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:25.065 [2024-07-13 08:20:16.670211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:25.065 [2024-07-13 08:20:16.678897] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:25.065 [2024-07-13 08:20:16.678939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.065 [2024-07-13 08:20:16.678955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:25.066 [2024-07-13 08:20:16.687694] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:25.066 [2024-07-13 08:20:16.687734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.066 [2024-07-13 08:20:16.687750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:25.066 [2024-07-13 08:20:16.696567] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:25.066 [2024-07-13 08:20:16.696596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.066 [2024-07-13 08:20:16.696629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:25.066 [2024-07-13 08:20:16.705389] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:25.066 [2024-07-13 08:20:16.705418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.066 [2024-07-13 08:20:16.705434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:25.066 [2024-07-13 08:20:16.714689] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:25.066 [2024-07-13 08:20:16.714719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.066 [2024-07-13 08:20:16.714752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:25.066 [2024-07-13 08:20:16.723466] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:25.066 [2024-07-13 08:20:16.723495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.066 [2024-07-13 08:20:16.723526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:25.066 [2024-07-13 08:20:16.732190] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:25.066 [2024-07-13 08:20:16.732235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.066 [2024-07-13 08:20:16.732251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:25.066 [2024-07-13 08:20:16.740974] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:25.066 [2024-07-13 08:20:16.741003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.066 [2024-07-13 08:20:16.741019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:25.066 [2024-07-13 08:20:16.749743] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:25.066 [2024-07-13 08:20:16.749772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.066 [2024-07-13 08:20:16.749804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:25.066 [2024-07-13 08:20:16.758384] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:25.066 [2024-07-13 08:20:16.758413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.066 [2024-07-13 08:20:16.758429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:25.066 [2024-07-13 08:20:16.767182] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:25.066 [2024-07-13 08:20:16.767210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.066 [2024-07-13 08:20:16.767242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:25.066 [2024-07-13 08:20:16.776086] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:25.066 [2024-07-13 08:20:16.776114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.066 [2024-07-13 08:20:16.776146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:25.066 [2024-07-13 08:20:16.785192] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:25.066 [2024-07-13 08:20:16.785239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.066 [2024-07-13 08:20:16.785264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:25.066 [2024-07-13 08:20:16.794518] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:25.066 [2024-07-13 08:20:16.794553] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.066 [2024-07-13 08:20:16.794572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:25.325 [2024-07-13 08:20:16.803994] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:25.325 [2024-07-13 08:20:16.804040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.325 [2024-07-13 08:20:16.804056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:25.325 [2024-07-13 08:20:16.813405] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:25.325 [2024-07-13 08:20:16.813438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.325 [2024-07-13 08:20:16.813456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:25.325 [2024-07-13 08:20:16.822535] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:25.325 [2024-07-13 08:20:16.822568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.325 [2024-07-13 08:20:16.822586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:25.325 [2024-07-13 08:20:16.831785] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:25.325 [2024-07-13 08:20:16.831817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.325 [2024-07-13 08:20:16.831835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:25.325 [2024-07-13 08:20:16.840962] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:25.325 [2024-07-13 08:20:16.840991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.325 [2024-07-13 08:20:16.841023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:25.325 [2024-07-13 08:20:16.850351] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:25.325 [2024-07-13 08:20:16.850384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.325 [2024-07-13 08:20:16.850402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:25.325 [2024-07-13 08:20:16.859518] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 
00:33:25.325 [2024-07-13 08:20:16.859550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.325 [2024-07-13 08:20:16.859568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:25.325 [2024-07-13 08:20:16.868743] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:25.325 [2024-07-13 08:20:16.868775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.325 [2024-07-13 08:20:16.868793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:25.325 [2024-07-13 08:20:16.878098] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:25.325 [2024-07-13 08:20:16.878126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.325 [2024-07-13 08:20:16.878142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:25.325 [2024-07-13 08:20:16.887450] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:25.325 [2024-07-13 08:20:16.887482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.325 [2024-07-13 08:20:16.887499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:25.325 [2024-07-13 08:20:16.896846] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:25.325 [2024-07-13 08:20:16.896887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.325 [2024-07-13 08:20:16.896906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:25.325 [2024-07-13 08:20:16.905968] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:25.325 [2024-07-13 08:20:16.905996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.325 [2024-07-13 08:20:16.906029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:25.325 [2024-07-13 08:20:16.915350] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:25.325 [2024-07-13 08:20:16.915382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.325 [2024-07-13 08:20:16.915399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:25.325 [2024-07-13 08:20:16.924479] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:25.325 [2024-07-13 08:20:16.924522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.325 [2024-07-13 08:20:16.924538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:25.325 [2024-07-13 08:20:16.933774] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:25.325 [2024-07-13 08:20:16.933805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.325 [2024-07-13 08:20:16.933823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:25.325 [2024-07-13 08:20:16.943115] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:25.325 [2024-07-13 08:20:16.943158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.325 [2024-07-13 08:20:16.943178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:25.325 [2024-07-13 08:20:16.952418] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:25.325 [2024-07-13 08:20:16.952450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.325 [2024-07-13 08:20:16.952468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:25.325 [2024-07-13 08:20:16.961639] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:25.325 [2024-07-13 08:20:16.961670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.325 [2024-07-13 08:20:16.961689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:25.325 [2024-07-13 08:20:16.970982] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:25.325 [2024-07-13 08:20:16.971010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.325 [2024-07-13 08:20:16.971042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:25.325 [2024-07-13 08:20:16.980291] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:25.325 [2024-07-13 08:20:16.980323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.325 [2024-07-13 08:20:16.980341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:33:25.325 [2024-07-13 08:20:16.989597] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:25.325 [2024-07-13 08:20:16.989628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.325 [2024-07-13 08:20:16.989647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:25.325 [2024-07-13 08:20:16.998837] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:25.325 [2024-07-13 08:20:16.998876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.325 [2024-07-13 08:20:16.998896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:25.326 [2024-07-13 08:20:17.008219] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:25.326 [2024-07-13 08:20:17.008252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.326 [2024-07-13 08:20:17.008270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:25.326 [2024-07-13 08:20:17.017221] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:25.326 [2024-07-13 08:20:17.017263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.326 [2024-07-13 08:20:17.017278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:25.326 [2024-07-13 08:20:17.026200] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:25.326 [2024-07-13 08:20:17.026247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.326 [2024-07-13 08:20:17.026264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:25.326 [2024-07-13 08:20:17.035172] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:25.326 [2024-07-13 08:20:17.035204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.326 [2024-07-13 08:20:17.035222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:25.326 [2024-07-13 08:20:17.044255] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:25.326 [2024-07-13 08:20:17.044284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.326 [2024-07-13 08:20:17.044300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:25.326 [2024-07-13 08:20:17.053317] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:25.326 [2024-07-13 08:20:17.053356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.326 [2024-07-13 08:20:17.053386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:25.585 [2024-07-13 08:20:17.062161] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:25.585 [2024-07-13 08:20:17.062191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.585 [2024-07-13 08:20:17.062223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:25.585 [2024-07-13 08:20:17.071014] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:25.585 [2024-07-13 08:20:17.071043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.585 [2024-07-13 08:20:17.071074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:25.585 [2024-07-13 08:20:17.079937] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:25.585 [2024-07-13 08:20:17.079966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.585 [2024-07-13 08:20:17.079983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:25.585 [2024-07-13 08:20:17.088585] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:25.585 [2024-07-13 08:20:17.088614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.585 [2024-07-13 08:20:17.088630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:25.585 [2024-07-13 08:20:17.097488] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:25.585 [2024-07-13 08:20:17.097520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.585 [2024-07-13 08:20:17.097538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:25.585 [2024-07-13 08:20:17.106594] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:25.585 [2024-07-13 08:20:17.106626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.585 [2024-07-13 08:20:17.106644] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:25.585 [2024-07-13 08:20:17.115511] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:25.585 [2024-07-13 08:20:17.115539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.585 [2024-07-13 08:20:17.115570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:25.585 [2024-07-13 08:20:17.124291] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:25.585 [2024-07-13 08:20:17.124336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.585 [2024-07-13 08:20:17.124352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:25.585 [2024-07-13 08:20:17.133185] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:25.585 [2024-07-13 08:20:17.133213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.585 [2024-07-13 08:20:17.133244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:25.585 [2024-07-13 08:20:17.141859] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:25.585 [2024-07-13 08:20:17.141899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.585 [2024-07-13 08:20:17.141917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:25.585 [2024-07-13 08:20:17.150580] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:25.585 [2024-07-13 08:20:17.150623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.585 [2024-07-13 08:20:17.150639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:25.585 [2024-07-13 08:20:17.159459] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:25.585 [2024-07-13 08:20:17.159491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.585 [2024-07-13 08:20:17.159509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:25.585 [2024-07-13 08:20:17.168491] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:25.585 [2024-07-13 08:20:17.168523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
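Note that every failure in this phase lands on the same queue pair (tqpair 0x1f7e3d0, qid:1) while the LBAs vary, consistent with a single-connection random-read job. Assuming the console output were saved to a file (build.log is a hypothetical capture, not something this pipeline writes), the failed LBAs could be listed like this:

  # Hypothetical offline check: list each failed READ's LBA once, in numeric order.
  grep -o 'lba:[0-9]*' build.log | sort -t: -k2 -un | head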
00:33:25.585 [2024-07-13 08:20:17.168540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:25.585 [2024-07-13 08:20:17.177742] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:25.585 [2024-07-13 08:20:17.177775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.585 [2024-07-13 08:20:17.177799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:25.585 [2024-07-13 08:20:17.187115] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:25.585 [2024-07-13 08:20:17.187159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.585 [2024-07-13 08:20:17.187178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:25.585 [2024-07-13 08:20:17.196220] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:25.585 [2024-07-13 08:20:17.196247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.585 [2024-07-13 08:20:17.196279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:25.585 [2024-07-13 08:20:17.205244] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:25.585 [2024-07-13 08:20:17.205286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.585 [2024-07-13 08:20:17.205301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:25.585 [2024-07-13 08:20:17.214254] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:25.585 [2024-07-13 08:20:17.214282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.585 [2024-07-13 08:20:17.214298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:25.585 [2024-07-13 08:20:17.223106] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:25.585 [2024-07-13 08:20:17.223134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.585 [2024-07-13 08:20:17.223166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:25.585 [2024-07-13 08:20:17.231824] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:25.585 [2024-07-13 08:20:17.231851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.585 [2024-07-13 08:20:17.231892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:25.585 [2024-07-13 08:20:17.240671] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:25.585 [2024-07-13 08:20:17.240702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.585 [2024-07-13 08:20:17.240720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:25.585 [2024-07-13 08:20:17.249628] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:25.585 [2024-07-13 08:20:17.249656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.585 [2024-07-13 08:20:17.249687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:25.585 [2024-07-13 08:20:17.258424] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:25.585 [2024-07-13 08:20:17.258452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.585 [2024-07-13 08:20:17.258468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:25.585 [2024-07-13 08:20:17.267224] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:25.585 [2024-07-13 08:20:17.267251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.585 [2024-07-13 08:20:17.267267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:25.585 [2024-07-13 08:20:17.276213] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:25.585 [2024-07-13 08:20:17.276242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.585 [2024-07-13 08:20:17.276258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:25.585 [2024-07-13 08:20:17.285607] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:25.585 [2024-07-13 08:20:17.285636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.585 [2024-07-13 08:20:17.285667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:25.585 [2024-07-13 08:20:17.294429] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:25.585 [2024-07-13 08:20:17.294458] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.585 [2024-07-13 08:20:17.294489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:25.585 [2024-07-13 08:20:17.303495] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:25.585 [2024-07-13 08:20:17.303527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.585 [2024-07-13 08:20:17.303545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:25.586 [2024-07-13 08:20:17.312190] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:25.586 [2024-07-13 08:20:17.312232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.586 [2024-07-13 08:20:17.312247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:25.845 [2024-07-13 08:20:17.321126] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:25.845 [2024-07-13 08:20:17.321172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.845 [2024-07-13 08:20:17.321189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:25.845 [2024-07-13 08:20:17.330740] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:25.845 [2024-07-13 08:20:17.330772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.845 [2024-07-13 08:20:17.330793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:25.845 [2024-07-13 08:20:17.341205] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:25.845 [2024-07-13 08:20:17.341234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.845 [2024-07-13 08:20:17.341251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:25.845 [2024-07-13 08:20:17.351201] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:25.845 [2024-07-13 08:20:17.351231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.845 [2024-07-13 08:20:17.351264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:25.845 [2024-07-13 08:20:17.361422] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 
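Each corrupted completion contributes one COMMAND TRANSIENT TRANSPORT ERROR record, and the harness shortly reads the matching counter back from the bdev (the (( 220 > 0 )) check after the latency summary below). A rough cross-check against the same hypothetical build.log capture; the grep total spans every phase in the file, so it only lines up per test phase:

  # Hypothetical offline tally of injected transient-transport-error completions.
  grep -c 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' build.log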
00:33:25.845 [2024-07-13 08:20:17.361451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.845 [2024-07-13 08:20:17.361484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:25.845 [2024-07-13 08:20:17.371163] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:25.845 [2024-07-13 08:20:17.371193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.845 [2024-07-13 08:20:17.371226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:25.845 [2024-07-13 08:20:17.381246] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:25.845 [2024-07-13 08:20:17.381277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.845 [2024-07-13 08:20:17.381309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:25.845 [2024-07-13 08:20:17.391242] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:25.845 [2024-07-13 08:20:17.391272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.845 [2024-07-13 08:20:17.391305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:25.845 [2024-07-13 08:20:17.400428] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:25.845 [2024-07-13 08:20:17.400457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.845 [2024-07-13 08:20:17.400489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:25.845 [2024-07-13 08:20:17.410987] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:25.845 [2024-07-13 08:20:17.411018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.845 [2024-07-13 08:20:17.411035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:25.845 [2024-07-13 08:20:17.421026] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:25.845 [2024-07-13 08:20:17.421061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.845 [2024-07-13 08:20:17.421079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:25.845 [2024-07-13 08:20:17.431156] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:25.845 [2024-07-13 08:20:17.431186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.845 [2024-07-13 08:20:17.431216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:25.845 [2024-07-13 08:20:17.441237] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:25.845 [2024-07-13 08:20:17.441281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.845 [2024-07-13 08:20:17.441298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:25.845 [2024-07-13 08:20:17.451114] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:25.845 [2024-07-13 08:20:17.451159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.845 [2024-07-13 08:20:17.451175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:25.845 [2024-07-13 08:20:17.461316] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:25.845 [2024-07-13 08:20:17.461345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.845 [2024-07-13 08:20:17.461361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:25.845 [2024-07-13 08:20:17.471179] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:25.845 [2024-07-13 08:20:17.471209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.845 [2024-07-13 08:20:17.471241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:25.845 [2024-07-13 08:20:17.480347] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:25.845 [2024-07-13 08:20:17.480376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.845 [2024-07-13 08:20:17.480392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:25.845 [2024-07-13 08:20:17.490165] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f7e3d0) 00:33:25.845 [2024-07-13 08:20:17.490196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:25.845 [2024-07-13 08:20:17.490213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0
00:33:25.845
00:33:25.845 Latency(us)
00:33:25.845 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:25.845 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:33:25.845 nvme0n1 : 2.00 3416.95 427.12 0.00 0.00 4677.73 4174.89 10485.76
00:33:25.845 ===================================================================================================================
00:33:25.845 Total : 3416.95 427.12 0.00 0.00 4677.73 4174.89 10485.76
00:33:25.845 0
00:33:25.845 08:20:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:33:25.845 08:20:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:33:25.845 08:20:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:33:25.845 08:20:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:33:25.845 | .driver_specific
00:33:25.845 | .nvme_error
00:33:25.845 | .status_code
00:33:25.845 | .command_transient_transport_error'
00:33:26.104 08:20:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 220 > 0 ))
00:33:26.104 08:20:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2106746
00:33:26.104 08:20:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 2106746 ']'
00:33:26.104 08:20:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 2106746
00:33:26.104 08:20:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:33:26.104 08:20:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:33:26.104 08:20:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2106746
00:33:26.104 08:20:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:33:26.104 08:20:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:33:26.104 08:20:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2106746'
00:33:26.104 killing process with pid 2106746
00:33:26.104 08:20:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 2106746
00:33:26.104 Received shutdown signal, test time was about 2.000000 seconds
00:33:26.104
00:33:26.104 Latency(us)
00:33:26.104 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:26.104 ===================================================================================================================
00:33:26.104 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:33:26.104 08:20:17 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 2106746
00:33:26.362 08:20:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:33:26.362 08:20:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:33:26.362 08:20:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:33:26.362 08:20:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:33:26.362 08:20:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
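The traced commands above show how the pass/fail decision is made: bdev_nvme_set_options --nvme-error-stat makes the bdev_nvme driver keep per-status-code error counters, and bdev_get_iostat exposes them under driver_specific.nvme_error. A sketch of get_transient_errcount reconstructed from those xtrace lines (treat the details as approximate, not the canonical digest.sh source):

  get_transient_errcount() {
    local bdev=$1
    # Query the bdevperf instance over its RPC socket and extract the counter
    # for NVMe status 0x22, command transient transport error.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_get_iostat -b "$bdev" |
      jq -r '.bdevs[0]
        | .driver_specific
        | .nvme_error
        | .status_code
        | .command_transient_transport_error'
  }
  # The randread phase passed because the counter was positive, here 220:
  # (( $(get_transient_errcount nvme0n1) > 0 ))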
00:33:26.362 08:20:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2107157
00:33:26.362 08:20:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:33:26.362 08:20:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2107157 /var/tmp/bperf.sock
00:33:26.362 08:20:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 2107157 ']'
00:33:26.362 08:20:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:33:26.362 08:20:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:33:26.362 08:20:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:33:26.362 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:33:26.362 08:20:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:33:26.362 08:20:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:33:26.362 [2024-07-13 08:20:18.046374] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization...
00:33:26.362 [2024-07-13 08:20:18.046456] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2107157 ]
00:33:26.362 EAL: No free 2048 kB hugepages reported on node 1
00:33:26.621 [2024-07-13 08:20:18.108513] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:26.621 [2024-07-13 08:20:18.198520] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:33:26.621 08:20:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:33:26.621 08:20:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:33:26.621 08:20:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:33:26.621 08:20:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:33:26.880 08:20:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:33:26.880 08:20:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:26.880 08:20:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:33:26.880 08:20:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:26.880 08:20:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:33:26.880 08:20:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a
10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:27.447 nvme0n1 00:33:27.447 08:20:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:33:27.447 08:20:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:27.447 08:20:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:27.447 08:20:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:27.447 08:20:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:33:27.447 08:20:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:27.447 Running I/O for 2 seconds... 00:33:27.447 [2024-07-13 08:20:19.141842] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc25990) with pdu=0x2000190ed0b0 00:33:27.447 [2024-07-13 08:20:19.142993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:12656 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.447 [2024-07-13 08:20:19.143029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:27.447 [2024-07-13 08:20:19.154054] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc25990) with pdu=0x2000190f96f8 00:33:27.447 [2024-07-13 08:20:19.155140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:24411 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.447 [2024-07-13 08:20:19.155170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:33:27.447 [2024-07-13 08:20:19.167293] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc25990) with pdu=0x2000190e23b8 00:33:27.447 [2024-07-13 08:20:19.168534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:9443 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.447 [2024-07-13 08:20:19.168566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:33:27.706 [2024-07-13 08:20:19.181178] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc25990) with pdu=0x2000190e3060 00:33:27.706 [2024-07-13 08:20:19.182786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:2049 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.706 [2024-07-13 08:20:19.182818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:33:27.706 [2024-07-13 08:20:19.194644] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc25990) with pdu=0x2000190e6fa8 00:33:27.706 [2024-07-13 08:20:19.196272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:14645 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.706 [2024-07-13 08:20:19.196304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:33:27.706 [2024-07-13 08:20:19.207783] tcp.c:2067:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0xc25990) with pdu=0x2000190f96f8 00:33:27.706 [2024-07-13 08:20:19.209532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:3088 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.706 [2024-07-13 08:20:19.209564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:33:27.706 [2024-07-13 08:20:19.221056] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc25990) with pdu=0x2000190eff18 00:33:27.706 [2024-07-13 08:20:19.223057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:19675 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.706 [2024-07-13 08:20:19.223086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:33:27.706 [2024-07-13 08:20:19.234181] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc25990) with pdu=0x2000190ebfd0 00:33:27.706 [2024-07-13 08:20:19.236281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18223 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.706 [2024-07-13 08:20:19.236312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:27.706 [2024-07-13 08:20:19.243127] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc25990) with pdu=0x2000190ea248 00:33:27.706 [2024-07-13 08:20:19.244099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:1142 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.706 [2024-07-13 08:20:19.244127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:33:27.706 [2024-07-13 08:20:19.256135] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc25990) with pdu=0x2000190df550 00:33:27.706 [2024-07-13 08:20:19.257095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:11297 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.706 [2024-07-13 08:20:19.257126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:27.706 [2024-07-13 08:20:19.268861] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc25990) with pdu=0x2000190e0630 00:33:27.706 [2024-07-13 08:20:19.269808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:5756 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.706 [2024-07-13 08:20:19.269844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:27.706 [2024-07-13 08:20:19.283063] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc25990) with pdu=0x2000190f1430 00:33:27.706 [2024-07-13 08:20:19.284643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:11771 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.706 [2024-07-13 08:20:19.284674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:27.706 [2024-07-13 08:20:19.296271] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc25990) with pdu=0x2000190eaef0 00:33:27.706 [2024-07-13 08:20:19.298066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:14826 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.706 [2024-07-13 08:20:19.298094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:27.706 [2024-07-13 08:20:19.309495] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc25990) with pdu=0x2000190e88f8 00:33:27.706 [2024-07-13 08:20:19.311389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:21584 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.706 [2024-07-13 08:20:19.311420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:27.706 [2024-07-13 08:20:19.322833] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc25990) with pdu=0x2000190fb048 00:33:27.706 [2024-07-13 08:20:19.324932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:17397 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.706 [2024-07-13 08:20:19.324960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:27.706 [2024-07-13 08:20:19.331774] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc25990) with pdu=0x2000190f2510 00:33:27.706 [2024-07-13 08:20:19.332667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:19040 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.706 [2024-07-13 08:20:19.332698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:33:27.706 [2024-07-13 08:20:19.344975] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc25990) with pdu=0x2000190f6020 00:33:27.706 [2024-07-13 08:20:19.346083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:22710 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.706 [2024-07-13 08:20:19.346111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:33:27.706 [2024-07-13 08:20:19.358297] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc25990) with pdu=0x2000190ecc78 00:33:27.706 [2024-07-13 08:20:19.359524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:14669 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.706 [2024-07-13 08:20:19.359555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:33:27.706 [2024-07-13 08:20:19.370267] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc25990) with pdu=0x2000190f9f68 00:33:27.706 [2024-07-13 08:20:19.371486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:10977 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.706 [2024-07-13 08:20:19.371516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:33:27.706 [2024-07-13 
08:20:19.384349] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc25990) with pdu=0x2000190e5ec8 00:33:27.706 [2024-07-13 08:20:19.385803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:24902 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.706 [2024-07-13 08:20:19.385839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:27.706 [2024-07-13 08:20:19.397395] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc25990) with pdu=0x2000190edd58 00:33:27.706 [2024-07-13 08:20:19.398840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:1647 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.706 [2024-07-13 08:20:19.398875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:27.706 [2024-07-13 08:20:19.409086] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc25990) with pdu=0x2000190fb8b8 00:33:27.706 [2024-07-13 08:20:19.410644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:11706 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.706 [2024-07-13 08:20:19.410675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:33:27.706 [2024-07-13 08:20:19.422295] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc25990) with pdu=0x2000190eee38 00:33:27.706 [2024-07-13 08:20:19.424125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:4688 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.706 [2024-07-13 08:20:19.424152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:33:27.706 [2024-07-13 08:20:19.435535] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc25990) with pdu=0x2000190e73e0 00:33:27.706 [2024-07-13 08:20:19.437582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:4231 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.706 [2024-07-13 08:20:19.437613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:33:27.966 [2024-07-13 08:20:19.449147] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc25990) with pdu=0x2000190f7100 00:33:27.966 [2024-07-13 08:20:19.451242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:16668 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.966 [2024-07-13 08:20:19.451273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:27.966 [2024-07-13 08:20:19.458065] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc25990) with pdu=0x2000190f20d8 00:33:27.966 [2024-07-13 08:20:19.459063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:21546 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:27.966 [2024-07-13 08:20:19.459092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 
00:33:27.966 [2024-07-13 08:20:19.470947] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc25990) with pdu=0x2000190feb58
00:33:27.966 [2024-07-13 08:20:19.471823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:24767 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:27.966 [2024-07-13 08:20:19.471854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
[... the same three-line pattern repeats, with varying pdu/cid/lba values, for every further injected WRITE from 08:20:19.483 through 08:20:21.105: a data digest error reported by tcp.c:2067:data_crc32_calc_done on tqpair=(0xc25990), the offending WRITE command, and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion ...]
00:33:29.525 [2024-07-13 08:20:21.117070] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc25990) with pdu=0x2000190f7538
00:33:29.525 [2024-07-13 08:20:21.117980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:14684 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:29.525 [2024-07-13 08:20:21.118011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:33:29.525
00:33:29.525 Latency(us)
00:33:29.525 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:29.525 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:33:29.525 nvme0n1 : 2.00 20096.98 78.50 0.00 0.00 6362.13 2621.44 17767.54
00:33:29.525 ===================================================================================================================
00:33:29.525 Total : 20096.98 78.50 0.00 0.00 6362.13 2621.44 17767.54
00:33:29.525 0
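The summary line can be sanity-checked by hand before the harness reads the error counters back: at queue depth 128, Little's law gives in-flight I/O of roughly IOPS times average latency, and throughput in MiB/s is simply IOPS times the 4096-byte I/O size. A quick cross-check with the reported numbers (values copied from the table above; this snippet is illustrative and not produced by the test):

  awk 'BEGIN {
      iops = 20096.98; avg_us = 6362.13          # from the nvme0n1 summary line above
      printf "in-flight  ~ %.1f   (configured queue depth: 128)\n", iops * avg_us / 1e6
      printf "throughput ~ %.2f MiB/s (reported: 78.50)\n", iops * 4096 / 1048576
  }'

Both come out consistent: about 127.9 commands in flight against a configured depth of 128, and 78.50 MiB/s exactly as reported.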
nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:33:29.525 08:20:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:33:29.525 | .driver_specific 00:33:29.525 | .nvme_error 00:33:29.525 | .status_code 00:33:29.525 | .command_transient_transport_error' 00:33:29.525 08:20:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:33:29.783 08:20:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 157 > 0 )) 00:33:29.783 08:20:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2107157 00:33:29.783 08:20:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 2107157 ']' 00:33:29.783 08:20:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 2107157 00:33:29.783 08:20:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:33:29.783 08:20:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:29.783 08:20:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2107157 00:33:29.783 08:20:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:33:29.783 08:20:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:33:29.783 08:20:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2107157' 00:33:29.783 killing process with pid 2107157 00:33:29.783 08:20:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 2107157 00:33:29.783 Received shutdown signal, test time was about 2.000000 seconds 00:33:29.783 00:33:29.783 Latency(us) 00:33:29.783 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:29.783 =================================================================================================================== 00:33:29.783 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:29.783 08:20:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 2107157 00:33:30.041 08:20:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:33:30.041 08:20:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:33:30.041 08:20:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:33:30.041 08:20:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:33:30.041 08:20:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:33:30.041 08:20:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2107566 00:33:30.041 08:20:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:33:30.041 08:20:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2107566 /var/tmp/bperf.sock 00:33:30.041 08:20:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 2107566 ']' 00:33:30.041 08:20:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:30.041 08:20:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:30.041 08:20:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:30.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:30.041 08:20:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:30.041 08:20:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:30.041 [2024-07-13 08:20:21.709666] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:33:30.041 [2024-07-13 08:20:21.709765] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2107566 ] 00:33:30.041 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:30.041 Zero copy mechanism will not be used. 00:33:30.041 EAL: No free 2048 kB hugepages reported on node 1 00:33:30.041 [2024-07-13 08:20:21.767628] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:30.299 [2024-07-13 08:20:21.853347] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:30.299 08:20:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:30.299 08:20:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:33:30.299 08:20:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:30.299 08:20:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:30.557 08:20:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:33:30.557 08:20:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:30.557 08:20:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:30.557 08:20:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:30.557 08:20:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:30.557 08:20:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:31.123 nvme0n1 00:33:31.123 08:20:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:33:31.123 08:20:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:31.123 08:20:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:31.123 08:20:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:31.123 08:20:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:33:31.123 08:20:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:31.123 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:31.123 Zero copy mechanism will not be used. 00:33:31.123 Running I/O for 2 seconds... 00:33:31.123 [2024-07-13 08:20:22.826414] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc25cd0) with pdu=0x2000190fef90 00:33:31.123 [2024-07-13 08:20:22.826814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.123 [2024-07-13 08:20:22.826859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:31.123 [2024-07-13 08:20:22.839622] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc25cd0) with pdu=0x2000190fef90 00:33:31.123 [2024-07-13 08:20:22.840029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.123 [2024-07-13 08:20:22.840075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:31.123 [2024-07-13 08:20:22.853564] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc25cd0) with pdu=0x2000190fef90 00:33:31.123 [2024-07-13 08:20:22.853983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.123 [2024-07-13 08:20:22.854020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:31.381 [2024-07-13 08:20:22.866634] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc25cd0) with pdu=0x2000190fef90 00:33:31.381 [2024-07-13 08:20:22.867033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.381 [2024-07-13 08:20:22.867078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:31.381 [2024-07-13 08:20:22.878660] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc25cd0) with pdu=0x2000190fef90 00:33:31.381 [2024-07-13 08:20:22.879058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.381 [2024-07-13 08:20:22.879087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:31.381 [2024-07-13 08:20:22.891547] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc25cd0) with pdu=0x2000190fef90 00:33:31.381 [2024-07-13 08:20:22.891940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.381 [2024-07-13 08:20:22.891982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:31.381 [2024-07-13 08:20:22.906600] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc25cd0) with pdu=0x2000190fef90 00:33:31.381 [2024-07-13 08:20:22.907001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.381 [2024-07-13 08:20:22.907031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:31.381 [2024-07-13 08:20:22.919201] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc25cd0) with pdu=0x2000190fef90 00:33:31.381 [2024-07-13 08:20:22.919579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.381 [2024-07-13 08:20:22.919614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:31.381 [2024-07-13 08:20:22.930672] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc25cd0) with pdu=0x2000190fef90 00:33:31.381 [2024-07-13 08:20:22.931051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.381 [2024-07-13 08:20:22.931082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:31.381 [2024-07-13 08:20:22.943326] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc25cd0) with pdu=0x2000190fef90 00:33:31.381 [2024-07-13 08:20:22.943700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.381 [2024-07-13 08:20:22.943735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:31.381 [2024-07-13 08:20:22.955665] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc25cd0) with pdu=0x2000190fef90 00:33:31.381 [2024-07-13 08:20:22.956044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.381 [2024-07-13 08:20:22.956088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:31.381 [2024-07-13 08:20:22.967258] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc25cd0) with pdu=0x2000190fef90 00:33:31.381 [2024-07-13 08:20:22.967603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.381 [2024-07-13 08:20:22.967632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:31.381 [2024-07-13 08:20:22.978839] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc25cd0) with pdu=0x2000190fef90 00:33:31.381 [2024-07-13 08:20:22.979050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.381 [2024-07-13 08:20:22.979078] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:31.381 [2024-07-13 08:20:22.990109] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc25cd0) with pdu=0x2000190fef90 00:33:31.381 [2024-07-13 08:20:22.990450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.381 [2024-07-13 08:20:22.990494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:31.381 [2024-07-13 08:20:23.001585] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc25cd0) with pdu=0x2000190fef90 00:33:31.381 [2024-07-13 08:20:23.001728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.381 [2024-07-13 08:20:23.001756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:31.381 [2024-07-13 08:20:23.013842] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc25cd0) with pdu=0x2000190fef90 00:33:31.381 [2024-07-13 08:20:23.014206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.381 [2024-07-13 08:20:23.014250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:31.381 [2024-07-13 08:20:23.025732] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc25cd0) with pdu=0x2000190fef90 00:33:31.381 [2024-07-13 08:20:23.026072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.381 [2024-07-13 08:20:23.026117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:31.381 [2024-07-13 08:20:23.037678] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc25cd0) with pdu=0x2000190fef90 00:33:31.381 [2024-07-13 08:20:23.038039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.381 [2024-07-13 08:20:23.038083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:31.381 [2024-07-13 08:20:23.048972] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc25cd0) with pdu=0x2000190fef90 00:33:31.381 [2024-07-13 08:20:23.049331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.381 [2024-07-13 08:20:23.049373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:31.381 [2024-07-13 08:20:23.060997] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc25cd0) with pdu=0x2000190fef90 00:33:31.381 [2024-07-13 08:20:23.061352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.381 [2024-07-13 08:20:23.061402] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:31.381 [2024-07-13 08:20:23.072653] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc25cd0) with pdu=0x2000190fef90 00:33:31.381 [2024-07-13 08:20:23.073008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.381 [2024-07-13 08:20:23.073039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:31.381 [2024-07-13 08:20:23.084089] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc25cd0) with pdu=0x2000190fef90 00:33:31.381 [2024-07-13 08:20:23.084421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.381 [2024-07-13 08:20:23.084449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:31.381 [2024-07-13 08:20:23.095398] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc25cd0) with pdu=0x2000190fef90 00:33:31.381 [2024-07-13 08:20:23.095746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.381 [2024-07-13 08:20:23.095793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:31.381 [2024-07-13 08:20:23.106754] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc25cd0) with pdu=0x2000190fef90 00:33:31.381 [2024-07-13 08:20:23.107115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.381 [2024-07-13 08:20:23.107143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:31.639 [2024-07-13 08:20:23.118298] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc25cd0) with pdu=0x2000190fef90 00:33:31.639 [2024-07-13 08:20:23.118664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.639 [2024-07-13 08:20:23.118693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:31.639 [2024-07-13 08:20:23.129658] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc25cd0) with pdu=0x2000190fef90 00:33:31.639 [2024-07-13 08:20:23.130026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.639 [2024-07-13 08:20:23.130056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:31.639 [2024-07-13 08:20:23.141881] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc25cd0) with pdu=0x2000190fef90 00:33:31.639 [2024-07-13 08:20:23.142241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:31.639 [2024-07-13 08:20:23.142286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:31.639 [2024-07-13 08:20:23.153204] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc25cd0) with pdu=0x2000190fef90 00:33:31.639 [2024-07-13 08:20:23.153550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.639 [2024-07-13 08:20:23.153579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:31.639 [2024-07-13 08:20:23.164515] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc25cd0) with pdu=0x2000190fef90 00:33:31.639 [2024-07-13 08:20:23.164885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.639 [2024-07-13 08:20:23.164932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:31.639 [2024-07-13 08:20:23.176472] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc25cd0) with pdu=0x2000190fef90 00:33:31.639 [2024-07-13 08:20:23.176830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.639 [2024-07-13 08:20:23.176858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:31.639 [2024-07-13 08:20:23.188313] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc25cd0) with pdu=0x2000190fef90 00:33:31.639 [2024-07-13 08:20:23.188583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.639 [2024-07-13 08:20:23.188612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:31.639 [2024-07-13 08:20:23.199265] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc25cd0) with pdu=0x2000190fef90 00:33:31.639 [2024-07-13 08:20:23.199638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.639 [2024-07-13 08:20:23.199667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:31.639 [2024-07-13 08:20:23.210846] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc25cd0) with pdu=0x2000190fef90 00:33:31.639 [2024-07-13 08:20:23.211221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.639 [2024-07-13 08:20:23.211265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:31.639 [2024-07-13 08:20:23.222646] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc25cd0) with pdu=0x2000190fef90 00:33:31.639 [2024-07-13 08:20:23.223021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.639 [2024-07-13 08:20:23.223065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:31.639 [2024-07-13 08:20:23.234712] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc25cd0) with pdu=0x2000190fef90 00:33:31.639 [2024-07-13 08:20:23.235084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.639 [2024-07-13 08:20:23.235115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:31.639 [2024-07-13 08:20:23.244963] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc25cd0) with pdu=0x2000190fef90 00:33:31.639 [2024-07-13 08:20:23.245310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.639 [2024-07-13 08:20:23.245340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:31.639 [2024-07-13 08:20:23.256251] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc25cd0) with pdu=0x2000190fef90 00:33:31.639 [2024-07-13 08:20:23.256480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.639 [2024-07-13 08:20:23.256509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:31.639 [2024-07-13 08:20:23.268332] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc25cd0) with pdu=0x2000190fef90 00:33:31.639 [2024-07-13 08:20:23.268723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.639 [2024-07-13 08:20:23.268751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:31.639 [2024-07-13 08:20:23.279720] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc25cd0) with pdu=0x2000190fef90 00:33:31.639 [2024-07-13 08:20:23.280078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.639 [2024-07-13 08:20:23.280124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:31.639 [2024-07-13 08:20:23.291524] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc25cd0) with pdu=0x2000190fef90 00:33:31.639 [2024-07-13 08:20:23.291893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.639 [2024-07-13 08:20:23.291937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:31.639 [2024-07-13 08:20:23.303744] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc25cd0) with pdu=0x2000190fef90 00:33:31.639 [2024-07-13 08:20:23.304117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.639 [2024-07-13 08:20:23.304161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:31.639 [2024-07-13 08:20:23.315309] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc25cd0) with pdu=0x2000190fef90 00:33:31.639 [2024-07-13 08:20:23.315681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.639 [2024-07-13 08:20:23.315724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:31.639 [2024-07-13 08:20:23.326528] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc25cd0) with pdu=0x2000190fef90 00:33:31.639 [2024-07-13 08:20:23.326898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.639 [2024-07-13 08:20:23.326941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:31.639 [2024-07-13 08:20:23.338379] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc25cd0) with pdu=0x2000190fef90 00:33:31.639 [2024-07-13 08:20:23.338702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.639 [2024-07-13 08:20:23.338732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:31.639 [2024-07-13 08:20:23.350074] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc25cd0) with pdu=0x2000190fef90 00:33:31.639 [2024-07-13 08:20:23.350422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.639 [2024-07-13 08:20:23.350453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:31.639 [2024-07-13 08:20:23.361963] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc25cd0) with pdu=0x2000190fef90 00:33:31.639 [2024-07-13 08:20:23.362287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.639 [2024-07-13 08:20:23.362322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:31.897 [2024-07-13 08:20:23.373228] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc25cd0) with pdu=0x2000190fef90 00:33:31.897 [2024-07-13 08:20:23.373613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.897 [2024-07-13 08:20:23.373644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:31.897 [2024-07-13 08:20:23.384494] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc25cd0) with pdu=0x2000190fef90 00:33:31.897 [2024-07-13 08:20:23.384837] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.897 [2024-07-13 08:20:23.384891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:31.897 [2024-07-13 08:20:23.395846] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc25cd0) with pdu=0x2000190fef90 00:33:31.897 [2024-07-13 08:20:23.396206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.897 [2024-07-13 08:20:23.396260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:31.897 [2024-07-13 08:20:23.407323] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc25cd0) with pdu=0x2000190fef90 00:33:31.897 [2024-07-13 08:20:23.407667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.897 [2024-07-13 08:20:23.407696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:31.897 [2024-07-13 08:20:23.419085] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc25cd0) with pdu=0x2000190fef90 00:33:31.897 [2024-07-13 08:20:23.419421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.897 [2024-07-13 08:20:23.419452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:31.897 [2024-07-13 08:20:23.430493] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc25cd0) with pdu=0x2000190fef90 00:33:31.897 [2024-07-13 08:20:23.430716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.897 [2024-07-13 08:20:23.430745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:31.897 [2024-07-13 08:20:23.442373] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc25cd0) with pdu=0x2000190fef90 00:33:31.897 [2024-07-13 08:20:23.442719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.897 [2024-07-13 08:20:23.442748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:31.897 [2024-07-13 08:20:23.453745] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc25cd0) with pdu=0x2000190fef90 00:33:31.897 [2024-07-13 08:20:23.454090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.897 [2024-07-13 08:20:23.454120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:31.897 [2024-07-13 08:20:23.465784] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc25cd0) with pdu=0x2000190fef90 00:33:31.897 
[2024-07-13 08:20:23.466142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.897 [2024-07-13 08:20:23.466172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:31.897 [2024-07-13 08:20:23.476816] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc25cd0) with pdu=0x2000190fef90 00:33:31.897 [2024-07-13 08:20:23.477195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.897 [2024-07-13 08:20:23.477240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:31.897 [2024-07-13 08:20:23.487696] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc25cd0) with pdu=0x2000190fef90 00:33:31.897 [2024-07-13 08:20:23.488028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.897 [2024-07-13 08:20:23.488057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:31.897 [2024-07-13 08:20:23.499422] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc25cd0) with pdu=0x2000190fef90 00:33:31.897 [2024-07-13 08:20:23.499743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.897 [2024-07-13 08:20:23.499773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:31.897 [2024-07-13 08:20:23.510370] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc25cd0) with pdu=0x2000190fef90 00:33:31.897 [2024-07-13 08:20:23.510724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.897 [2024-07-13 08:20:23.510768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:31.897 [2024-07-13 08:20:23.522262] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc25cd0) with pdu=0x2000190fef90 00:33:31.897 [2024-07-13 08:20:23.522622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.897 [2024-07-13 08:20:23.522651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:31.897 [2024-07-13 08:20:23.534166] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc25cd0) with pdu=0x2000190fef90 00:33:31.897 [2024-07-13 08:20:23.534512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.897 [2024-07-13 08:20:23.534541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:31.897 [2024-07-13 08:20:23.545986] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc25cd0) 
with pdu=0x2000190fef90 00:33:31.897 [2024-07-13 08:20:23.546335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.897 [2024-07-13 08:20:23.546382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:31.897 [2024-07-13 08:20:23.557017] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc25cd0) with pdu=0x2000190fef90 00:33:31.897 [2024-07-13 08:20:23.557364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.897 [2024-07-13 08:20:23.557408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:31.897 [2024-07-13 08:20:23.568235] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc25cd0) with pdu=0x2000190fef90 00:33:31.897 [2024-07-13 08:20:23.568581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.897 [2024-07-13 08:20:23.568623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:31.897 [2024-07-13 08:20:23.579986] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc25cd0) with pdu=0x2000190fef90 00:33:31.897 [2024-07-13 08:20:23.580119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.897 [2024-07-13 08:20:23.580148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:31.897 [2024-07-13 08:20:23.591324] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc25cd0) with pdu=0x2000190fef90 00:33:31.897 [2024-07-13 08:20:23.591685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.897 [2024-07-13 08:20:23.591714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:31.897 [2024-07-13 08:20:23.603137] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc25cd0) with pdu=0x2000190fef90 00:33:31.897 [2024-07-13 08:20:23.603486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.897 [2024-07-13 08:20:23.603531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:31.897 [2024-07-13 08:20:23.615108] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc25cd0) with pdu=0x2000190fef90 00:33:31.897 [2024-07-13 08:20:23.615459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.897 [2024-07-13 08:20:23.615507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:31.897 [2024-07-13 08:20:23.626834] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0xc25cd0) with pdu=0x2000190fef90 00:33:31.897 [2024-07-13 08:20:23.627189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:31.897 [2024-07-13 08:20:23.627218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:32.156 [2024-07-13 08:20:23.638523] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc25cd0) with pdu=0x2000190fef90 00:33:32.156 [2024-07-13 08:20:23.638883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.156 [2024-07-13 08:20:23.638930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:32.156 [2024-07-13 08:20:23.649120] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc25cd0) with pdu=0x2000190fef90 00:33:32.156 [2024-07-13 08:20:23.649463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.156 [2024-07-13 08:20:23.649493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:32.156 [2024-07-13 08:20:23.660144] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc25cd0) with pdu=0x2000190fef90 00:33:32.156 [2024-07-13 08:20:23.660499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.156 [2024-07-13 08:20:23.660535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:32.156 [2024-07-13 08:20:23.671597] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc25cd0) with pdu=0x2000190fef90 00:33:32.156 [2024-07-13 08:20:23.671998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.156 [2024-07-13 08:20:23.672029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:32.156 [2024-07-13 08:20:23.683093] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc25cd0) with pdu=0x2000190fef90 00:33:32.156 [2024-07-13 08:20:23.683464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.156 [2024-07-13 08:20:23.683493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:32.156 [2024-07-13 08:20:23.695122] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc25cd0) with pdu=0x2000190fef90 00:33:32.156 [2024-07-13 08:20:23.695499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.156 [2024-07-13 08:20:23.695542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:32.157 [2024-07-13 08:20:23.707080] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc25cd0) with pdu=0x2000190fef90 00:33:32.157 [2024-07-13 08:20:23.707426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.157 [2024-07-13 08:20:23.707472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:32.157 [2024-07-13 08:20:23.718261] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc25cd0) with pdu=0x2000190fef90 00:33:32.157 [2024-07-13 08:20:23.718605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.157 [2024-07-13 08:20:23.718650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:32.157 [2024-07-13 08:20:23.730001] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc25cd0) with pdu=0x2000190fef90 00:33:32.157 [2024-07-13 08:20:23.730350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.157 [2024-07-13 08:20:23.730398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:32.157 [2024-07-13 08:20:23.739750] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc25cd0) with pdu=0x2000190fef90 00:33:32.157 [2024-07-13 08:20:23.740078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.157 [2024-07-13 08:20:23.740109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:32.157 [2024-07-13 08:20:23.751441] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc25cd0) with pdu=0x2000190fef90 00:33:32.157 [2024-07-13 08:20:23.751803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.157 [2024-07-13 08:20:23.751831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:32.157 [2024-07-13 08:20:23.763098] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc25cd0) with pdu=0x2000190fef90 00:33:32.157 [2024-07-13 08:20:23.763270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.157 [2024-07-13 08:20:23.763300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:32.157 [2024-07-13 08:20:23.773903] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc25cd0) with pdu=0x2000190fef90 00:33:32.157 [2024-07-13 08:20:23.774278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.157 [2024-07-13 08:20:23.774308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:33:32.157 [2024-07-13 08:20:23.784487] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc25cd0) with pdu=0x2000190fef90 00:33:32.157 [2024-07-13 08:20:23.784920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.157 [2024-07-13 08:20:23.784950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:32.157 [2024-07-13 08:20:23.795467] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc25cd0) with pdu=0x2000190fef90 00:33:32.157 [2024-07-13 08:20:23.795793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.157 [2024-07-13 08:20:23.795823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:32.157 [2024-07-13 08:20:23.805738] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc25cd0) with pdu=0x2000190fef90 00:33:32.157 [2024-07-13 08:20:23.806150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.157 [2024-07-13 08:20:23.806181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:32.157 [2024-07-13 08:20:23.816030] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc25cd0) with pdu=0x2000190fef90 00:33:32.157 [2024-07-13 08:20:23.816356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.157 [2024-07-13 08:20:23.816385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:32.157 [2024-07-13 08:20:23.826694] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc25cd0) with pdu=0x2000190fef90 00:33:32.157 [2024-07-13 08:20:23.827076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.157 [2024-07-13 08:20:23.827104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:32.157 [2024-07-13 08:20:23.837263] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc25cd0) with pdu=0x2000190fef90 00:33:32.157 [2024-07-13 08:20:23.837644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.157 [2024-07-13 08:20:23.837675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:32.157 [2024-07-13 08:20:23.847702] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc25cd0) with pdu=0x2000190fef90 00:33:32.157 [2024-07-13 08:20:23.848069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:32.157 [2024-07-13 08:20:23.848099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 [... a long run of near-identical records trimmed: tcp.c:2067:data_crc32_calc_done keeps reporting 'Data digest error on tqpair=(0xc25cd0) with pdu=0x2000190fef90', and each affected WRITE (sqid:1 cid:15, len:32) is completed as COMMAND TRANSIENT TRANSPORT ERROR (00/22), with sqhd cycling through 0001/0021/0041/0061 ...] 00:33:33.194 [2024-07-13 08:20:24.796081] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc25cd0) with pdu=0x2000190fef90 00:33:33.194 [2024-07-13 08:20:24.796504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.194 [2024-07-13 08:20:24.796532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:33.194 [2024-07-13 08:20:24.805771] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xc25cd0) with pdu=0x2000190fef90 00:33:33.194 [2024-07-13 08:20:24.806111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:33.194 [2024-07-13 08:20:24.806140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:33.194 00:33:33.194 Latency(us) 00:33:33.195 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:33.195 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:33:33.195 nvme0n1 : 2.01 2797.61 349.70 0.00 0.00 5706.15 4126.34 14466.47 00:33:33.195 =================================================================================================================== 00:33:33.195 Total : 2797.61 349.70 0.00 0.00 5706.15 4126.34 14466.47 00:33:33.195 0 00:33:33.195 08:20:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:33:33.195 08:20:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:33:33.195 08:20:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:33:33.195 08:20:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:33:33.195 | .driver_specific 00:33:33.195 | .nvme_error 00:33:33.195 | .status_code 00:33:33.195 | .command_transient_transport_error' 00:33:33.453 08:20:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 180 > 0 )) 00:33:33.453 08:20:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2107566 00:33:33.453 08:20:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 2107566 ']' 00:33:33.453 08:20:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 2107566 00:33:33.453 08:20:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:33:33.453 08:20:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:33.453 08:20:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2107566 00:33:33.453 08:20:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:33:33.453 08:20:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:33:33.453 08:20:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2107566' 00:33:33.453 killing process with pid 2107566 00:33:33.453 08:20:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 2107566 00:33:33.453 Received shutdown signal, test time was about 2.000000 seconds 00:33:33.453 00:33:33.453 Latency(us) 00:33:33.453 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:33.453 =================================================================================================================== 00:33:33.453 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:33.453
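For reference, the transient-error check traced above boils down to one RPC call plus a jq filter. A minimal standalone sketch (socket path, rpc.py location, bdev name and jq program taken from this run; the errcount variable is illustrative, not a verbatim copy of host/digest.sh):

    #!/usr/bin/env bash
    # Sketch of the get_transient_errcount step: ask the bperf app for bdev
    # iostat over its RPC socket, then pull out the transient error counter.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    errcount=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
        jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
    # The test then simply asserts that at least one injected digest error was
    # accounted as a transient transport error, as in (( 180 > 0 )) above.
    (( errcount > 0 ))
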
08:20:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 2107566 00:33:33.710 08:20:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 2106206 00:33:33.710 08:20:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 2106206 ']' 00:33:33.710 08:20:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 2106206 00:33:33.710 08:20:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:33:33.710 08:20:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:33.710 08:20:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2106206 00:33:33.710 08:20:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:33:33.710 08:20:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:33:33.710 08:20:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2106206' 00:33:33.710 killing process with pid 2106206 00:33:33.710 08:20:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 2106206 00:33:33.710 08:20:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 2106206 00:33:33.970 00:33:33.970 real 0m15.036s 00:33:33.970 user 0m30.108s 00:33:33.970 sys 0m3.906s 00:33:33.970 08:20:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:33.970 08:20:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:33.970 ************************************ 00:33:33.970 END TEST nvmf_digest_error 00:33:33.970 ************************************ 00:33:33.970 08:20:25 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:33:33.970 08:20:25 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:33:33.970 08:20:25 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:33:33.970 08:20:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:33.970 08:20:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:33:33.970 08:20:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:33.970 08:20:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:33:33.970 08:20:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:33.970 08:20:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:33.970 rmmod nvme_tcp 00:33:33.970 rmmod nvme_fabrics 00:33:33.970 rmmod nvme_keyring 00:33:33.970 08:20:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:34.228 08:20:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:33:34.229 08:20:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:33:34.229 08:20:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 2106206 ']' 00:33:34.229 08:20:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 2106206 00:33:34.229 08:20:25 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 2106206 ']' 00:33:34.229 08:20:25 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 2106206 00:33:34.229 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (2106206) - No such process 00:33:34.229 08:20:25 nvmf_tcp.nvmf_digest --
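The guarded-kill sequence above appears twice during teardown; condensed into a function, the pattern looks roughly like the sketch below (modeled on autotest_common.sh's killprocess as traced here; the real helper additionally resolves the child pid when the target runs under sudo, which is omitted):

    # Sketch of the killprocess pattern: probe liveness before killing and
    # report cleanly when the pid has already gone away.
    killprocess() {
        local pid=$1 process_name
        [ -z "$pid" ] && return 1                            # reject an empty pid
        if kill -0 "$pid" 2>/dev/null; then                  # is the process alive?
            process_name=$(ps --no-headers -o comm= "$pid")  # e.g. reactor_0 above
            [ "$process_name" = sudo ] && return 1           # real helper kills the sudo child instead
            echo "killing process with pid $pid"
            kill "$pid"
            wait "$pid" 2>/dev/null                          # reap it if it is our child
        else
            echo "Process with pid $pid is not found"
        fi
    }
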
common/autotest_common.sh@975 -- # echo 'Process with pid 2106206 is not found' 00:33:34.229 Process with pid 2106206 is not found 00:33:34.229 08:20:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:34.229 08:20:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:34.229 08:20:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:34.229 08:20:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:34.229 08:20:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:34.229 08:20:25 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:34.229 08:20:25 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:34.229 08:20:25 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:36.130 08:20:27 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:36.130 00:33:36.130 real 0m34.683s 00:33:36.130 user 1m0.906s 00:33:36.130 sys 0m9.658s 00:33:36.130 08:20:27 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:36.130 08:20:27 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:33:36.130 ************************************ 00:33:36.130 END TEST nvmf_digest 00:33:36.130 ************************************ 00:33:36.130 08:20:27 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:33:36.130 08:20:27 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]] 00:33:36.130 08:20:27 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 0 -eq 1 ]] 00:33:36.130 08:20:27 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ phy == phy ]] 00:33:36.130 08:20:27 nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:33:36.130 08:20:27 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:33:36.130 08:20:27 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:36.130 08:20:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:36.130 ************************************ 00:33:36.130 START TEST nvmf_bdevperf 00:33:36.130 ************************************ 00:33:36.130 08:20:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:33:36.130 * Looking for test storage...
00:33:36.130 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:36.130 08:20:27 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:36.130 08:20:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:33:36.130 08:20:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:36.130 08:20:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:36.130 08:20:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:36.130 08:20:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:36.130 08:20:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:36.130 08:20:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:36.130 08:20:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:36.130 08:20:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:36.130 08:20:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:36.130 08:20:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:36.130 08:20:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:36.130 08:20:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:36.130 08:20:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:36.130 08:20:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:36.130 08:20:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:36.130 08:20:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:36.130 08:20:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:36.130 08:20:27 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:36.130 08:20:27 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:36.130 08:20:27 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:36.130 08:20:27 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:36.130 08:20:27 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:36.130 08:20:27 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:36.130 08:20:27 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:33:36.130 08:20:27 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:36.130 08:20:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:33:36.130 08:20:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:36.130 08:20:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:36.130 08:20:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:36.130 08:20:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:36.130 08:20:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:36.130 08:20:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:36.130 08:20:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:36.130 08:20:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:36.130 08:20:27 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:36.130 08:20:27 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:36.130 08:20:27 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:33:36.130 08:20:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:36.130 08:20:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:36.130 08:20:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:36.130 08:20:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:36.130 08:20:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:36.130 08:20:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:36.130 08:20:27 nvmf_tcp.nvmf_bdevperf -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:36.130 08:20:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:36.130 08:20:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:36.130 08:20:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:36.130 08:20:27 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:33:36.130 08:20:27 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:38.660 08:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:38.660 08:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:33:38.660 08:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:38.660 08:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:38.660 08:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:38.660 08:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:38.660 08:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:38.660 08:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:33:38.660 08:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:38.660 08:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:33:38.660 08:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:33:38.660 08:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:33:38.660 08:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:33:38.660 08:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:33:38.660 08:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:33:38.660 08:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:38.660 08:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:38.660 08:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:38.660 08:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:38.660 08:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:38.660 08:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:38.660 08:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:38.660 08:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:38.660 08:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:38.660 08:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:38.660 08:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:38.660 08:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:38.660 08:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:38.660 08:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:38.660 08:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:33:38.660 08:20:29 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:38.660 08:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:38.660 08:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:38.660 08:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:38.660 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:38.660 08:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:38.660 08:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:38.660 08:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:38.660 08:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:38.660 08:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:38.660 08:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:38.660 08:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:38.660 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:38.660 08:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:38.660 08:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:38.660 08:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:38.660 08:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:38.660 08:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:38.660 08:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:38.660 08:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:38.660 08:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:38.660 08:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:38.660 08:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:38.660 08:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:38.660 08:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:38.660 08:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:38.660 08:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:38.660 08:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:38.660 08:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:38.660 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:38.661 08:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:38.661 08:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:38.661 08:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:38.661 08:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:38.661 08:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:38.661 08:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:38.661 08:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:38.661 08:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:38.661 08:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:38.661 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:38.661 08:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:38.661 08:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:38.661 08:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:33:38.661 08:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:38.661 08:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:38.661 08:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:38.661 08:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:38.661 08:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:38.661 08:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:38.661 08:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:38.661 08:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:38.661 08:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:38.661 08:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:38.661 08:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:38.661 08:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:38.661 08:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:38.661 08:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:38.661 08:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:38.661 08:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:38.661 08:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:38.661 08:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:38.661 08:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:38.661 08:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:38.661 08:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:38.661 08:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:38.661 08:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:38.661 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:38.661 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.186 ms 00:33:38.661 00:33:38.661 --- 10.0.0.2 ping statistics --- 00:33:38.661 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:38.661 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:33:38.661 08:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:38.661 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:38.661 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.090 ms 00:33:38.661 00:33:38.661 --- 10.0.0.1 ping statistics --- 00:33:38.661 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:38.661 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:33:38.661 08:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:38.661 08:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:33:38.661 08:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:38.661 08:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:38.661 08:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:38.661 08:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:38.661 08:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:38.661 08:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:38.661 08:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:38.661 08:20:29 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:33:38.661 08:20:29 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:33:38.661 08:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:38.661 08:20:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:33:38.661 08:20:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:38.661 08:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=2109915 00:33:38.661 08:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:33:38.661 08:20:29 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 2109915 00:33:38.661 08:20:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 2109915 ']' 00:33:38.661 08:20:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:38.661 08:20:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:38.661 08:20:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:38.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:38.661 08:20:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:38.661 08:20:29 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:38.661 [2024-07-13 08:20:30.026591] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:33:38.661 [2024-07-13 08:20:30.026692] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:38.661 EAL: No free 2048 kB hugepages reported on node 1 00:33:38.661 [2024-07-13 08:20:30.106002] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:38.661 [2024-07-13 08:20:30.201260] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:33:38.661 [2024-07-13 08:20:30.201326] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:38.661 [2024-07-13 08:20:30.201352] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:38.661 [2024-07-13 08:20:30.201365] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:38.661 [2024-07-13 08:20:30.201377] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:38.661 [2024-07-13 08:20:30.201465] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:33:38.661 [2024-07-13 08:20:30.201523] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:33:38.661 [2024-07-13 08:20:30.201526] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:39.227 08:20:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:39.227 08:20:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:33:39.227 08:20:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:39.227 08:20:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:39.227 08:20:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:39.485 08:20:30 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:39.485 08:20:30 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:39.485 08:20:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:39.485 08:20:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:39.485 [2024-07-13 08:20:30.983070] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:39.485 08:20:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:39.485 08:20:30 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:39.486 08:20:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:39.486 08:20:30 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:39.486 Malloc0 00:33:39.486 08:20:31 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:39.486 08:20:31 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:39.486 08:20:31 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:39.486 08:20:31 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:39.486 08:20:31 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:39.486 08:20:31 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:39.486 08:20:31 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:39.486 08:20:31 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:39.486 08:20:31 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:39.486 08:20:31 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:39.486 08:20:31 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 
00:33:39.486 08:20:31 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:39.486 [2024-07-13 08:20:31.045352] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:39.486 08:20:31 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:39.486 08:20:31 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:33:39.486 08:20:31 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:33:39.486 08:20:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:33:39.486 08:20:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:33:39.486 08:20:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:33:39.486 08:20:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:33:39.486 { 00:33:39.486 "params": { 00:33:39.486 "name": "Nvme$subsystem", 00:33:39.486 "trtype": "$TEST_TRANSPORT", 00:33:39.486 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:39.486 "adrfam": "ipv4", 00:33:39.486 "trsvcid": "$NVMF_PORT", 00:33:39.486 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:39.486 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:39.486 "hdgst": ${hdgst:-false}, 00:33:39.486 "ddgst": ${ddgst:-false} 00:33:39.486 }, 00:33:39.486 "method": "bdev_nvme_attach_controller" 00:33:39.486 } 00:33:39.486 EOF 00:33:39.486 )") 00:33:39.486 08:20:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:33:39.486 08:20:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:33:39.486 08:20:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:33:39.486 08:20:31 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:33:39.486 "params": { 00:33:39.486 "name": "Nvme1", 00:33:39.486 "trtype": "tcp", 00:33:39.486 "traddr": "10.0.0.2", 00:33:39.486 "adrfam": "ipv4", 00:33:39.486 "trsvcid": "4420", 00:33:39.486 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:39.486 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:39.486 "hdgst": false, 00:33:39.486 "ddgst": false 00:33:39.486 }, 00:33:39.486 "method": "bdev_nvme_attach_controller" 00:33:39.486 }' 00:33:39.486 [2024-07-13 08:20:31.088939] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:33:39.486 [2024-07-13 08:20:31.089014] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2110065 ] 00:33:39.486 EAL: No free 2048 kB hugepages reported on node 1 00:33:39.486 [2024-07-13 08:20:31.149772] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:39.744 [2024-07-13 08:20:31.241852] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:40.003 Running I/O for 1 seconds... 
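At this point the target side is fully configured: a TCP transport, a 64 MiB Malloc bdev attached as namespace 1 of nqn.2016-06.io.spdk:cnode1, and a listener on 10.0.0.2:4420; bdevperf then consumes the generated JSON shown above (a single bdev_nvme_attach_controller call) via /dev/fd/62. The harness's rpc_cmd is a thin wrapper over SPDK's scripts/rpc.py, so the same bring-up can be reproduced by hand against the target's RPC socket (/var/tmp/spdk.sock by default). A minimal sketch, assuming a local SPDK checkout and a running nvmf_tgt, with the flags copied verbatim from the rpc_cmd calls in this log:

    # build/bin/nvmf_tgt -m 0xE &                                # target app (the harness runs it inside a netns)
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192     # TCP transport (-u is the I/O unit size)
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0        # 64 MiB RAM-backed bdev, 512-byte blocks
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The only piece this sketch omits is the namespace plumbing: in the log, nvmf_tgt runs inside cvl_0_0_ns_spdk, so every rpc_cmd ultimately executes against that netns-confined process.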
00:33:40.933 
00:33:40.933                                                                                                  Latency(us)
00:33:40.933 Device Information                                                        : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:33:40.933 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:33:40.933 	 Verification LBA range: start 0x0 length 0x4000
00:33:40.933 	 Nvme1n1              :       1.01    8570.70      33.48      0.00      0.00   14869.47    2815.62   15243.19
00:33:40.933 ===================================================================================================================
00:33:40.933 Total                                                                     :    8570.70      33.48      0.00      0.00   14869.47    2815.62   15243.19
00:33:41.191 08:20:32 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=2110321
00:33:41.191 08:20:32 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3
00:33:41.191 08:20:32 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json
00:33:41.191 08:20:32 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f
00:33:41.191 08:20:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=()
00:33:41.191 08:20:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config
00:33:41.191 08:20:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:33:41.191 08:20:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:33:41.191 {
00:33:41.191   "params": {
00:33:41.191     "name": "Nvme$subsystem",
00:33:41.191     "trtype": "$TEST_TRANSPORT",
00:33:41.191     "traddr": "$NVMF_FIRST_TARGET_IP",
00:33:41.191     "adrfam": "ipv4",
00:33:41.191     "trsvcid": "$NVMF_PORT",
00:33:41.191     "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:33:41.191     "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:33:41.191     "hdgst": ${hdgst:-false},
00:33:41.191     "ddgst": ${ddgst:-false}
00:33:41.191   },
00:33:41.191   "method": "bdev_nvme_attach_controller"
00:33:41.191 }
00:33:41.191 EOF
00:33:41.191 )")
00:33:41.191 08:20:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat
00:33:41.191 08:20:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq .
00:33:41.191 08:20:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=,
00:33:41.191 08:20:32 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:33:41.191   "params": {
00:33:41.191     "name": "Nvme1",
00:33:41.191     "trtype": "tcp",
00:33:41.191     "traddr": "10.0.0.2",
00:33:41.191     "adrfam": "ipv4",
00:33:41.191     "trsvcid": "4420",
00:33:41.191     "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:33:41.191     "hostnqn": "nqn.2016-06.io.spdk:host1",
00:33:41.191     "hdgst": false,
00:33:41.191     "ddgst": false
00:33:41.191   },
00:33:41.191   "method": "bdev_nvme_attach_controller"
00:33:41.191 }'
00:33:41.191 [2024-07-13 08:20:32.813799] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization...
00:33:41.191 [2024-07-13 08:20:32.813910] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2110321 ]
00:33:41.191 EAL: No free 2048 kB hugepages reported on node 1
00:33:41.191 [2024-07-13 08:20:32.875294] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:41.448 [2024-07-13 08:20:32.960790] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:33:41.448 Running I/O for 15 seconds...
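The run-1 table above is worth a quick arithmetic cross-check when scanning logs like this: throughput should be IOPS times the 4096-byte I/O size, and with queue depth 128 the average latency should follow from Little's law. A shell-level sanity check (plain bash and bc assumed available):

    # verify the MiB/s column: IOPS * 4096 B / 2^20
    echo "scale=3; 8570.70 * 4096 / 1048576" | bc    # 33.479 -> matches the reported 33.48 MiB/s
    # verify the average latency via Little's law: qd / IOPS, in microseconds
    echo "128 * 1000000 / 8570.70" | bc              # ~14934 us, in line with the reported
                                                     # 14869.47 us average over the 1.01 s run

The second bdevperf launch above (-t 15 -f, against the same generated JSON) is the failover half of the test: the very next step kills the target out from under it.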
00:33:44.730 08:20:35 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 2109915 00:33:44.730 08:20:35 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:33:44.730 [2024-07-13 08:20:35.787881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:47568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.730 [2024-07-13 08:20:35.787959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.730 [2024-07-13 08:20:35.787995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:47576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.730 [2024-07-13 08:20:35.788013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.730 [2024-07-13 08:20:35.788032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:47584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.730 [2024-07-13 08:20:35.788049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.730 [2024-07-13 08:20:35.788066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:47592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.730 [2024-07-13 08:20:35.788084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.730 [2024-07-13 08:20:35.788100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:47600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.730 [2024-07-13 08:20:35.788114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.730 [2024-07-13 08:20:35.788129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:47608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.730 [2024-07-13 08:20:35.788146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.730 [2024-07-13 08:20:35.788184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:47616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.730 [2024-07-13 08:20:35.788203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.730 [2024-07-13 08:20:35.788223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:47624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.730 [2024-07-13 08:20:35.788239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.730 [2024-07-13 08:20:35.788257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:47632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.730 [2024-07-13 08:20:35.788274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.730 [2024-07-13 08:20:35.788292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:47640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.730 [2024-07-13 
08:20:35.788311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.730 [2024-07-13 08:20:35.788332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:47648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.730 [2024-07-13 08:20:35.788361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.730 [2024-07-13 08:20:35.788381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:47656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.730 [2024-07-13 08:20:35.788399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.730 [2024-07-13 08:20:35.788419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:47664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.730 [2024-07-13 08:20:35.788436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.730 [2024-07-13 08:20:35.788456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:47672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.730 [2024-07-13 08:20:35.788473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.730 [2024-07-13 08:20:35.788490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:47680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.730 [2024-07-13 08:20:35.788504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.730 [2024-07-13 08:20:35.788521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:47688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.730 [2024-07-13 08:20:35.788535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.730 [2024-07-13 08:20:35.788552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:47696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.730 [2024-07-13 08:20:35.788566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.730 [2024-07-13 08:20:35.788583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:47704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.730 [2024-07-13 08:20:35.788599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.730 [2024-07-13 08:20:35.788615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:47712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.730 [2024-07-13 08:20:35.788630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.730 [2024-07-13 08:20:35.788647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:47720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.730 [2024-07-13 08:20:35.788662] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.730 [2024-07-13 08:20:35.788679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:47728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.730 [2024-07-13 08:20:35.788694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.730 [2024-07-13 08:20:35.788711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:47736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.730 [2024-07-13 08:20:35.788726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.730 [2024-07-13 08:20:35.788742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:47744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.730 [2024-07-13 08:20:35.788757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.730 [2024-07-13 08:20:35.788778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:47752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.730 [2024-07-13 08:20:35.788795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.730 [2024-07-13 08:20:35.788813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:47904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.730 [2024-07-13 08:20:35.788829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.730 [2024-07-13 08:20:35.788845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:47912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.730 [2024-07-13 08:20:35.788861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.730 [2024-07-13 08:20:35.788886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:47920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.730 [2024-07-13 08:20:35.788926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.730 [2024-07-13 08:20:35.788941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:47928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.730 [2024-07-13 08:20:35.788955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.730 [2024-07-13 08:20:35.788970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:47936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.731 [2024-07-13 08:20:35.788984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.731 [2024-07-13 08:20:35.788999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:47944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.731 [2024-07-13 08:20:35.789013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.731 [2024-07-13 08:20:35.789028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:47952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.731 [2024-07-13 08:20:35.789042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.731 [2024-07-13 08:20:35.789057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:47960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.731 [2024-07-13 08:20:35.789071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.731 [2024-07-13 08:20:35.789086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:47968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.731 [2024-07-13 08:20:35.789100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.731 [2024-07-13 08:20:35.789115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:47976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.731 [2024-07-13 08:20:35.789129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.731 [2024-07-13 08:20:35.789159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:47984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.731 [2024-07-13 08:20:35.789180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.731 [2024-07-13 08:20:35.789197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:47992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.731 [2024-07-13 08:20:35.789212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.731 [2024-07-13 08:20:35.789234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:48000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.731 [2024-07-13 08:20:35.789250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.731 [2024-07-13 08:20:35.789267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:48008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.731 [2024-07-13 08:20:35.789282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.731 [2024-07-13 08:20:35.789299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:48016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.731 [2024-07-13 08:20:35.789314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.731 [2024-07-13 08:20:35.789330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:48024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.731 [2024-07-13 08:20:35.789346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:33:44.731 [2024-07-13 08:20:35.789363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:48032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.731 [2024-07-13 08:20:35.789378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.731 [2024-07-13 08:20:35.789394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:48040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.731 [2024-07-13 08:20:35.789410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.731 [2024-07-13 08:20:35.789427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:48048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.731 [2024-07-13 08:20:35.789442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.731 [2024-07-13 08:20:35.789459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:48056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.731 [2024-07-13 08:20:35.789473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.731 [2024-07-13 08:20:35.789490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:48064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.731 [2024-07-13 08:20:35.789505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.731 [2024-07-13 08:20:35.789522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:48072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.731 [2024-07-13 08:20:35.789537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.731 [2024-07-13 08:20:35.789553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:48080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.731 [2024-07-13 08:20:35.789568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.731 [2024-07-13 08:20:35.789585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:48088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.731 [2024-07-13 08:20:35.789600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.731 [2024-07-13 08:20:35.789617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:48096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.731 [2024-07-13 08:20:35.789636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.731 [2024-07-13 08:20:35.789655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:48104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.731 [2024-07-13 08:20:35.789670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.731 [2024-07-13 
08:20:35.789687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:48112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.731 [2024-07-13 08:20:35.789702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.731 [2024-07-13 08:20:35.789718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:48120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.731 [2024-07-13 08:20:35.789733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.731 [2024-07-13 08:20:35.789750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:48128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.731 [2024-07-13 08:20:35.789766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.731 [2024-07-13 08:20:35.789783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:48136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.731 [2024-07-13 08:20:35.789798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.731 [2024-07-13 08:20:35.789815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:48144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.731 [2024-07-13 08:20:35.789829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.731 [2024-07-13 08:20:35.789847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:48152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.731 [2024-07-13 08:20:35.789861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.731 [2024-07-13 08:20:35.789888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:48160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.731 [2024-07-13 08:20:35.789918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.731 [2024-07-13 08:20:35.789933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:48168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.731 [2024-07-13 08:20:35.789947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.731 [2024-07-13 08:20:35.789962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:48176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.731 [2024-07-13 08:20:35.789976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.731 [2024-07-13 08:20:35.789991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:48184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.731 [2024-07-13 08:20:35.790004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.731 [2024-07-13 08:20:35.790019] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:48192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.731 [2024-07-13 08:20:35.790033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.731 [2024-07-13 08:20:35.790053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:48200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.731 [2024-07-13 08:20:35.790067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.731 [2024-07-13 08:20:35.790082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:48208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.731 [2024-07-13 08:20:35.790095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.731 [2024-07-13 08:20:35.790111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:48216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.731 [2024-07-13 08:20:35.790136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.731 [2024-07-13 08:20:35.790151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:48224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.731 [2024-07-13 08:20:35.790181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.731 [2024-07-13 08:20:35.790198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:48232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.731 [2024-07-13 08:20:35.790215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.731 [2024-07-13 08:20:35.790231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:48240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.731 [2024-07-13 08:20:35.790247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.731 [2024-07-13 08:20:35.790264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:48248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.731 [2024-07-13 08:20:35.790279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.731 [2024-07-13 08:20:35.790296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:48256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.731 [2024-07-13 08:20:35.790311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.731 [2024-07-13 08:20:35.790328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:48264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.731 [2024-07-13 08:20:35.790344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.731 [2024-07-13 08:20:35.790362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:16 nsid:1 lba:48272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.731 [2024-07-13 08:20:35.790377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.731 [2024-07-13 08:20:35.790394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:48280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.732 [2024-07-13 08:20:35.790408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.732 [2024-07-13 08:20:35.790425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:48288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.732 [2024-07-13 08:20:35.790440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.732 [2024-07-13 08:20:35.790457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:48296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.732 [2024-07-13 08:20:35.790476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.732 [2024-07-13 08:20:35.790494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:48304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.732 [2024-07-13 08:20:35.790509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.732 [2024-07-13 08:20:35.790526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:48312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.732 [2024-07-13 08:20:35.790541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.732 [2024-07-13 08:20:35.790559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:48320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.732 [2024-07-13 08:20:35.790574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.732 [2024-07-13 08:20:35.790591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:48328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.732 [2024-07-13 08:20:35.790606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.732 [2024-07-13 08:20:35.790622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:48336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.732 [2024-07-13 08:20:35.790638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.732 [2024-07-13 08:20:35.790655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:48344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.732 [2024-07-13 08:20:35.790670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.732 [2024-07-13 08:20:35.790686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:48352 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:33:44.732 [2024-07-13 08:20:35.790701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.732 [2024-07-13 08:20:35.790719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:48360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.732 [2024-07-13 08:20:35.790734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.732 [2024-07-13 08:20:35.790751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:48368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.732 [2024-07-13 08:20:35.790769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.732 [2024-07-13 08:20:35.790787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:48376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.732 [2024-07-13 08:20:35.790804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.732 [2024-07-13 08:20:35.790822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:48384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.732 [2024-07-13 08:20:35.790838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.732 [2024-07-13 08:20:35.790855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:48392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.732 [2024-07-13 08:20:35.790878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.732 [2024-07-13 08:20:35.790897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:48400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.732 [2024-07-13 08:20:35.790931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.732 [2024-07-13 08:20:35.790948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:47760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.732 [2024-07-13 08:20:35.790963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.732 [2024-07-13 08:20:35.790979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:47768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:44.732 [2024-07-13 08:20:35.790993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.732 [2024-07-13 08:20:35.791008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:48408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.732 [2024-07-13 08:20:35.791022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.732 [2024-07-13 08:20:35.791038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:48416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.732 [2024-07-13 
08:20:35.791052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.732 [2024-07-13 08:20:35.791067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:48424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.732 [2024-07-13 08:20:35.791082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.732 [2024-07-13 08:20:35.791098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:48432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.732 [2024-07-13 08:20:35.791112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.732 [2024-07-13 08:20:35.791135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:48440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.732 [2024-07-13 08:20:35.791165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.732 [2024-07-13 08:20:35.791183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:48448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.732 [2024-07-13 08:20:35.791199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.732 [2024-07-13 08:20:35.791216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:48456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.732 [2024-07-13 08:20:35.791232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.732 [2024-07-13 08:20:35.791249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:48464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.732 [2024-07-13 08:20:35.791265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.732 [2024-07-13 08:20:35.791282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:48472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.732 [2024-07-13 08:20:35.791298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.732 [2024-07-13 08:20:35.791316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:48480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.732 [2024-07-13 08:20:35.791332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.732 [2024-07-13 08:20:35.791353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:48488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.732 [2024-07-13 08:20:35.791370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:44.732 [2024-07-13 08:20:35.791388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:48496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:44.732 [2024-07-13 08:20:35.791404] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:44.732 [2024-07-13 08:20:35.791421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:48504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:44.732 [2024-07-13 08:20:35.791437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:44.732 [2024-07-13 08:20:35.791454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:48512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:44.732 [2024-07-13 08:20:35.791470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:44.732 [2024-07-13 08:20:35.791487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:48520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:44.732 [2024-07-13 08:20:35.791502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:44.732 [2024-07-13 08:20:35.791520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:48528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:44.732 [2024-07-13 08:20:35.791536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:44.732 [2024-07-13 08:20:35.791553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:48536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:44.732 [2024-07-13 08:20:35.791569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:44.732 [2024-07-13 08:20:35.791587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:48544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:44.732 [2024-07-13 08:20:35.791603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:44.732 [2024-07-13 08:20:35.791621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:48552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:44.732 [2024-07-13 08:20:35.791636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:44.732 [2024-07-13 08:20:35.791656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:48560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:44.732 [2024-07-13 08:20:35.791673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:44.732 [2024-07-13 08:20:35.791691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:48568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:44.732 [2024-07-13 08:20:35.791706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:44.732 [2024-07-13 08:20:35.791724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:48576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:44.732 [2024-07-13 08:20:35.791740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:44.732 [2024-07-13 08:20:35.791758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:48584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:44.732 [2024-07-13 08:20:35.791777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:44.732 [2024-07-13 08:20:35.791795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:47776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:44.732 [2024-07-13 08:20:35.791811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:44.732 [2024-07-13 08:20:35.791835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:47784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:44.732 [2024-07-13 08:20:35.791851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:44.733 [2024-07-13 08:20:35.791875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:47792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:44.733 [2024-07-13 08:20:35.791893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:44.733 [2024-07-13 08:20:35.791929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:47800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:44.733 [2024-07-13 08:20:35.791944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:44.733 [2024-07-13 08:20:35.791960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:47808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:44.733 [2024-07-13 08:20:35.791974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:44.733 [2024-07-13 08:20:35.791990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:47816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:44.733 [2024-07-13 08:20:35.792004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:44.733 [2024-07-13 08:20:35.792019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:47824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:44.733 [2024-07-13 08:20:35.792033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:44.733 [2024-07-13 08:20:35.792049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:47832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:44.733 [2024-07-13 08:20:35.792063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:44.733 [2024-07-13 08:20:35.792078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:47840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:44.733 [2024-07-13 08:20:35.792092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:44.733 [2024-07-13 08:20:35.792107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:47848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:44.733 [2024-07-13 08:20:35.792132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:44.733 [2024-07-13 08:20:35.792167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:47856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:44.733 [2024-07-13 08:20:35.792184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:44.733 [2024-07-13 08:20:35.792201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:47864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:44.733 [2024-07-13 08:20:35.792217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:44.733 [2024-07-13 08:20:35.792238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:47872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:44.733 [2024-07-13 08:20:35.792255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:44.733 [2024-07-13 08:20:35.792273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:47880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:44.733 [2024-07-13 08:20:35.792288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:44.733 [2024-07-13 08:20:35.792305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:47888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:44.733 [2024-07-13 08:20:35.792320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:44.733 [2024-07-13 08:20:35.792338] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ffd1f0 is same with the state(5) to be set
00:33:44.733 [2024-07-13 08:20:35.792359] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:33:44.733 [2024-07-13 08:20:35.792373] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:33:44.733 [2024-07-13 08:20:35.792387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:47896 len:8 PRP1 0x0 PRP2 0x0
00:33:44.733 [2024-07-13 08:20:35.792408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:44.733 [2024-07-13 08:20:35.792479] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1ffd1f0 was disconnected and freed. reset controller.
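The dump above is SPDK draining a disconnected I/O qpair: every queued WRITE/READ is completed manually with the status the log prints as "ABORTED - SQ DELETION (00/08)", i.e. status code type 0x0 (generic) and status code 0x08. A minimal sketch of decoding that status word per the NVMe base-spec bit layout follows; the constants and helper are illustrative, not SPDK's actual API:

```c
/* Sketch only, not SPDK code: decode the 16-bit completion status word
 * (bits 31:16 of completion dword 3).  P is bit 0, SC bits 8:1, SCT
 * bits 11:9, M bit 14, DNR bit 15.  SCT 0x0 / SC 0x08 is "Command
 * Aborted due to SQ Deletion" -- the "(00/08)" in the log. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NVME_SCT_GENERIC            0x0
#define NVME_SC_ABORTED_SQ_DELETION 0x08

static bool status_is_sq_deletion_abort(uint16_t status)
{
    uint8_t sc  = (status >> 1) & 0xff;  /* Status Code */
    uint8_t sct = (status >> 9) & 0x7;   /* Status Code Type */
    return sct == NVME_SCT_GENERIC && sc == NVME_SC_ABORTED_SQ_DELETION;
}

int main(void)
{
    /* Build the status word the way it appears in the dump above. */
    uint16_t status = (NVME_SCT_GENERIC << 9) | (NVME_SC_ABORTED_SQ_DELETION << 1);
    printf("aborted by SQ deletion: %s\n",
           status_is_sq_deletion_abort(status) ? "yes" : "no");
    return 0;
}
```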
00:33:44.733 [2024-07-13 08:20:35.792564] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:33:44.733 [2024-07-13 08:20:35.792589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:44.733 [2024-07-13 08:20:35.792607] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:33:44.733 [2024-07-13 08:20:35.792623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:44.733 [2024-07-13 08:20:35.792638] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:33:44.733 [2024-07-13 08:20:35.792654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:44.733 [2024-07-13 08:20:35.792669] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:33:44.733 [2024-07-13 08:20:35.792685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:44.733 [2024-07-13 08:20:35.792699] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2002f70 is same with the state(5) to be set
00:33:44.733 [2024-07-13 08:20:35.796516] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:44.733 [2024-07-13 08:20:35.796559] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2002f70 (9): Bad file descriptor
00:33:44.733 [2024-07-13 08:20:35.797280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:44.733 [2024-07-13 08:20:35.797313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2002f70 with addr=10.0.0.2, port=4420
00:33:44.733 [2024-07-13 08:20:35.797333] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2002f70 is same with the state(5) to be set
00:33:44.733 [2024-07-13 08:20:35.797572] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2002f70 (9): Bad file descriptor
00:33:44.733 [2024-07-13 08:20:35.797821] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:44.733 [2024-07-13 08:20:35.797846] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:44.733 [2024-07-13 08:20:35.797875] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:44.733 [2024-07-13 08:20:35.801496] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:44.733 [2024-07-13 08:20:35.810763] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:44.733 [2024-07-13 08:20:35.811185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:44.733 [2024-07-13 08:20:35.811218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2002f70 with addr=10.0.0.2, port=4420
00:33:44.733 [2024-07-13 08:20:35.811237] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2002f70 is same with the state(5) to be set
00:33:44.733 [2024-07-13 08:20:35.811474] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2002f70 (9): Bad file descriptor
00:33:44.733 [2024-07-13 08:20:35.811714] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:44.733 [2024-07-13 08:20:35.811739] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:44.733 [2024-07-13 08:20:35.811754] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:44.733 [2024-07-13 08:20:35.815320] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:44.733 [2024-07-13 08:20:35.824788] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:44.733 [2024-07-13 08:20:35.825251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:44.733 [2024-07-13 08:20:35.825285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2002f70 with addr=10.0.0.2, port=4420
00:33:44.733 [2024-07-13 08:20:35.825304] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2002f70 is same with the state(5) to be set
00:33:44.733 [2024-07-13 08:20:35.825542] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2002f70 (9): Bad file descriptor
00:33:44.733 [2024-07-13 08:20:35.825785] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:44.733 [2024-07-13 08:20:35.825809] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:44.733 [2024-07-13 08:20:35.825825] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:44.733 [2024-07-13 08:20:35.829391] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:44.733 [2024-07-13 08:20:35.838627] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:44.733 [2024-07-13 08:20:35.839082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:44.733 [2024-07-13 08:20:35.839114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2002f70 with addr=10.0.0.2, port=4420
00:33:44.733 [2024-07-13 08:20:35.839133] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2002f70 is same with the state(5) to be set
00:33:44.733 [2024-07-13 08:20:35.839371] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2002f70 (9): Bad file descriptor
00:33:44.733 [2024-07-13 08:20:35.839614] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:44.733 [2024-07-13 08:20:35.839639] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:44.733 [2024-07-13 08:20:35.839655] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:44.733 [2024-07-13 08:20:35.843220] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:44.733 [2024-07-13 08:20:35.852462] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:44.733 [2024-07-13 08:20:35.852894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:44.733 [2024-07-13 08:20:35.852934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2002f70 with addr=10.0.0.2, port=4420
00:33:44.733 [2024-07-13 08:20:35.852953] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2002f70 is same with the state(5) to be set
00:33:44.733 [2024-07-13 08:20:35.853190] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2002f70 (9): Bad file descriptor
00:33:44.733 [2024-07-13 08:20:35.853433] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:44.733 [2024-07-13 08:20:35.853459] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:44.733 [2024-07-13 08:20:35.853475] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:44.733 [2024-07-13 08:20:35.857038] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:44.733 [2024-07-13 08:20:35.866477] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:44.733 [2024-07-13 08:20:35.866986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:44.733 [2024-07-13 08:20:35.867019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2002f70 with addr=10.0.0.2, port=4420
00:33:44.733 [2024-07-13 08:20:35.867038] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2002f70 is same with the state(5) to be set
00:33:44.733 [2024-07-13 08:20:35.867275] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2002f70 (9): Bad file descriptor
00:33:44.733 [2024-07-13 08:20:35.867518] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:44.734 [2024-07-13 08:20:35.867543] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:44.734 [2024-07-13 08:20:35.867559] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:44.734 [2024-07-13 08:20:35.871121] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:44.734 [2024-07-13 08:20:35.880371] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:44.734 [2024-07-13 08:20:35.880834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:44.734 [2024-07-13 08:20:35.880863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2002f70 with addr=10.0.0.2, port=4420
00:33:44.734 [2024-07-13 08:20:35.880905] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2002f70 is same with the state(5) to be set
00:33:44.734 [2024-07-13 08:20:35.881162] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2002f70 (9): Bad file descriptor
00:33:44.734 [2024-07-13 08:20:35.881404] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:44.734 [2024-07-13 08:20:35.881429] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:44.734 [2024-07-13 08:20:35.881445] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:44.734 [2024-07-13 08:20:35.885005] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:44.734 [2024-07-13 08:20:35.894236] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:44.734 [2024-07-13 08:20:35.894656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:44.734 [2024-07-13 08:20:35.894689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2002f70 with addr=10.0.0.2, port=4420
00:33:44.734 [2024-07-13 08:20:35.894712] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2002f70 is same with the state(5) to be set
00:33:44.734 [2024-07-13 08:20:35.894964] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2002f70 (9): Bad file descriptor
00:33:44.734 [2024-07-13 08:20:35.895207] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:44.734 [2024-07-13 08:20:35.895233] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:44.734 [2024-07-13 08:20:35.895250] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:44.734 [2024-07-13 08:20:35.898804] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:44.734 [2024-07-13 08:20:35.908256] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:44.734 [2024-07-13 08:20:35.908689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:44.734 [2024-07-13 08:20:35.908721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2002f70 with addr=10.0.0.2, port=4420
00:33:44.734 [2024-07-13 08:20:35.908739] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2002f70 is same with the state(5) to be set
00:33:44.734 [2024-07-13 08:20:35.908990] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2002f70 (9): Bad file descriptor
00:33:44.734 [2024-07-13 08:20:35.909232] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:44.734 [2024-07-13 08:20:35.909257] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:44.734 [2024-07-13 08:20:35.909273] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:44.734 [2024-07-13 08:20:35.912828] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:44.734 [2024-07-13 08:20:35.922282] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:44.734 [2024-07-13 08:20:35.922722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:44.734 [2024-07-13 08:20:35.922754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2002f70 with addr=10.0.0.2, port=4420
00:33:44.734 [2024-07-13 08:20:35.922773] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2002f70 is same with the state(5) to be set
00:33:44.734 [2024-07-13 08:20:35.923021] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2002f70 (9): Bad file descriptor
00:33:44.734 [2024-07-13 08:20:35.923263] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:44.734 [2024-07-13 08:20:35.923288] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:44.734 [2024-07-13 08:20:35.923304] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:44.734 [2024-07-13 08:20:35.926860] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:44.734 [2024-07-13 08:20:35.936106] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:44.734 [2024-07-13 08:20:35.936560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:44.734 [2024-07-13 08:20:35.936592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2002f70 with addr=10.0.0.2, port=4420
00:33:44.734 [2024-07-13 08:20:35.936610] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2002f70 is same with the state(5) to be set
00:33:44.734 [2024-07-13 08:20:35.936847] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2002f70 (9): Bad file descriptor
00:33:44.734 [2024-07-13 08:20:35.937102] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:44.734 [2024-07-13 08:20:35.937133] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:44.734 [2024-07-13 08:20:35.937150] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:44.734 [2024-07-13 08:20:35.940706] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:44.734 [2024-07-13 08:20:35.949948] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:44.734 [2024-07-13 08:20:35.950371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:44.734 [2024-07-13 08:20:35.950403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2002f70 with addr=10.0.0.2, port=4420
00:33:44.734 [2024-07-13 08:20:35.950422] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2002f70 is same with the state(5) to be set
00:33:44.734 [2024-07-13 08:20:35.950661] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2002f70 (9): Bad file descriptor
00:33:44.734 [2024-07-13 08:20:35.950917] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:44.734 [2024-07-13 08:20:35.950944] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:44.734 [2024-07-13 08:20:35.950961] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:44.734 [2024-07-13 08:20:35.954516] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:44.734 [2024-07-13 08:20:35.963961] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:44.734 [2024-07-13 08:20:35.964396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:44.734 [2024-07-13 08:20:35.964428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2002f70 with addr=10.0.0.2, port=4420
00:33:44.734 [2024-07-13 08:20:35.964447] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2002f70 is same with the state(5) to be set
00:33:44.734 [2024-07-13 08:20:35.964685] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2002f70 (9): Bad file descriptor
00:33:44.734 [2024-07-13 08:20:35.964939] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:44.734 [2024-07-13 08:20:35.964965] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:44.734 [2024-07-13 08:20:35.964983] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:44.734 [2024-07-13 08:20:35.968532] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:44.734 [2024-07-13 08:20:35.977990] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:44.734 [2024-07-13 08:20:35.978423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:44.734 [2024-07-13 08:20:35.978455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2002f70 with addr=10.0.0.2, port=4420
00:33:44.734 [2024-07-13 08:20:35.978474] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2002f70 is same with the state(5) to be set
00:33:44.734 [2024-07-13 08:20:35.978711] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2002f70 (9): Bad file descriptor
00:33:44.734 [2024-07-13 08:20:35.978965] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:44.734 [2024-07-13 08:20:35.978990] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:44.734 [2024-07-13 08:20:35.979005] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:44.734 [2024-07-13 08:20:35.982561] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:44.734 [2024-07-13 08:20:35.992017] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:44.734 [2024-07-13 08:20:35.992429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:44.734 [2024-07-13 08:20:35.992461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2002f70 with addr=10.0.0.2, port=4420
00:33:44.734 [2024-07-13 08:20:35.992479] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2002f70 is same with the state(5) to be set
00:33:44.734 [2024-07-13 08:20:35.992718] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2002f70 (9): Bad file descriptor
00:33:44.734 [2024-07-13 08:20:35.992976] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:44.734 [2024-07-13 08:20:35.993001] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:44.734 [2024-07-13 08:20:35.993016] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:44.735 [2024-07-13 08:20:35.996572] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:44.735 [2024-07-13 08:20:36.006032] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:44.735 [2024-07-13 08:20:36.006487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:44.735 [2024-07-13 08:20:36.006519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2002f70 with addr=10.0.0.2, port=4420
00:33:44.735 [2024-07-13 08:20:36.006537] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2002f70 is same with the state(5) to be set
00:33:44.735 [2024-07-13 08:20:36.006774] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2002f70 (9): Bad file descriptor
00:33:44.735 [2024-07-13 08:20:36.007027] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:44.735 [2024-07-13 08:20:36.007052] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:44.735 [2024-07-13 08:20:36.007068] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:44.735 [2024-07-13 08:20:36.010622] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:44.735 [2024-07-13 08:20:36.019885] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:44.735 [2024-07-13 08:20:36.020324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:44.735 [2024-07-13 08:20:36.020356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2002f70 with addr=10.0.0.2, port=4420
00:33:44.735 [2024-07-13 08:20:36.020374] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2002f70 is same with the state(5) to be set
00:33:44.735 [2024-07-13 08:20:36.020611] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2002f70 (9): Bad file descriptor
00:33:44.735 [2024-07-13 08:20:36.020853] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:44.735 [2024-07-13 08:20:36.020887] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:44.735 [2024-07-13 08:20:36.020905] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:44.735 [2024-07-13 08:20:36.024460] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:44.735 [2024-07-13 08:20:36.033913] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:44.735 [2024-07-13 08:20:36.034333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:44.735 [2024-07-13 08:20:36.034365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2002f70 with addr=10.0.0.2, port=4420
00:33:44.735 [2024-07-13 08:20:36.034383] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2002f70 is same with the state(5) to be set
00:33:44.735 [2024-07-13 08:20:36.034626] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2002f70 (9): Bad file descriptor
00:33:44.735 [2024-07-13 08:20:36.034877] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:44.735 [2024-07-13 08:20:36.034902] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:44.735 [2024-07-13 08:20:36.034918] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:44.735 [2024-07-13 08:20:36.038470] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:44.735 [2024-07-13 08:20:36.047913] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:44.735 [2024-07-13 08:20:36.048317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:44.735 [2024-07-13 08:20:36.048349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2002f70 with addr=10.0.0.2, port=4420
00:33:44.735 [2024-07-13 08:20:36.048367] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2002f70 is same with the state(5) to be set
00:33:44.735 [2024-07-13 08:20:36.048604] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2002f70 (9): Bad file descriptor
00:33:44.735 [2024-07-13 08:20:36.048846] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:44.735 [2024-07-13 08:20:36.048879] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:44.735 [2024-07-13 08:20:36.048897] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:44.735 [2024-07-13 08:20:36.052448] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:44.735 [2024-07-13 08:20:36.061899] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:44.735 [2024-07-13 08:20:36.062326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:44.735 [2024-07-13 08:20:36.062358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2002f70 with addr=10.0.0.2, port=4420
00:33:44.735 [2024-07-13 08:20:36.062376] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2002f70 is same with the state(5) to be set
00:33:44.735 [2024-07-13 08:20:36.062613] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2002f70 (9): Bad file descriptor
00:33:44.735 [2024-07-13 08:20:36.062854] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:44.735 [2024-07-13 08:20:36.062887] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:44.735 [2024-07-13 08:20:36.062906] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:44.735 [2024-07-13 08:20:36.066456] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:44.735 [2024-07-13 08:20:36.075897] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:44.735 [2024-07-13 08:20:36.076323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:44.735 [2024-07-13 08:20:36.076374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2002f70 with addr=10.0.0.2, port=4420
00:33:44.735 [2024-07-13 08:20:36.076393] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2002f70 is same with the state(5) to be set
00:33:44.735 [2024-07-13 08:20:36.076630] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2002f70 (9): Bad file descriptor
00:33:44.735 [2024-07-13 08:20:36.076883] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:44.735 [2024-07-13 08:20:36.076908] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:44.735 [2024-07-13 08:20:36.076932] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:44.735 [2024-07-13 08:20:36.080487] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:44.735 [2024-07-13 08:20:36.089724] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:44.735 [2024-07-13 08:20:36.090118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:44.735 [2024-07-13 08:20:36.090150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2002f70 with addr=10.0.0.2, port=4420
00:33:44.735 [2024-07-13 08:20:36.090168] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2002f70 is same with the state(5) to be set
00:33:44.735 [2024-07-13 08:20:36.090405] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2002f70 (9): Bad file descriptor
00:33:44.735 [2024-07-13 08:20:36.090647] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:44.735 [2024-07-13 08:20:36.090673] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:44.735 [2024-07-13 08:20:36.090689] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:44.735 [2024-07-13 08:20:36.094249] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:44.735 [2024-07-13 08:20:36.103687] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:44.735 [2024-07-13 08:20:36.104079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:44.735 [2024-07-13 08:20:36.104111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2002f70 with addr=10.0.0.2, port=4420
00:33:44.735 [2024-07-13 08:20:36.104130] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2002f70 is same with the state(5) to be set
00:33:44.735 [2024-07-13 08:20:36.104367] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2002f70 (9): Bad file descriptor
00:33:44.735 [2024-07-13 08:20:36.104608] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:44.735 [2024-07-13 08:20:36.104633] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:44.735 [2024-07-13 08:20:36.104649] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:44.735 [2024-07-13 08:20:36.108209] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:44.735 [2024-07-13 08:20:36.117638] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:44.735 [2024-07-13 08:20:36.118030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:44.735 [2024-07-13 08:20:36.118063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2002f70 with addr=10.0.0.2, port=4420
00:33:44.735 [2024-07-13 08:20:36.118081] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2002f70 is same with the state(5) to be set
00:33:44.735 [2024-07-13 08:20:36.118319] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2002f70 (9): Bad file descriptor
00:33:44.735 [2024-07-13 08:20:36.118560] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:44.735 [2024-07-13 08:20:36.118585] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:44.735 [2024-07-13 08:20:36.118602] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:44.735 [2024-07-13 08:20:36.122170] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:44.735 [2024-07-13 08:20:36.131605] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:44.735 [2024-07-13 08:20:36.132058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:44.735 [2024-07-13 08:20:36.132096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2002f70 with addr=10.0.0.2, port=4420
00:33:44.735 [2024-07-13 08:20:36.132115] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2002f70 is same with the state(5) to be set
00:33:44.735 [2024-07-13 08:20:36.132353] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2002f70 (9): Bad file descriptor
00:33:44.735 [2024-07-13 08:20:36.132595] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:44.735 [2024-07-13 08:20:36.132619] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:44.735 [2024-07-13 08:20:36.132635] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:44.735 [2024-07-13 08:20:36.136197] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:44.735 [2024-07-13 08:20:36.145432] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:44.735 [2024-07-13 08:20:36.145864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:44.735 [2024-07-13 08:20:36.145905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2002f70 with addr=10.0.0.2, port=4420
00:33:44.735 [2024-07-13 08:20:36.145924] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2002f70 is same with the state(5) to be set
00:33:44.735 [2024-07-13 08:20:36.146162] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2002f70 (9): Bad file descriptor
00:33:44.735 [2024-07-13 08:20:36.146405] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:44.736 [2024-07-13 08:20:36.146430] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:44.736 [2024-07-13 08:20:36.146446] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:44.736 [2024-07-13 08:20:36.150007] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:44.736 [2024-07-13 08:20:36.159450] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:44.736 [2024-07-13 08:20:36.159864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:44.736 [2024-07-13 08:20:36.159905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2002f70 with addr=10.0.0.2, port=4420
00:33:44.736 [2024-07-13 08:20:36.159924] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2002f70 is same with the state(5) to be set
00:33:44.736 [2024-07-13 08:20:36.160163] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2002f70 (9): Bad file descriptor
00:33:44.736 [2024-07-13 08:20:36.160406] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:44.736 [2024-07-13 08:20:36.160431] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:44.736 [2024-07-13 08:20:36.160447] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:44.736 [2024-07-13 08:20:36.164010] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:44.736 [2024-07-13 08:20:36.173450] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:44.736 [2024-07-13 08:20:36.173882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:44.736 [2024-07-13 08:20:36.173914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2002f70 with addr=10.0.0.2, port=4420
00:33:44.736 [2024-07-13 08:20:36.173933] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2002f70 is same with the state(5) to be set
00:33:44.736 [2024-07-13 08:20:36.174172] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2002f70 (9): Bad file descriptor
00:33:44.736 [2024-07-13 08:20:36.174420] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:44.736 [2024-07-13 08:20:36.174445] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:44.736 [2024-07-13 08:20:36.174460] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:44.736 [2024-07-13 08:20:36.178026] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:44.736 [2024-07-13 08:20:36.187467] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:44.736 [2024-07-13 08:20:36.187900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:44.736 [2024-07-13 08:20:36.187933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2002f70 with addr=10.0.0.2, port=4420
00:33:44.736 [2024-07-13 08:20:36.187951] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2002f70 is same with the state(5) to be set
00:33:44.736 [2024-07-13 08:20:36.188189] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2002f70 (9): Bad file descriptor
00:33:44.736 [2024-07-13 08:20:36.188432] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:44.736 [2024-07-13 08:20:36.188457] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:44.736 [2024-07-13 08:20:36.188473] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:44.736 [2024-07-13 08:20:36.192037] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:44.736 [2024-07-13 08:20:36.201480] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:44.736 [2024-07-13 08:20:36.201883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:44.736 [2024-07-13 08:20:36.201915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2002f70 with addr=10.0.0.2, port=4420
00:33:44.736 [2024-07-13 08:20:36.201933] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2002f70 is same with the state(5) to be set
00:33:44.736 [2024-07-13 08:20:36.202171] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2002f70 (9): Bad file descriptor
00:33:44.736 [2024-07-13 08:20:36.202413] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:44.736 [2024-07-13 08:20:36.202438] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:44.736 [2024-07-13 08:20:36.202454] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:44.736 [2024-07-13 08:20:36.206011] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:44.736 [2024-07-13 08:20:36.215436] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:44.736 [2024-07-13 08:20:36.215879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:44.736 [2024-07-13 08:20:36.215911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2002f70 with addr=10.0.0.2, port=4420
00:33:44.736 [2024-07-13 08:20:36.215929] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2002f70 is same with the state(5) to be set
00:33:44.736 [2024-07-13 08:20:36.216166] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2002f70 (9): Bad file descriptor
00:33:44.736 [2024-07-13 08:20:36.216408] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:44.736 [2024-07-13 08:20:36.216433] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:44.736 [2024-07-13 08:20:36.216450] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:44.736 [2024-07-13 08:20:36.220016] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:44.736 [2024-07-13 08:20:36.229450] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:44.736 [2024-07-13 08:20:36.229879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:44.736 [2024-07-13 08:20:36.229911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2002f70 with addr=10.0.0.2, port=4420
00:33:44.736 [2024-07-13 08:20:36.229928] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2002f70 is same with the state(5) to be set
00:33:44.736 [2024-07-13 08:20:36.230166] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2002f70 (9): Bad file descriptor
00:33:44.736 [2024-07-13 08:20:36.230408] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:44.736 [2024-07-13 08:20:36.230431] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:44.736 [2024-07-13 08:20:36.230448] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:44.736 [2024-07-13 08:20:36.234006] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:44.736 [2024-07-13 08:20:36.243436] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:44.736 [2024-07-13 08:20:36.243863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:44.736 [2024-07-13 08:20:36.243903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2002f70 with addr=10.0.0.2, port=4420
00:33:44.736 [2024-07-13 08:20:36.243922] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2002f70 is same with the state(5) to be set
00:33:44.736 [2024-07-13 08:20:36.244160] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2002f70 (9): Bad file descriptor
00:33:44.736 [2024-07-13 08:20:36.244403] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:44.736 [2024-07-13 08:20:36.244427] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:44.736 [2024-07-13 08:20:36.244443] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:44.736 [2024-07-13 08:20:36.248006] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:44.736 [2024-07-13 08:20:36.257468] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:44.736 [2024-07-13 08:20:36.257942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:44.736 [2024-07-13 08:20:36.257974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2002f70 with addr=10.0.0.2, port=4420
00:33:44.736 [2024-07-13 08:20:36.257992] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2002f70 is same with the state(5) to be set
00:33:44.736 [2024-07-13 08:20:36.258230] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2002f70 (9): Bad file descriptor
00:33:44.736 [2024-07-13 08:20:36.258473] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:44.736 [2024-07-13 08:20:36.258498] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:44.736 [2024-07-13 08:20:36.258515] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:44.736 [2024-07-13 08:20:36.262084] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:44.736 [2024-07-13 08:20:36.271315] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:44.736 [2024-07-13 08:20:36.271718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:44.736 [2024-07-13 08:20:36.271749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2002f70 with addr=10.0.0.2, port=4420
00:33:44.736 [2024-07-13 08:20:36.271773] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2002f70 is same with the state(5) to be set
00:33:44.736 [2024-07-13 08:20:36.272021] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2002f70 (9): Bad file descriptor
00:33:44.736 [2024-07-13 08:20:36.272264] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:44.736 [2024-07-13 08:20:36.272288] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:44.736 [2024-07-13 08:20:36.272305] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:44.736 [2024-07-13 08:20:36.275852] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:44.736 [2024-07-13 08:20:36.285291] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:44.736 [2024-07-13 08:20:36.285788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:44.736 [2024-07-13 08:20:36.285837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2002f70 with addr=10.0.0.2, port=4420
00:33:44.736 [2024-07-13 08:20:36.285856] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2002f70 is same with the state(5) to be set
00:33:44.736 [2024-07-13 08:20:36.286103] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2002f70 (9): Bad file descriptor
00:33:44.736 [2024-07-13 08:20:36.286346] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:44.736 [2024-07-13 08:20:36.286369] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:44.736 [2024-07-13 08:20:36.286385] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:44.736 [2024-07-13 08:20:36.289943] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:44.736 [2024-07-13 08:20:36.299166] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:33:44.736 [2024-07-13 08:20:36.299642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:33:44.736 [2024-07-13 08:20:36.299673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2002f70 with addr=10.0.0.2, port=4420
00:33:44.736 [2024-07-13 08:20:36.299691] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2002f70 is same with the state(5) to be set
00:33:44.736 [2024-07-13 08:20:36.299937] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2002f70 (9): Bad file descriptor
00:33:44.736 [2024-07-13 08:20:36.300181] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:33:44.736 [2024-07-13 08:20:36.300205] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:33:44.736 [2024-07-13 08:20:36.300221] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:33:44.737 [2024-07-13 08:20:36.303770] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:33:44.737 [2024-07-13 08:20:36.313000] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:44.737 [2024-07-13 08:20:36.313490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:44.737 [2024-07-13 08:20:36.313541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2002f70 with addr=10.0.0.2, port=4420 00:33:44.737 [2024-07-13 08:20:36.313559] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2002f70 is same with the state(5) to be set 00:33:44.737 [2024-07-13 08:20:36.313797] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2002f70 (9): Bad file descriptor 00:33:44.737 [2024-07-13 08:20:36.314048] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:44.737 [2024-07-13 08:20:36.314078] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:44.737 [2024-07-13 08:20:36.314095] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:44.737 [2024-07-13 08:20:36.317654] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:44.737 [2024-07-13 08:20:36.326908] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:44.737 [2024-07-13 08:20:36.327352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:44.737 [2024-07-13 08:20:36.327383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2002f70 with addr=10.0.0.2, port=4420 00:33:44.737 [2024-07-13 08:20:36.327402] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2002f70 is same with the state(5) to be set 00:33:44.737 [2024-07-13 08:20:36.327638] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2002f70 (9): Bad file descriptor 00:33:44.737 [2024-07-13 08:20:36.327889] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:44.737 [2024-07-13 08:20:36.327921] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:44.737 [2024-07-13 08:20:36.327936] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:44.737 [2024-07-13 08:20:36.331512] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:44.737 [2024-07-13 08:20:36.340749] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:44.737 [2024-07-13 08:20:36.341142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:44.737 [2024-07-13 08:20:36.341174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2002f70 with addr=10.0.0.2, port=4420 00:33:44.737 [2024-07-13 08:20:36.341193] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2002f70 is same with the state(5) to be set 00:33:44.737 [2024-07-13 08:20:36.341432] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2002f70 (9): Bad file descriptor 00:33:44.737 [2024-07-13 08:20:36.341674] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:44.737 [2024-07-13 08:20:36.341700] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:44.737 [2024-07-13 08:20:36.341715] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:44.737 [2024-07-13 08:20:36.345289] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:44.737 [2024-07-13 08:20:36.354737] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:44.737 [2024-07-13 08:20:36.355184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:44.737 [2024-07-13 08:20:36.355216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2002f70 with addr=10.0.0.2, port=4420 00:33:44.737 [2024-07-13 08:20:36.355234] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2002f70 is same with the state(5) to be set 00:33:44.737 [2024-07-13 08:20:36.355472] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2002f70 (9): Bad file descriptor 00:33:44.737 [2024-07-13 08:20:36.355713] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:44.737 [2024-07-13 08:20:36.355737] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:44.737 [2024-07-13 08:20:36.355753] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:44.737 [2024-07-13 08:20:36.359320] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:44.737 [2024-07-13 08:20:36.368562] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:44.737 [2024-07-13 08:20:36.368993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:44.737 [2024-07-13 08:20:36.369026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2002f70 with addr=10.0.0.2, port=4420 00:33:44.737 [2024-07-13 08:20:36.369045] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2002f70 is same with the state(5) to be set 00:33:44.737 [2024-07-13 08:20:36.369282] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2002f70 (9): Bad file descriptor 00:33:44.737 [2024-07-13 08:20:36.369524] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:44.737 [2024-07-13 08:20:36.369549] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:44.737 [2024-07-13 08:20:36.369565] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:44.737 [2024-07-13 08:20:36.373125] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:44.737 [2024-07-13 08:20:36.382563] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:44.737 [2024-07-13 08:20:36.382993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:44.737 [2024-07-13 08:20:36.383026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2002f70 with addr=10.0.0.2, port=4420 00:33:44.737 [2024-07-13 08:20:36.383044] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2002f70 is same with the state(5) to be set 00:33:44.737 [2024-07-13 08:20:36.383282] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2002f70 (9): Bad file descriptor 00:33:44.737 [2024-07-13 08:20:36.383523] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:44.737 [2024-07-13 08:20:36.383548] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:44.737 [2024-07-13 08:20:36.383564] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:44.737 [2024-07-13 08:20:36.387133] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:44.737 [2024-07-13 08:20:36.396569] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:44.737 [2024-07-13 08:20:36.396999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:44.737 [2024-07-13 08:20:36.397030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2002f70 with addr=10.0.0.2, port=4420 00:33:44.737 [2024-07-13 08:20:36.397048] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2002f70 is same with the state(5) to be set 00:33:44.737 [2024-07-13 08:20:36.397286] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2002f70 (9): Bad file descriptor 00:33:44.737 [2024-07-13 08:20:36.397526] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:44.737 [2024-07-13 08:20:36.397551] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:44.737 [2024-07-13 08:20:36.397567] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:44.737 [2024-07-13 08:20:36.401128] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:44.737 [2024-07-13 08:20:36.410566] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:44.737 [2024-07-13 08:20:36.410968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:44.737 [2024-07-13 08:20:36.411002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2002f70 with addr=10.0.0.2, port=4420 00:33:44.737 [2024-07-13 08:20:36.411026] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2002f70 is same with the state(5) to be set 00:33:44.737 [2024-07-13 08:20:36.411264] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2002f70 (9): Bad file descriptor 00:33:44.737 [2024-07-13 08:20:36.411505] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:44.737 [2024-07-13 08:20:36.411530] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:44.737 [2024-07-13 08:20:36.411546] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:44.737 [2024-07-13 08:20:36.415111] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:44.737 [2024-07-13 08:20:36.424569] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:44.737 [2024-07-13 08:20:36.424976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:44.737 [2024-07-13 08:20:36.425009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2002f70 with addr=10.0.0.2, port=4420 00:33:44.737 [2024-07-13 08:20:36.425027] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2002f70 is same with the state(5) to be set 00:33:44.737 [2024-07-13 08:20:36.425266] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2002f70 (9): Bad file descriptor 00:33:44.737 [2024-07-13 08:20:36.425507] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:44.737 [2024-07-13 08:20:36.425532] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:44.737 [2024-07-13 08:20:36.425548] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:44.737 [2024-07-13 08:20:36.429112] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:44.737 [2024-07-13 08:20:36.438556] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:44.737 [2024-07-13 08:20:36.438959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:44.737 [2024-07-13 08:20:36.438991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2002f70 with addr=10.0.0.2, port=4420 00:33:44.737 [2024-07-13 08:20:36.439010] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2002f70 is same with the state(5) to be set 00:33:44.737 [2024-07-13 08:20:36.439247] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2002f70 (9): Bad file descriptor 00:33:44.737 [2024-07-13 08:20:36.439488] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:44.737 [2024-07-13 08:20:36.439514] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:44.737 [2024-07-13 08:20:36.439530] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:44.737 [2024-07-13 08:20:36.443096] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:44.737 [2024-07-13 08:20:36.452536] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:44.737 [2024-07-13 08:20:36.452963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:44.737 [2024-07-13 08:20:36.452995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2002f70 with addr=10.0.0.2, port=4420 00:33:44.737 [2024-07-13 08:20:36.453013] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2002f70 is same with the state(5) to be set 00:33:44.737 [2024-07-13 08:20:36.453250] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2002f70 (9): Bad file descriptor 00:33:44.737 [2024-07-13 08:20:36.453492] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:44.737 [2024-07-13 08:20:36.453517] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:44.737 [2024-07-13 08:20:36.453540] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:44.738 [2024-07-13 08:20:36.457193] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:44.995 [2024-07-13 08:20:36.466391] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:44.995 [2024-07-13 08:20:36.466944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:44.995 [2024-07-13 08:20:36.466978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2002f70 with addr=10.0.0.2, port=4420 00:33:44.995 [2024-07-13 08:20:36.466998] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2002f70 is same with the state(5) to be set 00:33:44.995 [2024-07-13 08:20:36.467236] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2002f70 (9): Bad file descriptor 00:33:44.995 [2024-07-13 08:20:36.467478] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:44.995 [2024-07-13 08:20:36.467503] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:44.995 [2024-07-13 08:20:36.467518] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:44.995 [2024-07-13 08:20:36.471085] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:44.996 [2024-07-13 08:20:36.480410] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:44.996 [2024-07-13 08:20:36.480845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:44.996 [2024-07-13 08:20:36.480887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2002f70 with addr=10.0.0.2, port=4420 00:33:44.996 [2024-07-13 08:20:36.480908] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2002f70 is same with the state(5) to be set 00:33:44.996 [2024-07-13 08:20:36.481146] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2002f70 (9): Bad file descriptor 00:33:44.996 [2024-07-13 08:20:36.481388] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:44.996 [2024-07-13 08:20:36.481412] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:44.996 [2024-07-13 08:20:36.481428] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:44.996 [2024-07-13 08:20:36.484992] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:44.996 [2024-07-13 08:20:36.494429] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:44.996 [2024-07-13 08:20:36.494843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:44.996 [2024-07-13 08:20:36.494884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2002f70 with addr=10.0.0.2, port=4420 00:33:44.996 [2024-07-13 08:20:36.494905] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2002f70 is same with the state(5) to be set 00:33:44.996 [2024-07-13 08:20:36.495143] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2002f70 (9): Bad file descriptor 00:33:44.996 [2024-07-13 08:20:36.495385] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:44.996 [2024-07-13 08:20:36.495409] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:44.996 [2024-07-13 08:20:36.495424] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:44.996 [2024-07-13 08:20:36.498987] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:44.996 [2024-07-13 08:20:36.508426] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:44.996 [2024-07-13 08:20:36.508879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:44.996 [2024-07-13 08:20:36.508912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2002f70 with addr=10.0.0.2, port=4420 00:33:44.996 [2024-07-13 08:20:36.508930] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2002f70 is same with the state(5) to be set 00:33:44.996 [2024-07-13 08:20:36.509168] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2002f70 (9): Bad file descriptor 00:33:44.996 [2024-07-13 08:20:36.509409] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:44.996 [2024-07-13 08:20:36.509433] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:44.996 [2024-07-13 08:20:36.509448] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:44.996 [2024-07-13 08:20:36.513015] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:44.996 [2024-07-13 08:20:36.522256] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:44.996 [2024-07-13 08:20:36.522710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:44.996 [2024-07-13 08:20:36.522742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2002f70 with addr=10.0.0.2, port=4420 00:33:44.996 [2024-07-13 08:20:36.522760] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2002f70 is same with the state(5) to be set 00:33:44.996 [2024-07-13 08:20:36.523011] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2002f70 (9): Bad file descriptor 00:33:44.996 [2024-07-13 08:20:36.523254] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:44.996 [2024-07-13 08:20:36.523278] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:44.996 [2024-07-13 08:20:36.523294] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:44.996 [2024-07-13 08:20:36.526849] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:44.996 [2024-07-13 08:20:36.536095] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:44.996 [2024-07-13 08:20:36.536542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:44.996 [2024-07-13 08:20:36.536574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2002f70 with addr=10.0.0.2, port=4420 00:33:44.996 [2024-07-13 08:20:36.536592] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2002f70 is same with the state(5) to be set 00:33:44.996 [2024-07-13 08:20:36.536829] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2002f70 (9): Bad file descriptor 00:33:44.996 [2024-07-13 08:20:36.537080] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:44.996 [2024-07-13 08:20:36.537105] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:44.996 [2024-07-13 08:20:36.537120] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:44.996 [2024-07-13 08:20:36.540674] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:44.996 [2024-07-13 08:20:36.550117] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:44.996 [2024-07-13 08:20:36.550651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:44.996 [2024-07-13 08:20:36.550705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2002f70 with addr=10.0.0.2, port=4420 00:33:44.996 [2024-07-13 08:20:36.550723] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2002f70 is same with the state(5) to be set 00:33:44.996 [2024-07-13 08:20:36.550978] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2002f70 (9): Bad file descriptor 00:33:44.996 [2024-07-13 08:20:36.551221] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:44.996 [2024-07-13 08:20:36.551245] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:44.996 [2024-07-13 08:20:36.551262] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:44.996 [2024-07-13 08:20:36.554824] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:44.996 [2024-07-13 08:20:36.564060] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:44.996 [2024-07-13 08:20:36.564486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:44.996 [2024-07-13 08:20:36.564519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2002f70 with addr=10.0.0.2, port=4420 00:33:44.996 [2024-07-13 08:20:36.564537] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2002f70 is same with the state(5) to be set 00:33:44.996 [2024-07-13 08:20:36.564774] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2002f70 (9): Bad file descriptor 00:33:44.996 [2024-07-13 08:20:36.565028] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:44.996 [2024-07-13 08:20:36.565054] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:44.996 [2024-07-13 08:20:36.565069] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:44.996 [2024-07-13 08:20:36.568624] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:44.996 [2024-07-13 08:20:36.578065] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:44.996 [2024-07-13 08:20:36.578464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:44.996 [2024-07-13 08:20:36.578496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2002f70 with addr=10.0.0.2, port=4420 00:33:44.996 [2024-07-13 08:20:36.578514] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2002f70 is same with the state(5) to be set 00:33:44.996 [2024-07-13 08:20:36.578751] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2002f70 (9): Bad file descriptor 00:33:44.996 [2024-07-13 08:20:36.579004] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:44.996 [2024-07-13 08:20:36.579029] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:44.996 [2024-07-13 08:20:36.579045] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:44.996 [2024-07-13 08:20:36.582596] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:44.996 [2024-07-13 08:20:36.592030] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:44.996 [2024-07-13 08:20:36.592457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:44.996 [2024-07-13 08:20:36.592488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2002f70 with addr=10.0.0.2, port=4420 00:33:44.996 [2024-07-13 08:20:36.592506] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2002f70 is same with the state(5) to be set 00:33:44.996 [2024-07-13 08:20:36.592743] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2002f70 (9): Bad file descriptor 00:33:44.996 [2024-07-13 08:20:36.592997] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:44.996 [2024-07-13 08:20:36.593022] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:44.996 [2024-07-13 08:20:36.593043] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:44.996 [2024-07-13 08:20:36.596594] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:44.996 [2024-07-13 08:20:36.606029] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:44.996 [2024-07-13 08:20:36.606453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:44.996 [2024-07-13 08:20:36.606485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2002f70 with addr=10.0.0.2, port=4420 00:33:44.996 [2024-07-13 08:20:36.606503] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2002f70 is same with the state(5) to be set 00:33:44.996 [2024-07-13 08:20:36.606740] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2002f70 (9): Bad file descriptor 00:33:44.996 [2024-07-13 08:20:36.606993] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:44.996 [2024-07-13 08:20:36.607018] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:44.996 [2024-07-13 08:20:36.607034] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:44.996 [2024-07-13 08:20:36.610584] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:44.996 [2024-07-13 08:20:36.620017] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:44.996 [2024-07-13 08:20:36.620430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:44.996 [2024-07-13 08:20:36.620462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2002f70 with addr=10.0.0.2, port=4420 00:33:44.996 [2024-07-13 08:20:36.620481] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2002f70 is same with the state(5) to be set 00:33:44.996 [2024-07-13 08:20:36.620718] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2002f70 (9): Bad file descriptor 00:33:44.996 [2024-07-13 08:20:36.620972] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:44.997 [2024-07-13 08:20:36.620997] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:44.997 [2024-07-13 08:20:36.621013] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:44.997 [2024-07-13 08:20:36.624568] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:44.997 [2024-07-13 08:20:36.634007] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:44.997 [2024-07-13 08:20:36.634409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:44.997 [2024-07-13 08:20:36.634440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2002f70 with addr=10.0.0.2, port=4420 00:33:44.997 [2024-07-13 08:20:36.634458] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2002f70 is same with the state(5) to be set 00:33:44.997 [2024-07-13 08:20:36.634694] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2002f70 (9): Bad file descriptor 00:33:44.997 [2024-07-13 08:20:36.634948] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:44.997 [2024-07-13 08:20:36.634973] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:44.997 [2024-07-13 08:20:36.634988] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:44.997 [2024-07-13 08:20:36.638537] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:44.997 [2024-07-13 08:20:36.647968] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:44.997 [2024-07-13 08:20:36.648405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:44.997 [2024-07-13 08:20:36.648442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2002f70 with addr=10.0.0.2, port=4420 00:33:44.997 [2024-07-13 08:20:36.648461] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2002f70 is same with the state(5) to be set 00:33:44.997 [2024-07-13 08:20:36.648698] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2002f70 (9): Bad file descriptor 00:33:44.997 [2024-07-13 08:20:36.648951] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:44.997 [2024-07-13 08:20:36.648975] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:44.997 [2024-07-13 08:20:36.648991] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:44.997 [2024-07-13 08:20:36.652543] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:44.997 [2024-07-13 08:20:36.661980] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:44.997 [2024-07-13 08:20:36.662381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:44.997 [2024-07-13 08:20:36.662412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2002f70 with addr=10.0.0.2, port=4420 00:33:44.997 [2024-07-13 08:20:36.662430] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2002f70 is same with the state(5) to be set 00:33:44.997 [2024-07-13 08:20:36.662668] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2002f70 (9): Bad file descriptor 00:33:44.997 [2024-07-13 08:20:36.662921] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:44.997 [2024-07-13 08:20:36.662946] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:44.997 [2024-07-13 08:20:36.662961] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:44.997 [2024-07-13 08:20:36.666511] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:44.997 [2024-07-13 08:20:36.675947] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:44.997 [2024-07-13 08:20:36.676380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:44.997 [2024-07-13 08:20:36.676411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2002f70 with addr=10.0.0.2, port=4420 00:33:44.997 [2024-07-13 08:20:36.676429] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2002f70 is same with the state(5) to be set 00:33:44.997 [2024-07-13 08:20:36.676666] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2002f70 (9): Bad file descriptor 00:33:44.997 [2024-07-13 08:20:36.676919] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:44.997 [2024-07-13 08:20:36.676944] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:44.997 [2024-07-13 08:20:36.676960] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:44.997 [2024-07-13 08:20:36.680510] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:44.997 [2024-07-13 08:20:36.689937] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:44.997 [2024-07-13 08:20:36.690372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:44.997 [2024-07-13 08:20:36.690403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2002f70 with addr=10.0.0.2, port=4420 00:33:44.997 [2024-07-13 08:20:36.690421] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2002f70 is same with the state(5) to be set 00:33:44.997 [2024-07-13 08:20:36.690658] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2002f70 (9): Bad file descriptor 00:33:44.997 [2024-07-13 08:20:36.690916] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:44.997 [2024-07-13 08:20:36.690941] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:44.997 [2024-07-13 08:20:36.690957] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:44.997 [2024-07-13 08:20:36.694509] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:44.997 [2024-07-13 08:20:36.703880] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:44.997 [2024-07-13 08:20:36.704285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:44.997 [2024-07-13 08:20:36.704317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2002f70 with addr=10.0.0.2, port=4420 00:33:44.997 [2024-07-13 08:20:36.704335] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2002f70 is same with the state(5) to be set 00:33:44.997 [2024-07-13 08:20:36.704572] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2002f70 (9): Bad file descriptor 00:33:44.997 [2024-07-13 08:20:36.704814] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:44.997 [2024-07-13 08:20:36.704838] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:44.997 [2024-07-13 08:20:36.704854] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:44.997 [2024-07-13 08:20:36.708418] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:44.997 [2024-07-13 08:20:36.717845] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:44.997 [2024-07-13 08:20:36.718287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:44.997 [2024-07-13 08:20:36.718319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2002f70 with addr=10.0.0.2, port=4420 00:33:44.997 [2024-07-13 08:20:36.718338] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2002f70 is same with the state(5) to be set 00:33:44.997 [2024-07-13 08:20:36.718576] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2002f70 (9): Bad file descriptor 00:33:44.997 [2024-07-13 08:20:36.718817] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:44.997 [2024-07-13 08:20:36.718841] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:44.997 [2024-07-13 08:20:36.718857] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:44.997 [2024-07-13 08:20:36.722429] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:45.256 [2024-07-13 08:20:36.731924] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:45.256 [2024-07-13 08:20:36.732344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.256 [2024-07-13 08:20:36.732376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2002f70 with addr=10.0.0.2, port=4420 00:33:45.256 [2024-07-13 08:20:36.732394] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2002f70 is same with the state(5) to be set 00:33:45.256 [2024-07-13 08:20:36.732647] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2002f70 (9): Bad file descriptor 00:33:45.256 [2024-07-13 08:20:36.732902] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:45.256 [2024-07-13 08:20:36.732927] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:45.256 [2024-07-13 08:20:36.732943] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:45.256 [2024-07-13 08:20:36.736501] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:45.256 [2024-07-13 08:20:36.745937] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:45.256 [2024-07-13 08:20:36.746347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.256 [2024-07-13 08:20:36.746379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2002f70 with addr=10.0.0.2, port=4420 00:33:45.256 [2024-07-13 08:20:36.746397] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2002f70 is same with the state(5) to be set 00:33:45.256 [2024-07-13 08:20:36.746634] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2002f70 (9): Bad file descriptor 00:33:45.256 [2024-07-13 08:20:36.746886] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:45.256 [2024-07-13 08:20:36.746911] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:45.256 [2024-07-13 08:20:36.746927] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:45.256 [2024-07-13 08:20:36.750478] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:45.256 [2024-07-13 08:20:36.759911] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:45.256 [2024-07-13 08:20:36.760346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.256 [2024-07-13 08:20:36.760377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2002f70 with addr=10.0.0.2, port=4420 00:33:45.256 [2024-07-13 08:20:36.760395] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2002f70 is same with the state(5) to be set 00:33:45.256 [2024-07-13 08:20:36.760631] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2002f70 (9): Bad file descriptor 00:33:45.256 [2024-07-13 08:20:36.760884] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:45.256 [2024-07-13 08:20:36.760908] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:45.256 [2024-07-13 08:20:36.760924] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:45.256 [2024-07-13 08:20:36.764475] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:45.256 [2024-07-13 08:20:36.773915] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:45.256 [2024-07-13 08:20:36.774351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.256 [2024-07-13 08:20:36.774382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2002f70 with addr=10.0.0.2, port=4420 00:33:45.256 [2024-07-13 08:20:36.774400] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2002f70 is same with the state(5) to be set 00:33:45.256 [2024-07-13 08:20:36.774637] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2002f70 (9): Bad file descriptor 00:33:45.256 [2024-07-13 08:20:36.774890] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:45.256 [2024-07-13 08:20:36.774915] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:45.256 [2024-07-13 08:20:36.774931] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:45.256 [2024-07-13 08:20:36.778482] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:45.256 [2024-07-13 08:20:36.787917] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:45.256 [2024-07-13 08:20:36.788319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.256 [2024-07-13 08:20:36.788351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2002f70 with addr=10.0.0.2, port=4420 00:33:45.256 [2024-07-13 08:20:36.788375] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2002f70 is same with the state(5) to be set 00:33:45.256 [2024-07-13 08:20:36.788614] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2002f70 (9): Bad file descriptor 00:33:45.256 [2024-07-13 08:20:36.788855] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:45.256 [2024-07-13 08:20:36.788890] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:45.256 [2024-07-13 08:20:36.788907] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:45.256 [2024-07-13 08:20:36.792463] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:45.256 [2024-07-13 08:20:36.801908] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:45.256 [2024-07-13 08:20:36.802309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.256 [2024-07-13 08:20:36.802340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2002f70 with addr=10.0.0.2, port=4420 00:33:45.256 [2024-07-13 08:20:36.802358] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2002f70 is same with the state(5) to be set 00:33:45.256 [2024-07-13 08:20:36.802596] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2002f70 (9): Bad file descriptor 00:33:45.256 [2024-07-13 08:20:36.802838] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:45.256 [2024-07-13 08:20:36.802862] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:45.256 [2024-07-13 08:20:36.802888] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:45.256 [2024-07-13 08:20:36.806440] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:45.256 [2024-07-13 08:20:36.815844] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:45.256 [2024-07-13 08:20:36.816252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.256 [2024-07-13 08:20:36.816283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2002f70 with addr=10.0.0.2, port=4420 00:33:45.256 [2024-07-13 08:20:36.816301] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2002f70 is same with the state(5) to be set 00:33:45.256 [2024-07-13 08:20:36.816538] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2002f70 (9): Bad file descriptor 00:33:45.256 [2024-07-13 08:20:36.816781] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:45.256 [2024-07-13 08:20:36.816805] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:45.256 [2024-07-13 08:20:36.816821] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:45.256 [2024-07-13 08:20:36.820381] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:45.256 [2024-07-13 08:20:36.829829] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:45.256 [2024-07-13 08:20:36.830264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.256 [2024-07-13 08:20:36.830296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2002f70 with addr=10.0.0.2, port=4420 00:33:45.256 [2024-07-13 08:20:36.830314] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2002f70 is same with the state(5) to be set 00:33:45.257 [2024-07-13 08:20:36.830551] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2002f70 (9): Bad file descriptor 00:33:45.257 [2024-07-13 08:20:36.830793] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:45.257 [2024-07-13 08:20:36.830824] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:45.257 [2024-07-13 08:20:36.830841] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:45.257 [2024-07-13 08:20:36.834403] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:45.257 [2024-07-13 08:20:36.843835] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:45.257 [2024-07-13 08:20:36.844274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.257 [2024-07-13 08:20:36.844305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2002f70 with addr=10.0.0.2, port=4420 00:33:45.257 [2024-07-13 08:20:36.844324] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2002f70 is same with the state(5) to be set 00:33:45.257 [2024-07-13 08:20:36.844560] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2002f70 (9): Bad file descriptor 00:33:45.257 [2024-07-13 08:20:36.844802] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:45.257 [2024-07-13 08:20:36.844826] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:45.257 [2024-07-13 08:20:36.844842] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:45.257 [2024-07-13 08:20:36.848398] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:45.257 [2024-07-13 08:20:36.857829] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:45.257 [2024-07-13 08:20:36.858262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.257 [2024-07-13 08:20:36.858294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2002f70 with addr=10.0.0.2, port=4420 00:33:45.257 [2024-07-13 08:20:36.858312] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2002f70 is same with the state(5) to be set 00:33:45.257 [2024-07-13 08:20:36.858549] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2002f70 (9): Bad file descriptor 00:33:45.257 [2024-07-13 08:20:36.858791] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:45.257 [2024-07-13 08:20:36.858815] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:45.257 [2024-07-13 08:20:36.858831] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:45.257 [2024-07-13 08:20:36.862390] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:45.257 [2024-07-13 08:20:36.871821] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:45.257 [2024-07-13 08:20:36.872256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.257 [2024-07-13 08:20:36.872288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2002f70 with addr=10.0.0.2, port=4420 00:33:45.257 [2024-07-13 08:20:36.872307] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2002f70 is same with the state(5) to be set 00:33:45.257 [2024-07-13 08:20:36.872543] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2002f70 (9): Bad file descriptor 00:33:45.257 [2024-07-13 08:20:36.872785] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:45.257 [2024-07-13 08:20:36.872809] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:45.257 [2024-07-13 08:20:36.872825] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:45.257 [2024-07-13 08:20:36.876384] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:45.257 [2024-07-13 08:20:36.885823] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:45.257 [2024-07-13 08:20:36.886264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:45.257 [2024-07-13 08:20:36.886296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2002f70 with addr=10.0.0.2, port=4420 00:33:45.257 [2024-07-13 08:20:36.886314] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2002f70 is same with the state(5) to be set 00:33:45.257 [2024-07-13 08:20:36.886552] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2002f70 (9): Bad file descriptor 00:33:45.257 [2024-07-13 08:20:36.886794] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:45.257 [2024-07-13 08:20:36.886818] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:45.257 [2024-07-13 08:20:36.886834] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:45.257 [2024-07-13 08:20:36.890392] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
[... 50 further identical reset/reconnect-failure cycles for tqpair=0x2002f70 (resetting controller; connect() failed, errno = 111; Resetting controller failed.) omitted, spanning 2024-07-13 08:20:36.899 through 08:20:37.588 ...]
00:33:46.039 [2024-07-13 08:20:37.597602] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:46.039 [2024-07-13 08:20:37.598034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.039 [2024-07-13 08:20:37.598066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2002f70 with addr=10.0.0.2, port=4420 00:33:46.039 [2024-07-13 08:20:37.598083] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2002f70 is same with the state(5) to be set 00:33:46.039 [2024-07-13 08:20:37.598321] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2002f70 (9): Bad file descriptor 00:33:46.039 [2024-07-13 08:20:37.598563] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:46.039 [2024-07-13 08:20:37.598586] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:46.039 [2024-07-13 08:20:37.598602] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:46.039 [2024-07-13 08:20:37.602162] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:46.039 [2024-07-13 08:20:37.611600] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:46.039 [2024-07-13 08:20:37.612163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.039 [2024-07-13 08:20:37.612227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2002f70 with addr=10.0.0.2, port=4420 00:33:46.040 [2024-07-13 08:20:37.612245] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2002f70 is same with the state(5) to be set 00:33:46.040 [2024-07-13 08:20:37.612482] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2002f70 (9): Bad file descriptor 00:33:46.040 [2024-07-13 08:20:37.612724] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:46.040 [2024-07-13 08:20:37.612748] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:46.040 [2024-07-13 08:20:37.612764] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:46.040 [2024-07-13 08:20:37.616325] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:46.040 [2024-07-13 08:20:37.625547] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:46.040 [2024-07-13 08:20:37.625994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.040 [2024-07-13 08:20:37.626027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2002f70 with addr=10.0.0.2, port=4420 00:33:46.040 [2024-07-13 08:20:37.626045] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2002f70 is same with the state(5) to be set 00:33:46.040 [2024-07-13 08:20:37.626282] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2002f70 (9): Bad file descriptor 00:33:46.040 [2024-07-13 08:20:37.626524] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:46.040 [2024-07-13 08:20:37.626548] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:46.040 [2024-07-13 08:20:37.626564] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:46.040 [2024-07-13 08:20:37.630133] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:46.040 [2024-07-13 08:20:37.639559] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:46.040 [2024-07-13 08:20:37.639991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.040 [2024-07-13 08:20:37.640022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2002f70 with addr=10.0.0.2, port=4420 00:33:46.040 [2024-07-13 08:20:37.640040] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2002f70 is same with the state(5) to be set 00:33:46.040 [2024-07-13 08:20:37.640277] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2002f70 (9): Bad file descriptor 00:33:46.040 [2024-07-13 08:20:37.640520] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:46.040 [2024-07-13 08:20:37.640543] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:46.040 [2024-07-13 08:20:37.640559] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:46.040 [2024-07-13 08:20:37.644119] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:46.040 [2024-07-13 08:20:37.653547] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:46.040 [2024-07-13 08:20:37.653972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.040 [2024-07-13 08:20:37.654004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2002f70 with addr=10.0.0.2, port=4420 00:33:46.040 [2024-07-13 08:20:37.654022] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2002f70 is same with the state(5) to be set 00:33:46.040 [2024-07-13 08:20:37.654260] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2002f70 (9): Bad file descriptor 00:33:46.040 [2024-07-13 08:20:37.654502] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:46.040 [2024-07-13 08:20:37.654525] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:46.040 [2024-07-13 08:20:37.654541] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:46.040 [2024-07-13 08:20:37.658110] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:46.040 [2024-07-13 08:20:37.667357] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:46.040 [2024-07-13 08:20:37.667786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.040 [2024-07-13 08:20:37.667819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2002f70 with addr=10.0.0.2, port=4420 00:33:46.040 [2024-07-13 08:20:37.667838] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2002f70 is same with the state(5) to be set 00:33:46.040 [2024-07-13 08:20:37.668087] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2002f70 (9): Bad file descriptor 00:33:46.040 [2024-07-13 08:20:37.668329] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:46.040 [2024-07-13 08:20:37.668353] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:46.040 [2024-07-13 08:20:37.668369] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:46.040 [2024-07-13 08:20:37.671926] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:46.040 [2024-07-13 08:20:37.681358] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:46.040 [2024-07-13 08:20:37.681785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.040 [2024-07-13 08:20:37.681822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2002f70 with addr=10.0.0.2, port=4420 00:33:46.040 [2024-07-13 08:20:37.681841] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2002f70 is same with the state(5) to be set 00:33:46.040 [2024-07-13 08:20:37.682089] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2002f70 (9): Bad file descriptor 00:33:46.040 [2024-07-13 08:20:37.682332] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:46.040 [2024-07-13 08:20:37.682356] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:46.040 [2024-07-13 08:20:37.682371] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:46.040 [2024-07-13 08:20:37.685929] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:46.040 [2024-07-13 08:20:37.695359] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:46.040 [2024-07-13 08:20:37.695765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.040 [2024-07-13 08:20:37.695796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2002f70 with addr=10.0.0.2, port=4420 00:33:46.040 [2024-07-13 08:20:37.695814] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2002f70 is same with the state(5) to be set 00:33:46.040 [2024-07-13 08:20:37.696062] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2002f70 (9): Bad file descriptor 00:33:46.040 [2024-07-13 08:20:37.696305] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:46.040 [2024-07-13 08:20:37.696329] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:46.040 [2024-07-13 08:20:37.696345] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:46.040 [2024-07-13 08:20:37.699903] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:46.040 [2024-07-13 08:20:37.709331] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:46.040 [2024-07-13 08:20:37.709766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.040 [2024-07-13 08:20:37.709797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2002f70 with addr=10.0.0.2, port=4420 00:33:46.040 [2024-07-13 08:20:37.709821] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2002f70 is same with the state(5) to be set 00:33:46.040 [2024-07-13 08:20:37.710069] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2002f70 (9): Bad file descriptor 00:33:46.040 [2024-07-13 08:20:37.710312] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:46.040 [2024-07-13 08:20:37.710336] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:46.040 [2024-07-13 08:20:37.710352] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:46.040 [2024-07-13 08:20:37.713909] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:46.040 [2024-07-13 08:20:37.723345] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:46.040 [2024-07-13 08:20:37.723754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.040 [2024-07-13 08:20:37.723785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2002f70 with addr=10.0.0.2, port=4420 00:33:46.040 [2024-07-13 08:20:37.723803] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2002f70 is same with the state(5) to be set 00:33:46.040 [2024-07-13 08:20:37.724084] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2002f70 (9): Bad file descriptor 00:33:46.040 [2024-07-13 08:20:37.724333] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:46.040 [2024-07-13 08:20:37.724357] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:46.040 [2024-07-13 08:20:37.724373] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:46.040 [2024-07-13 08:20:37.727936] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:46.040 [2024-07-13 08:20:37.737169] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:46.040 [2024-07-13 08:20:37.737571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.040 [2024-07-13 08:20:37.737604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2002f70 with addr=10.0.0.2, port=4420 00:33:46.040 [2024-07-13 08:20:37.737622] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2002f70 is same with the state(5) to be set 00:33:46.040 [2024-07-13 08:20:37.737860] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2002f70 (9): Bad file descriptor 00:33:46.040 [2024-07-13 08:20:37.738128] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:46.040 [2024-07-13 08:20:37.738152] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:46.040 [2024-07-13 08:20:37.738167] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:46.040 [2024-07-13 08:20:37.741721] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:46.040 [2024-07-13 08:20:37.751172] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:46.040 [2024-07-13 08:20:37.751574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.040 [2024-07-13 08:20:37.751607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2002f70 with addr=10.0.0.2, port=4420 00:33:46.040 [2024-07-13 08:20:37.751625] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2002f70 is same with the state(5) to be set 00:33:46.040 [2024-07-13 08:20:37.751863] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2002f70 (9): Bad file descriptor 00:33:46.040 [2024-07-13 08:20:37.752120] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:46.040 [2024-07-13 08:20:37.752145] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:46.040 [2024-07-13 08:20:37.752161] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:46.040 [2024-07-13 08:20:37.755713] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:46.040 [2024-07-13 08:20:37.765160] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:46.040 [2024-07-13 08:20:37.765574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.041 [2024-07-13 08:20:37.765606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2002f70 with addr=10.0.0.2, port=4420 00:33:46.041 [2024-07-13 08:20:37.765625] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2002f70 is same with the state(5) to be set 00:33:46.041 [2024-07-13 08:20:37.765863] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2002f70 (9): Bad file descriptor 00:33:46.041 [2024-07-13 08:20:37.766117] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:46.041 [2024-07-13 08:20:37.766142] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:46.041 [2024-07-13 08:20:37.766158] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:46.041 [2024-07-13 08:20:37.769823] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:46.303 [2024-07-13 08:20:37.779246] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:46.303 [2024-07-13 08:20:37.779679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.303 [2024-07-13 08:20:37.779713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2002f70 with addr=10.0.0.2, port=4420 00:33:46.303 [2024-07-13 08:20:37.779732] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2002f70 is same with the state(5) to be set 00:33:46.303 [2024-07-13 08:20:37.779988] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2002f70 (9): Bad file descriptor 00:33:46.303 [2024-07-13 08:20:37.780230] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:46.303 [2024-07-13 08:20:37.780255] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:46.303 [2024-07-13 08:20:37.780272] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:46.303 [2024-07-13 08:20:37.783828] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:46.303 [2024-07-13 08:20:37.793092] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:46.303 [2024-07-13 08:20:37.793523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.303 [2024-07-13 08:20:37.793555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2002f70 with addr=10.0.0.2, port=4420 00:33:46.303 [2024-07-13 08:20:37.793574] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2002f70 is same with the state(5) to be set 00:33:46.303 [2024-07-13 08:20:37.793813] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2002f70 (9): Bad file descriptor 00:33:46.303 [2024-07-13 08:20:37.794067] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:46.303 [2024-07-13 08:20:37.794091] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:46.303 [2024-07-13 08:20:37.794107] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:46.303 [2024-07-13 08:20:37.797658] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:46.303 [2024-07-13 08:20:37.807121] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:46.303 [2024-07-13 08:20:37.807527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.303 [2024-07-13 08:20:37.807560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2002f70 with addr=10.0.0.2, port=4420 00:33:46.303 [2024-07-13 08:20:37.807580] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2002f70 is same with the state(5) to be set 00:33:46.303 [2024-07-13 08:20:37.807818] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2002f70 (9): Bad file descriptor 00:33:46.303 [2024-07-13 08:20:37.808073] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:46.303 [2024-07-13 08:20:37.808100] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:46.303 [2024-07-13 08:20:37.808116] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:46.303 [2024-07-13 08:20:37.811672] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:46.303 [2024-07-13 08:20:37.821127] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:46.303 [2024-07-13 08:20:37.821552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.303 [2024-07-13 08:20:37.821584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2002f70 with addr=10.0.0.2, port=4420 00:33:46.303 [2024-07-13 08:20:37.821608] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2002f70 is same with the state(5) to be set 00:33:46.303 [2024-07-13 08:20:37.821846] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2002f70 (9): Bad file descriptor 00:33:46.303 [2024-07-13 08:20:37.822106] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:46.303 [2024-07-13 08:20:37.822132] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:46.303 [2024-07-13 08:20:37.822149] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:46.303 [2024-07-13 08:20:37.825709] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:46.303 [2024-07-13 08:20:37.834953] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:46.303 [2024-07-13 08:20:37.835391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.303 [2024-07-13 08:20:37.835424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2002f70 with addr=10.0.0.2, port=4420 00:33:46.304 [2024-07-13 08:20:37.835443] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2002f70 is same with the state(5) to be set 00:33:46.304 [2024-07-13 08:20:37.835682] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2002f70 (9): Bad file descriptor 00:33:46.304 [2024-07-13 08:20:37.835938] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:46.304 [2024-07-13 08:20:37.835964] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:46.304 [2024-07-13 08:20:37.835980] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:46.304 [2024-07-13 08:20:37.839537] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:46.304 [2024-07-13 08:20:37.848779] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:46.304 [2024-07-13 08:20:37.849199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.304 [2024-07-13 08:20:37.849231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2002f70 with addr=10.0.0.2, port=4420 00:33:46.304 [2024-07-13 08:20:37.849249] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2002f70 is same with the state(5) to be set 00:33:46.304 [2024-07-13 08:20:37.849486] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2002f70 (9): Bad file descriptor 00:33:46.304 [2024-07-13 08:20:37.849738] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:46.304 [2024-07-13 08:20:37.849763] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:46.304 [2024-07-13 08:20:37.849779] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:46.304 [2024-07-13 08:20:37.853365] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:46.304 [2024-07-13 08:20:37.862605] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:46.304 [2024-07-13 08:20:37.863005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.304 [2024-07-13 08:20:37.863037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2002f70 with addr=10.0.0.2, port=4420 00:33:46.304 [2024-07-13 08:20:37.863056] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2002f70 is same with the state(5) to be set 00:33:46.304 [2024-07-13 08:20:37.863294] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2002f70 (9): Bad file descriptor 00:33:46.304 [2024-07-13 08:20:37.863537] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:46.304 [2024-07-13 08:20:37.863568] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:46.304 [2024-07-13 08:20:37.863585] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:46.304 [2024-07-13 08:20:37.867151] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:46.304 [2024-07-13 08:20:37.876604] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:46.304 [2024-07-13 08:20:37.877040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.304 [2024-07-13 08:20:37.877073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2002f70 with addr=10.0.0.2, port=4420 00:33:46.304 [2024-07-13 08:20:37.877092] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2002f70 is same with the state(5) to be set 00:33:46.304 [2024-07-13 08:20:37.877330] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2002f70 (9): Bad file descriptor 00:33:46.304 [2024-07-13 08:20:37.877572] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:46.304 [2024-07-13 08:20:37.877597] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:46.304 [2024-07-13 08:20:37.877615] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:46.304 [2024-07-13 08:20:37.881182] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:46.304 [2024-07-13 08:20:37.890618] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:46.304 [2024-07-13 08:20:37.891056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.304 [2024-07-13 08:20:37.891089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2002f70 with addr=10.0.0.2, port=4420 00:33:46.304 [2024-07-13 08:20:37.891107] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2002f70 is same with the state(5) to be set 00:33:46.304 [2024-07-13 08:20:37.891346] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2002f70 (9): Bad file descriptor 00:33:46.304 [2024-07-13 08:20:37.891588] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:46.304 [2024-07-13 08:20:37.891613] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:46.304 [2024-07-13 08:20:37.891629] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:46.304 [2024-07-13 08:20:37.895192] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:46.304 [2024-07-13 08:20:37.904626] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:46.304 [2024-07-13 08:20:37.905062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.304 [2024-07-13 08:20:37.905094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2002f70 with addr=10.0.0.2, port=4420 00:33:46.304 [2024-07-13 08:20:37.905113] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2002f70 is same with the state(5) to be set 00:33:46.304 [2024-07-13 08:20:37.905352] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2002f70 (9): Bad file descriptor 00:33:46.304 [2024-07-13 08:20:37.905595] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:46.304 [2024-07-13 08:20:37.905620] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:46.304 [2024-07-13 08:20:37.905636] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:46.304 [2024-07-13 08:20:37.909201] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:46.304 [2024-07-13 08:20:37.918641] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:46.304 [2024-07-13 08:20:37.919093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.304 [2024-07-13 08:20:37.919126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2002f70 with addr=10.0.0.2, port=4420 00:33:46.304 [2024-07-13 08:20:37.919145] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2002f70 is same with the state(5) to be set 00:33:46.304 [2024-07-13 08:20:37.919384] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2002f70 (9): Bad file descriptor 00:33:46.304 [2024-07-13 08:20:37.919627] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:46.304 [2024-07-13 08:20:37.919652] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:46.304 [2024-07-13 08:20:37.919668] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:46.304 [2024-07-13 08:20:37.923242] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:46.304 [2024-07-13 08:20:37.932471] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:46.304 [2024-07-13 08:20:37.932878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.304 [2024-07-13 08:20:37.932910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2002f70 with addr=10.0.0.2, port=4420 00:33:46.304 [2024-07-13 08:20:37.932928] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2002f70 is same with the state(5) to be set 00:33:46.304 [2024-07-13 08:20:37.933164] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2002f70 (9): Bad file descriptor 00:33:46.304 [2024-07-13 08:20:37.933406] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:46.304 [2024-07-13 08:20:37.933431] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:46.304 [2024-07-13 08:20:37.933447] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:46.304 [2024-07-13 08:20:37.937008] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:46.304 [2024-07-13 08:20:37.946449] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:46.304 [2024-07-13 08:20:37.946877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.304 [2024-07-13 08:20:37.946909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2002f70 with addr=10.0.0.2, port=4420 00:33:46.304 [2024-07-13 08:20:37.946927] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2002f70 is same with the state(5) to be set 00:33:46.304 [2024-07-13 08:20:37.947164] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2002f70 (9): Bad file descriptor 00:33:46.304 [2024-07-13 08:20:37.947405] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:46.304 [2024-07-13 08:20:37.947430] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:46.304 [2024-07-13 08:20:37.947447] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:46.304 [2024-07-13 08:20:37.951013] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:46.304 [2024-07-13 08:20:37.960451] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:46.304 [2024-07-13 08:20:37.960881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.304 [2024-07-13 08:20:37.960913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2002f70 with addr=10.0.0.2, port=4420 00:33:46.304 [2024-07-13 08:20:37.960932] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2002f70 is same with the state(5) to be set 00:33:46.304 [2024-07-13 08:20:37.961177] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2002f70 (9): Bad file descriptor 00:33:46.304 [2024-07-13 08:20:37.961419] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:46.304 [2024-07-13 08:20:37.961444] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:46.304 [2024-07-13 08:20:37.961460] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:46.304 [2024-07-13 08:20:37.965026] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:46.304 [2024-07-13 08:20:37.974466] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:46.304 [2024-07-13 08:20:37.974925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.304 [2024-07-13 08:20:37.974959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2002f70 with addr=10.0.0.2, port=4420 00:33:46.304 [2024-07-13 08:20:37.974977] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2002f70 is same with the state(5) to be set 00:33:46.304 [2024-07-13 08:20:37.975216] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2002f70 (9): Bad file descriptor 00:33:46.304 [2024-07-13 08:20:37.975457] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:46.304 [2024-07-13 08:20:37.975482] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:46.304 [2024-07-13 08:20:37.975498] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:46.304 [2024-07-13 08:20:37.979064] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:46.304 [2024-07-13 08:20:37.988504] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:46.304 [2024-07-13 08:20:37.988947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.305 [2024-07-13 08:20:37.988980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2002f70 with addr=10.0.0.2, port=4420 00:33:46.305 [2024-07-13 08:20:37.988998] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2002f70 is same with the state(5) to be set 00:33:46.305 [2024-07-13 08:20:37.989236] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2002f70 (9): Bad file descriptor 00:33:46.305 [2024-07-13 08:20:37.989478] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:46.305 [2024-07-13 08:20:37.989503] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:46.305 [2024-07-13 08:20:37.989519] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:46.305 [2024-07-13 08:20:37.993086] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:46.305 [2024-07-13 08:20:38.002327] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:46.305 [2024-07-13 08:20:38.002763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.305 [2024-07-13 08:20:38.002795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2002f70 with addr=10.0.0.2, port=4420 00:33:46.305 [2024-07-13 08:20:38.002814] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2002f70 is same with the state(5) to be set 00:33:46.305 [2024-07-13 08:20:38.003064] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2002f70 (9): Bad file descriptor 00:33:46.305 [2024-07-13 08:20:38.003306] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:46.305 [2024-07-13 08:20:38.003331] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:46.305 [2024-07-13 08:20:38.003353] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:46.305 [2024-07-13 08:20:38.006917] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:46.305 [2024-07-13 08:20:38.016150] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:46.305 [2024-07-13 08:20:38.016585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.305 [2024-07-13 08:20:38.016617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2002f70 with addr=10.0.0.2, port=4420 00:33:46.305 [2024-07-13 08:20:38.016635] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2002f70 is same with the state(5) to be set 00:33:46.305 [2024-07-13 08:20:38.016884] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2002f70 (9): Bad file descriptor 00:33:46.305 [2024-07-13 08:20:38.017126] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:46.305 [2024-07-13 08:20:38.017151] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:46.305 [2024-07-13 08:20:38.017168] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:46.305 [2024-07-13 08:20:38.020722] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:46.305 [2024-07-13 08:20:38.030292] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:46.305 [2024-07-13 08:20:38.030768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.305 [2024-07-13 08:20:38.030808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2002f70 with addr=10.0.0.2, port=4420 00:33:46.305 [2024-07-13 08:20:38.030836] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2002f70 is same with the state(5) to be set 00:33:46.305 [2024-07-13 08:20:38.031088] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2002f70 (9): Bad file descriptor 00:33:46.305 [2024-07-13 08:20:38.031357] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:46.305 [2024-07-13 08:20:38.031384] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:46.305 [2024-07-13 08:20:38.031401] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:46.570 [2024-07-13 08:20:38.035313] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:46.570 [2024-07-13 08:20:38.044321] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:46.570 [2024-07-13 08:20:38.044829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.570 [2024-07-13 08:20:38.044902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2002f70 with addr=10.0.0.2, port=4420 00:33:46.571 [2024-07-13 08:20:38.044926] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2002f70 is same with the state(5) to be set 00:33:46.571 [2024-07-13 08:20:38.045165] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2002f70 (9): Bad file descriptor 00:33:46.571 [2024-07-13 08:20:38.045422] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:46.571 [2024-07-13 08:20:38.045449] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:46.571 [2024-07-13 08:20:38.045466] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:46.571 [2024-07-13 08:20:38.049173] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:46.571 [2024-07-13 08:20:38.058382] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:46.571 [2024-07-13 08:20:38.058905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.571 [2024-07-13 08:20:38.058962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2002f70 with addr=10.0.0.2, port=4420 00:33:46.571 [2024-07-13 08:20:38.058982] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2002f70 is same with the state(5) to be set 00:33:46.571 [2024-07-13 08:20:38.059220] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2002f70 (9): Bad file descriptor 00:33:46.571 [2024-07-13 08:20:38.059460] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:46.571 [2024-07-13 08:20:38.059486] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:46.571 [2024-07-13 08:20:38.059502] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:46.571 [2024-07-13 08:20:38.063071] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:46.571 [2024-07-13 08:20:38.072309] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:46.571 [2024-07-13 08:20:38.072736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.571 [2024-07-13 08:20:38.072768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2002f70 with addr=10.0.0.2, port=4420 00:33:46.571 [2024-07-13 08:20:38.072786] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2002f70 is same with the state(5) to be set 00:33:46.571 [2024-07-13 08:20:38.073035] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2002f70 (9): Bad file descriptor 00:33:46.571 [2024-07-13 08:20:38.073278] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:46.571 [2024-07-13 08:20:38.073302] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:46.571 [2024-07-13 08:20:38.073319] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:46.571 [2024-07-13 08:20:38.076883] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:46.571 [2024-07-13 08:20:38.086137] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:46.571 [2024-07-13 08:20:38.086621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.571 [2024-07-13 08:20:38.086653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2002f70 with addr=10.0.0.2, port=4420 00:33:46.571 [2024-07-13 08:20:38.086672] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2002f70 is same with the state(5) to be set 00:33:46.571 [2024-07-13 08:20:38.086924] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2002f70 (9): Bad file descriptor 00:33:46.571 [2024-07-13 08:20:38.087165] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:46.571 [2024-07-13 08:20:38.087190] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:46.571 [2024-07-13 08:20:38.087206] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:46.571 [2024-07-13 08:20:38.090761] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:46.571 [2024-07-13 08:20:38.099998] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:46.571 [2024-07-13 08:20:38.100437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.571 [2024-07-13 08:20:38.100470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2002f70 with addr=10.0.0.2, port=4420 00:33:46.571 [2024-07-13 08:20:38.100488] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2002f70 is same with the state(5) to be set 00:33:46.571 [2024-07-13 08:20:38.100725] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2002f70 (9): Bad file descriptor 00:33:46.571 [2024-07-13 08:20:38.100986] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:46.571 [2024-07-13 08:20:38.101013] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:46.571 [2024-07-13 08:20:38.101029] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:46.571 [2024-07-13 08:20:38.104583] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:46.571 [2024-07-13 08:20:38.113813] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:46.571 [2024-07-13 08:20:38.114257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:46.571 [2024-07-13 08:20:38.114290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2002f70 with addr=10.0.0.2, port=4420 00:33:46.571 [2024-07-13 08:20:38.114308] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2002f70 is same with the state(5) to be set 00:33:46.571 [2024-07-13 08:20:38.114545] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2002f70 (9): Bad file descriptor 00:33:46.571 [2024-07-13 08:20:38.114786] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:46.571 [2024-07-13 08:20:38.114811] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:46.571 [2024-07-13 08:20:38.114828] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:46.571 [2024-07-13 08:20:38.118394] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
[... the identical nine-message reset cycle repeats, verbatim apart from timestamps, roughly every 14 ms from [2024-07-13 08:20:38.127632] through [2024-07-13 08:20:38.761054] (elapsed markers 00:33:46.571 through 00:33:47.094): resetting controller, connect() failed, errno = 111, sock connection error of tqpair=0x2002f70 with addr=10.0.0.2, port=4420, recv-state warning, Failed to flush tqpair=0x2002f70 (9): Bad file descriptor, Ctrlr is in error state, controller reinitialization failed, in failed state, Resetting controller failed. ...]
00:33:47.094 [2024-07-13 08:20:38.770488] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:47.094 [2024-07-13 08:20:38.770922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.094 [2024-07-13 08:20:38.770955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2002f70 with addr=10.0.0.2, port=4420 00:33:47.094 [2024-07-13 08:20:38.770974] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2002f70 is same with the state(5) to be set 00:33:47.094 [2024-07-13 08:20:38.771211] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2002f70 (9): Bad file descriptor 00:33:47.094 [2024-07-13 08:20:38.771455] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:47.094 [2024-07-13 08:20:38.771479] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:47.094 [2024-07-13 08:20:38.771495] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:47.094 [2024-07-13 08:20:38.775053] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:47.094 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 2109915 Killed "${NVMF_APP[@]}" "$@" 00:33:47.094 08:20:38 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:33:47.094 08:20:38 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:33:47.094 08:20:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:47.095 08:20:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:33:47.095 08:20:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:47.095 [2024-07-13 08:20:38.784494] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:47.095 [2024-07-13 08:20:38.784926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.095 [2024-07-13 08:20:38.784958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2002f70 with addr=10.0.0.2, port=4420 00:33:47.095 [2024-07-13 08:20:38.784976] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2002f70 is same with the state(5) to be set 00:33:47.095 08:20:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=2110991 [2024-07-13 08:20:38.785213] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2002f70 (9): Bad file descriptor 00:33:47.095 08:20:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 2110991 00:33:47.095 08:20:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:33:47.095 [2024-07-13 08:20:38.785456] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:47.095 08:20:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 2110991 ']' 00:33:47.095 [2024-07-13 08:20:38.785484] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:47.095 [2024-07-13 08:20:38.785500]
nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:47.095 08:20:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:47.095 08:20:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:47.095 08:20:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:47.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:47.095 08:20:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:47.095 08:20:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:47.095 [2024-07-13 08:20:38.789064] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:47.095 [2024-07-13 08:20:38.798498] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:47.095 [2024-07-13 08:20:38.798900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.095 [2024-07-13 08:20:38.798932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2002f70 with addr=10.0.0.2, port=4420 00:33:47.095 [2024-07-13 08:20:38.798951] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2002f70 is same with the state(5) to be set 00:33:47.095 [2024-07-13 08:20:38.799188] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2002f70 (9): Bad file descriptor 00:33:47.095 [2024-07-13 08:20:38.799430] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:47.095 [2024-07-13 08:20:38.799453] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:47.095 [2024-07-13 08:20:38.799468] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:47.095 [2024-07-13 08:20:38.803031] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:47.095 [2024-07-13 08:20:38.812466] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:47.095 [2024-07-13 08:20:38.812895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.095 [2024-07-13 08:20:38.812926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2002f70 with addr=10.0.0.2, port=4420 00:33:47.095 [2024-07-13 08:20:38.812944] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2002f70 is same with the state(5) to be set 00:33:47.095 [2024-07-13 08:20:38.813181] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2002f70 (9): Bad file descriptor 00:33:47.095 [2024-07-13 08:20:38.813422] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:47.095 [2024-07-13 08:20:38.813445] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:47.095 [2024-07-13 08:20:38.813461] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:33:47.095 [2024-07-13 08:20:38.817018] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:47.357 [2024-07-13 08:20:38.826466] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:47.357 [2024-07-13 08:20:38.826879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.357 [2024-07-13 08:20:38.826911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2002f70 with addr=10.0.0.2, port=4420 00:33:47.357 [2024-07-13 08:20:38.826929] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2002f70 is same with the state(5) to be set 00:33:47.357 [2024-07-13 08:20:38.827166] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2002f70 (9): Bad file descriptor 00:33:47.357 [2024-07-13 08:20:38.827407] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:47.357 [2024-07-13 08:20:38.827430] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:47.357 [2024-07-13 08:20:38.827445] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:47.357 [2024-07-13 08:20:38.831092] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:47.357 [2024-07-13 08:20:38.835572] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:33:47.357 [2024-07-13 08:20:38.835661] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:47.357 [2024-07-13 08:20:38.840317] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:47.357 [2024-07-13 08:20:38.840722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.357 [2024-07-13 08:20:38.840764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2002f70 with addr=10.0.0.2, port=4420 00:33:47.357 [2024-07-13 08:20:38.840781] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2002f70 is same with the state(5) to be set 00:33:47.357 [2024-07-13 08:20:38.841034] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2002f70 (9): Bad file descriptor 00:33:47.357 [2024-07-13 08:20:38.841255] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:47.357 [2024-07-13 08:20:38.841275] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:47.357 [2024-07-13 08:20:38.841289] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:47.357 [2024-07-13 08:20:38.844366] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:47.357 [2024-07-13 08:20:38.853566] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:47.357 [2024-07-13 08:20:38.853954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.358 [2024-07-13 08:20:38.853983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2002f70 with addr=10.0.0.2, port=4420 00:33:47.358 [2024-07-13 08:20:38.854000] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2002f70 is same with the state(5) to be set 00:33:47.358 [2024-07-13 08:20:38.854232] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2002f70 (9): Bad file descriptor 00:33:47.358 [2024-07-13 08:20:38.854454] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:47.358 [2024-07-13 08:20:38.854473] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:47.358 [2024-07-13 08:20:38.854486] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:47.358 [2024-07-13 08:20:38.857733] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:47.358 [2024-07-13 08:20:38.866880] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:47.358 [2024-07-13 08:20:38.867268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.358 [2024-07-13 08:20:38.867294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2002f70 with addr=10.0.0.2, port=4420 00:33:47.358 [2024-07-13 08:20:38.867324] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2002f70 is same with the state(5) to be set 00:33:47.358 [2024-07-13 08:20:38.867559] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2002f70 (9): Bad file descriptor 00:33:47.358 [2024-07-13 08:20:38.867757] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:47.358 [2024-07-13 08:20:38.867776] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:47.358 [2024-07-13 08:20:38.867788] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:47.358 [2024-07-13 08:20:38.870899] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:47.358 EAL: No free 2048 kB hugepages reported on node 1 00:33:47.358 [2024-07-13 08:20:38.880404] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:47.358 [2024-07-13 08:20:38.880811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.358 [2024-07-13 08:20:38.880839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2002f70 with addr=10.0.0.2, port=4420 00:33:47.358 [2024-07-13 08:20:38.880856] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2002f70 is same with the state(5) to be set 00:33:47.358 [2024-07-13 08:20:38.881077] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2002f70 (9): Bad file descriptor 00:33:47.358 [2024-07-13 08:20:38.881318] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:47.358 [2024-07-13 08:20:38.881338] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:47.358 [2024-07-13 08:20:38.881351] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:47.358 [2024-07-13 08:20:38.884559] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:47.358 [2024-07-13 08:20:38.893718] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:47.358 [2024-07-13 08:20:38.894095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.358 [2024-07-13 08:20:38.894123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2002f70 with addr=10.0.0.2, port=4420 00:33:47.358 [2024-07-13 08:20:38.894138] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2002f70 is same with the state(5) to be set 00:33:47.358 [2024-07-13 08:20:38.894360] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2002f70 (9): Bad file descriptor 00:33:47.358 [2024-07-13 08:20:38.894563] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:47.358 [2024-07-13 08:20:38.894583] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:47.358 [2024-07-13 08:20:38.894595] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:47.358 [2024-07-13 08:20:38.897617] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
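The EAL line at the top of this block ('No free 2048 kB hugepages reported on node 1') is non-fatal in this run — the app goes on to report its available cores and start its reactors below — but it is the first thing to check when nvmf_tgt refuses to start at all. A hedged sketch of the usual hugepage inspection/reservation steps, assuming root and a standard Linux sysfs layout (SPDK's scripts/setup.sh normally handles this; the 1024 count is an illustrative value, not taken from this run):

  # Overall hugepage state:
  grep -i huge /proc/meminfo

  # Per-NUMA-node view; the EAL notice above refers to node 1:
  cat /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages

  # Reserve 1024 x 2 MiB pages system-wide:
  sysctl vm.nr_hugepages=1024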
00:33:47.358 [2024-07-13 08:20:38.903956] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:47.358 [2024-07-13 08:20:38.907151] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:47.358 [2024-07-13 08:20:38.907598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.358 [2024-07-13 08:20:38.907627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2002f70 with addr=10.0.0.2, port=4420 00:33:47.358 [2024-07-13 08:20:38.907643] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2002f70 is same with the state(5) to be set 00:33:47.358 [2024-07-13 08:20:38.907885] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2002f70 (9): Bad file descriptor 00:33:47.358 [2024-07-13 08:20:38.908098] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:47.358 [2024-07-13 08:20:38.908118] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:47.358 [2024-07-13 08:20:38.908132] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:47.358 [2024-07-13 08:20:38.911129] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:47.358 [2024-07-13 08:20:38.920488] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:47.358 [2024-07-13 08:20:38.921104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.358 [2024-07-13 08:20:38.921141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2002f70 with addr=10.0.0.2, port=4420 00:33:47.358 [2024-07-13 08:20:38.921174] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2002f70 is same with the state(5) to be set 00:33:47.358 [2024-07-13 08:20:38.921433] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2002f70 (9): Bad file descriptor 00:33:47.358 [2024-07-13 08:20:38.921641] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:47.358 [2024-07-13 08:20:38.921661] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:47.358 [2024-07-13 08:20:38.921678] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:47.358 [2024-07-13 08:20:38.924705] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:47.358 [2024-07-13 08:20:38.987328] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:47.358 [2024-07-13 08:20:38.987766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.358 [2024-07-13 08:20:38.987794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2002f70 with addr=10.0.0.2, port=4420 00:33:47.358 [2024-07-13 08:20:38.987811] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2002f70 is same with the state(5) to be set 00:33:47.358 [2024-07-13 08:20:38.988034] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2002f70 (9): Bad file descriptor 00:33:47.358 [2024-07-13 08:20:38.988280] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:47.358 [2024-07-13 08:20:38.988300] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:47.358 [2024-07-13 08:20:38.988314] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:47.358 [2024-07-13 08:20:38.989860] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:47.359 [2024-07-13 08:20:38.989912] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:47.359 [2024-07-13 08:20:38.989926] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:47.359 [2024-07-13 08:20:38.989952] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:47.359 [2024-07-13 08:20:38.989962] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:47.359 [2024-07-13 08:20:38.990167] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:33:47.359 [2024-07-13 08:20:38.990228] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:33:47.359 [2024-07-13 08:20:38.990231] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:47.359 [2024-07-13 08:20:38.991484] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:47.359 [2024-07-13 08:20:39.000913] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:47.359 [2024-07-13 08:20:39.001430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.359 [2024-07-13 08:20:39.001469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2002f70 with addr=10.0.0.2, port=4420 00:33:47.359 [2024-07-13 08:20:39.001490] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2002f70 is same with the state(5) to be set 00:33:47.359 [2024-07-13 08:20:39.001722] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2002f70 (9): Bad file descriptor 00:33:47.359 [2024-07-13 08:20:39.001954] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:47.359 [2024-07-13 08:20:39.001977] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:47.359 [2024-07-13 08:20:39.001994] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
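The app_setup_trace notices above already name the capture workflow for this target instance (tracepoint group mask 0xFFFF, shm id 0). Restated as commands — a sketch assuming the spdk_trace tool built from this same tree is on PATH; the /tmp path is an arbitrary choice:

  # Live snapshot of the enabled tracepoint groups for instance -i 0:
  spdk_trace -s nvmf -i 0

  # Or keep the shared-memory trace file for offline decoding later:
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0
  spdk_trace -f /tmp/nvmf_trace.0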
00:33:47.359 [2024-07-13 08:20:39.005254] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:47.359 [2024-07-13 08:20:39.014497] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:47.359 [2024-07-13 08:20:39.015040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.359 [2024-07-13 08:20:39.015079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2002f70 with addr=10.0.0.2, port=4420 00:33:47.359 [2024-07-13 08:20:39.015099] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2002f70 is same with the state(5) to be set 00:33:47.359 [2024-07-13 08:20:39.015321] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2002f70 (9): Bad file descriptor 00:33:47.359 [2024-07-13 08:20:39.015543] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:47.359 [2024-07-13 08:20:39.015564] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:47.359 [2024-07-13 08:20:39.015582] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:47.359 [2024-07-13 08:20:39.018811] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:47.359 [2024-07-13 08:20:39.028227] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:47.359 [2024-07-13 08:20:39.028844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.359 [2024-07-13 08:20:39.028890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2002f70 with addr=10.0.0.2, port=4420 00:33:47.359 [2024-07-13 08:20:39.028914] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2002f70 is same with the state(5) to be set 00:33:47.359 [2024-07-13 08:20:39.029139] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2002f70 (9): Bad file descriptor 00:33:47.359 [2024-07-13 08:20:39.029362] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:47.359 [2024-07-13 08:20:39.029383] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:47.359 [2024-07-13 08:20:39.029401] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:47.359 [2024-07-13 08:20:39.032658] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:47.618 [2024-07-13 08:20:39.096787] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:47.618 [2024-07-13 08:20:39.097196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.618 [2024-07-13 08:20:39.097226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2002f70 with addr=10.0.0.2, port=4420 00:33:47.618 [2024-07-13 08:20:39.097243] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2002f70 is same with the state(5) to be set 00:33:47.618 [2024-07-13 08:20:39.097459] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2002f70 (9): Bad file descriptor 00:33:47.618 [2024-07-13 08:20:39.097676] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:47.618 [2024-07-13 08:20:39.097697] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:47.618 [2024-07-13 08:20:39.097711] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:47.618 [2024-07-13 08:20:39.100923] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:47.618 08:20:39 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:47.618 08:20:39 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:33:47.618 08:20:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:47.618 08:20:39 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:47.618 08:20:39 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:47.618 [2024-07-13 08:20:39.110374] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:47.618 [2024-07-13 08:20:39.110769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.618 [2024-07-13 08:20:39.110798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2002f70 with addr=10.0.0.2, port=4420 00:33:47.618 [2024-07-13 08:20:39.110816] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2002f70 is same with the state(5) to be set 00:33:47.618 [2024-07-13 08:20:39.111039] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2002f70 (9): Bad file descriptor 00:33:47.618 [2024-07-13 08:20:39.111258] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:47.618 [2024-07-13 08:20:39.111280] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:47.618 [2024-07-13 08:20:39.111294] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:47.618 [2024-07-13 08:20:39.114551] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:47.618 [2024-07-13 08:20:39.123921] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:47.618 [2024-07-13 08:20:39.124310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.618 [2024-07-13 08:20:39.124338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2002f70 with addr=10.0.0.2, port=4420 00:33:47.618 [2024-07-13 08:20:39.124354] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2002f70 is same with the state(5) to be set 00:33:47.618 [2024-07-13 08:20:39.124567] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2002f70 (9): Bad file descriptor 00:33:47.618 [2024-07-13 08:20:39.124784] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:47.618 [2024-07-13 08:20:39.124805] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:47.618 [2024-07-13 08:20:39.124819] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:47.618 [2024-07-13 08:20:39.128051] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:47.618 08:20:39 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:47.619 08:20:39 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:47.619 08:20:39 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:47.619 08:20:39 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:47.619 [2024-07-13 08:20:39.137405] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:47.619 [2024-07-13 08:20:39.137798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.619 [2024-07-13 08:20:39.137826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2002f70 with addr=10.0.0.2, port=4420 00:33:47.619 [2024-07-13 08:20:39.137842] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2002f70 is same with the state(5) to be set 00:33:47.619 [2024-07-13 08:20:39.137935] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:47.619 [2024-07-13 08:20:39.138063] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2002f70 (9): Bad file descriptor 00:33:47.619 [2024-07-13 08:20:39.138281] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:47.619 [2024-07-13 08:20:39.138302] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:47.619 [2024-07-13 08:20:39.138315] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:47.619 [2024-07-13 08:20:39.141579] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:47.619 08:20:39 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:47.619 08:20:39 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:47.619 08:20:39 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:47.619 08:20:39 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:47.619 [2024-07-13 08:20:39.150962] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:47.619 [2024-07-13 08:20:39.151364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.619 [2024-07-13 08:20:39.151392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2002f70 with addr=10.0.0.2, port=4420 00:33:47.619 [2024-07-13 08:20:39.151408] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2002f70 is same with the state(5) to be set 00:33:47.619 [2024-07-13 08:20:39.151637] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2002f70 (9): Bad file descriptor 00:33:47.619 [2024-07-13 08:20:39.151864] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:47.619 [2024-07-13 08:20:39.151894] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:47.619 [2024-07-13 08:20:39.151908] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:47.619 [2024-07-13 08:20:39.155133] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:47.619 [2024-07-13 08:20:39.164557] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:47.619 [2024-07-13 08:20:39.165127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.619 [2024-07-13 08:20:39.165167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2002f70 with addr=10.0.0.2, port=4420 00:33:47.619 [2024-07-13 08:20:39.165188] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2002f70 is same with the state(5) to be set 00:33:47.619 [2024-07-13 08:20:39.165413] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2002f70 (9): Bad file descriptor 00:33:47.619 [2024-07-13 08:20:39.165637] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:47.619 [2024-07-13 08:20:39.165658] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:47.619 [2024-07-13 08:20:39.165687] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:47.619 [2024-07-13 08:20:39.168924] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:47.619 Malloc0 00:33:47.619 08:20:39 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:47.619 08:20:39 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:47.619 08:20:39 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:47.619 08:20:39 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:47.619 [2024-07-13 08:20:39.178266] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:47.619 [2024-07-13 08:20:39.178787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.619 [2024-07-13 08:20:39.178821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2002f70 with addr=10.0.0.2, port=4420 00:33:47.619 [2024-07-13 08:20:39.178841] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2002f70 is same with the state(5) to be set 00:33:47.619 [2024-07-13 08:20:39.179072] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2002f70 (9): Bad file descriptor 00:33:47.619 [2024-07-13 08:20:39.179294] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:47.619 [2024-07-13 08:20:39.179315] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:47.619 [2024-07-13 08:20:39.179332] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:47.619 [2024-07-13 08:20:39.182584] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:33:47.619 08:20:39 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:47.619 08:20:39 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:47.619 08:20:39 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:47.619 08:20:39 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:47.619 [2024-07-13 08:20:39.191917] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:47.619 08:20:39 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:47.619 [2024-07-13 08:20:39.192293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:47.619 [2024-07-13 08:20:39.192321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2002f70 with addr=10.0.0.2, port=4420 00:33:47.619 [2024-07-13 08:20:39.192337] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2002f70 is same with the state(5) to be set 00:33:47.619 08:20:39 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:47.619 08:20:39 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:47.619 [2024-07-13 08:20:39.192551] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2002f70 (9): Bad file descriptor 00:33:47.619 08:20:39 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:47.619 [2024-07-13 08:20:39.192770] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:33:47.619 [2024-07-13 08:20:39.192790] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:33:47.619 [2024-07-13 08:20:39.192804] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:33:47.619 [2024-07-13 08:20:39.196026] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:33:47.619 [2024-07-13 08:20:39.196264] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:47.619 08:20:39 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:47.619 08:20:39 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 2110321 00:33:47.619 [2024-07-13 08:20:39.205408] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:33:47.619 [2024-07-13 08:20:39.233741] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
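The tgt_init sequence traced across the last several blocks — create the TCP transport, back it with a Malloc bdev, create the subsystem, attach the namespace, add the listener — is what finally lets the reset loop succeed ('Resetting controller successful.' above). Condensed into the equivalent direct scripts/rpc.py calls, with every value copied from the trace; a sketch assuming the repo layout shown in the log and the default /var/tmp/spdk.sock RPC socket:

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420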
00:33:57.591
00:33:57.591                                                                Latency(us)
00:33:57.591 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s       TO/s    Average        min        max
00:33:57.591 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:33:57.591      Verification LBA range: start 0x0 length 0x4000
00:33:57.591      Nvme1n1              :      15.01    6672.96      26.07    8551.32       0.00    8382.76     831.34   25631.86
00:33:57.591 ===================================================================================================================
00:33:57.591 Total                       :               6672.96      26.07    8551.32       0.00    8382.76     831.34   25631.86
00:33:57.591 08:20:48 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:33:57.592 08:20:48 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:57.592 08:20:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:57.592 08:20:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:57.592 08:20:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:57.592 08:20:48 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:33:57.592 08:20:48 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:33:57.592 08:20:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:57.592 08:20:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:33:57.592 08:20:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:57.592 08:20:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:33:57.592 08:20:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:57.592 08:20:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:57.592 rmmod nvme_tcp 00:33:57.592 rmmod nvme_fabrics 00:33:57.592 rmmod nvme_keyring 00:33:57.592 08:20:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:57.592 08:20:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:33:57.592 08:20:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:33:57.592 08:20:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 2110991 ']' 00:33:57.592 08:20:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 2110991 00:33:57.592 08:20:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@948 -- # '[' -z 2110991 ']' 00:33:57.592 08:20:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # kill -0 2110991 00:33:57.592 08:20:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # uname 00:33:57.592 08:20:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:57.592 08:20:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2110991 00:33:57.592 08:20:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:33:57.592 08:20:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:33:57.592 08:20:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2110991' 00:33:57.592 killing process with pid 2110991 00:33:57.592 08:20:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@967 -- # kill 2110991 00:33:57.592 08:20:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@972 -- # wait 2110991 00:33:57.592 08:20:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:33:57.592 08:20:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
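A quick consistency check on the bdevperf table above: the MiB/s column is just IOPS times the 4096-byte IO size, i.e. 6672.96 * 4096 / 2^20 ≈ 26.07 MiB/s, matching the reported value. One way to reproduce it in the shell, assuming bc is installed:

  # 6672.96 IOPS * 4096 B per IO, expressed in MiB/s:
  echo 'scale=2; 6672.96 * 4096 / 1048576' | bc
  # prints 26.06; the table's 26.07 differs only by rounding of the IOPS figure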
00:33:57.592 08:20:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:57.592 08:20:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:57.592 08:20:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:57.592 08:20:48 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:57.592 08:20:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:57.592 08:20:48 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:59.491 08:20:50 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:59.491 00:33:59.491 real 0m22.996s 00:33:59.491 user 1m2.140s 00:33:59.491 sys 0m4.220s 00:33:59.491 08:20:50 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:59.491 08:20:50 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:33:59.491 ************************************ 00:33:59.491 END TEST nvmf_bdevperf 00:33:59.491 ************************************ 00:33:59.491 08:20:50 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:33:59.491 08:20:50 nvmf_tcp -- nvmf/nvmf.sh@123 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:33:59.491 08:20:50 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:33:59.491 08:20:50 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:59.491 08:20:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:59.491 ************************************ 00:33:59.491 START TEST nvmf_target_disconnect 00:33:59.491 ************************************ 00:33:59.491 08:20:50 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:33:59.491 * Looking for test storage... 
00:33:59.491 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:59.491 08:20:50 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:59.491 08:20:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:33:59.491 08:20:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:59.491 08:20:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:59.491 08:20:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:59.491 08:20:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:59.491 08:20:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:59.491 08:20:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:59.491 08:20:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:59.491 08:20:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:59.491 08:20:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:59.491 08:20:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:59.491 08:20:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:59.491 08:20:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:59.491 08:20:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:59.491 08:20:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:59.491 08:20:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:59.491 08:20:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:59.491 08:20:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:59.492 08:20:50 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:59.492 08:20:50 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:59.492 08:20:50 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:59.492 08:20:50 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:59.492 08:20:50 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:59.492 08:20:50 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:59.492 08:20:50 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:33:59.492 08:20:50 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:59.492 08:20:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:33:59.492 08:20:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:59.492 08:20:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:59.492 08:20:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:59.492 08:20:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:59.492 08:20:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:59.492 08:20:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:59.492 08:20:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:59.492 08:20:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:59.492 08:20:50 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:33:59.492 08:20:50 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:33:59.492 08:20:50 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:33:59.492 08:20:50 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:33:59.492 08:20:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:59.492 08:20:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:59.492 08:20:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:33:59.492 08:20:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:59.492 08:20:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:59.492 08:20:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:59.492 08:20:50 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:59.492 08:20:50 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:59.492 08:20:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:59.492 08:20:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:59.492 08:20:50 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:33:59.492 08:20:50 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:01.392 08:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:01.392 08:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:34:01.392 08:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:01.392 08:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:01.392 08:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:01.392 08:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:01.392 08:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:01.392 08:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:34:01.392 08:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:01.392 08:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:34:01.392 08:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:34:01.392 08:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:34:01.392 08:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:34:01.392 08:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:34:01.392 08:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:34:01.392 08:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:01.392 08:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:01.392 08:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:01.392 08:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:01.392 08:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:01.392 08:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:01.392 08:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:01.392 08:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:01.392 08:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
00:34:01.392 08:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:01.392 08:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:01.392 08:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:01.392 08:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:01.392 08:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:01.392 08:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:01.392 08:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:01.392 08:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:01.392 08:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:01.392 08:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:34:01.392 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:34:01.392 08:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:01.392 08:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:01.392 08:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:01.392 08:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:01.392 08:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:01.392 08:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:01.392 08:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:34:01.392 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:34:01.392 08:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:01.392 08:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:01.393 08:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:01.393 08:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:01.393 08:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:01.393 08:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:01.393 08:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:01.393 08:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:01.393 08:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:01.393 08:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:01.393 08:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:01.393 08:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:01.393 08:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:01.393 08:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:01.393 08:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:01.393 08:20:52 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:34:01.393 Found net devices under 0000:0a:00.0: cvl_0_0 00:34:01.393 08:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:01.393 08:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:01.393 08:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:01.393 08:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:01.393 08:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:01.393 08:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:01.393 08:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:01.393 08:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:01.393 08:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:34:01.393 Found net devices under 0000:0a:00.1: cvl_0_1 00:34:01.393 08:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:01.393 08:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:01.393 08:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:34:01.393 08:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:01.393 08:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:34:01.393 08:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:01.393 08:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:01.393 08:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:01.393 08:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:01.393 08:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:01.393 08:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:01.393 08:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:01.393 08:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:01.393 08:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:01.393 08:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:01.393 08:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:01.393 08:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:01.393 08:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:01.393 08:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:01.393 08:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:01.393 08:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:34:01.393 08:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:01.393 08:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:01.393 08:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:01.393 08:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:01.393 08:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:01.393 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:01.393 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.125 ms 00:34:01.393 00:34:01.393 --- 10.0.0.2 ping statistics --- 00:34:01.393 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:01.393 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:34:01.393 08:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:01.393 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:01.393 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:34:01.393 00:34:01.393 --- 10.0.0.1 ping statistics --- 00:34:01.393 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:01.393 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:34:01.393 08:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:01.393 08:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:34:01.393 08:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:01.393 08:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:01.393 08:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:01.393 08:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:01.393 08:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:01.393 08:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:01.393 08:20:52 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:01.393 08:20:52 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:34:01.393 08:20:52 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:34:01.393 08:20:52 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:01.393 08:20:52 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:01.393 ************************************ 00:34:01.393 START TEST nvmf_target_disconnect_tc1 00:34:01.393 ************************************ 00:34:01.393 08:20:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc1 00:34:01.393 08:20:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:01.393 08:20:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0 00:34:01.393 
08:20:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:01.393 08:20:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:34:01.393 08:20:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:01.393 08:20:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:34:01.393 08:20:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:01.393 08:20:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:34:01.393 08:20:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:01.393 08:20:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:34:01.393 08:20:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:34:01.393 08:20:52 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:01.393 EAL: No free 2048 kB hugepages reported on node 1 00:34:01.393 [2024-07-13 08:20:53.026241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:01.393 [2024-07-13 08:20:53.026309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc88590 with addr=10.0.0.2, port=4420 00:34:01.393 [2024-07-13 08:20:53.026343] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:34:01.393 [2024-07-13 08:20:53.026363] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:34:01.393 [2024-07-13 08:20:53.026376] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:34:01.393 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:34:01.393 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:34:01.393 Initializing NVMe Controllers 00:34:01.393 08:20:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1 00:34:01.393 08:20:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:34:01.393 08:20:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:34:01.393 08:20:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:34:01.393 00:34:01.393 real 0m0.093s 00:34:01.393 user 0m0.044s 00:34:01.393 sys 0m0.048s 
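tc1 deliberately probes 10.0.0.2:4420 before any target is listening, so connect() fails with errno 111 (ECONNREFUSED) and spdk_nvme_probe() reports the error above; the NOT wrapper turns that expected failure (es=1) into a pass. A hedged bash sketch of that expect-failure pattern (illustrative only, not the helper's exact source):

    expect_failure() {
        # Succeed only if the wrapped command fails, mirroring how es=1
        # above is treated as the passing outcome.
        if "$@"; then
            echo "command unexpectedly succeeded" >&2
            return 1
        fi
        return 0
    }

    # tc1 equivalent: the probe must be refused while no listener exists:
    # expect_failure ./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 \
    #     -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'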
00:34:01.393 08:20:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:01.393 08:20:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:34:01.393 ************************************ 00:34:01.393 END TEST nvmf_target_disconnect_tc1 00:34:01.393 ************************************ 00:34:01.393 08:20:53 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:34:01.393 08:20:53 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:34:01.393 08:20:53 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:34:01.393 08:20:53 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:01.393 08:20:53 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:01.393 ************************************ 00:34:01.393 START TEST nvmf_target_disconnect_tc2 00:34:01.393 ************************************ 00:34:01.393 08:20:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc2 00:34:01.393 08:20:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:34:01.394 08:20:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:34:01.394 08:20:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:01.394 08:20:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:01.394 08:20:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:01.394 08:20:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2114022 00:34:01.394 08:20:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:34:01.394 08:20:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2114022 00:34:01.394 08:20:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 2114022 ']' 00:34:01.394 08:20:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:01.394 08:20:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:01.394 08:20:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:01.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
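disconnect_init starts nvmf_tgt inside the target namespace and blocks until its RPC socket answers before issuing the rpc_cmd calls that follow. A minimal sketch of that start-and-wait step, assuming SPDK's scripts/rpc.py and the default /var/tmp/spdk.sock socket (waitforlisten's real implementation is more involved):

    # Launch the target in the namespace created during nvmf_tcp_init above.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
    nvmfpid=$!

    # Poll the RPC socket until the app is ready to accept rpc_cmd calls.
    for _ in $(seq 1 100); do
        if ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; then
            break
        fi
        sleep 0.1
    done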
00:34:01.394 08:20:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:01.394 08:20:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:01.652 [2024-07-13 08:20:53.137354] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:34:01.652 [2024-07-13 08:20:53.137425] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:01.652 EAL: No free 2048 kB hugepages reported on node 1 00:34:01.652 [2024-07-13 08:20:53.208072] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:01.652 [2024-07-13 08:20:53.293655] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:01.652 [2024-07-13 08:20:53.293712] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:01.652 [2024-07-13 08:20:53.293726] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:01.652 [2024-07-13 08:20:53.293736] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:01.652 [2024-07-13 08:20:53.293746] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:01.652 [2024-07-13 08:20:53.293827] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:34:01.652 [2024-07-13 08:20:53.293956] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:34:01.652 [2024-07-13 08:20:53.294020] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:34:01.652 [2024-07-13 08:20:53.294022] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:34:01.909 08:20:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:01.909 08:20:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:34:01.909 08:20:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:01.909 08:20:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:01.909 08:20:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:01.909 08:20:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:01.909 08:20:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:01.909 08:20:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:01.909 08:20:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:01.909 Malloc0 00:34:01.909 08:20:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:01.909 08:20:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:34:01.909 08:20:53 
nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:01.909 08:20:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:01.909 [2024-07-13 08:20:53.470500] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:01.909 08:20:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:01.909 08:20:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:01.909 08:20:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:01.909 08:20:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:01.910 08:20:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:01.910 08:20:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:01.910 08:20:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:01.910 08:20:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:01.910 08:20:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:01.910 08:20:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:01.910 08:20:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:01.910 08:20:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:01.910 [2024-07-13 08:20:53.498748] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:01.910 08:20:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:01.910 08:20:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:34:01.910 08:20:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:01.910 08:20:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:01.910 08:20:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:01.910 08:20:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=2114160 00:34:01.910 08:20:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:34:01.910 08:20:53 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:01.910 EAL: No free 2048 kB 
hugepages reported on node 1 00:34:03.813 08:20:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 2114022 00:34:03.813 08:20:55 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:34:03.813 Read completed with error (sct=0, sc=8) 00:34:03.813 starting I/O failed 00:34:03.813 Read completed with error (sct=0, sc=8) 00:34:03.813 starting I/O failed 00:34:03.813 Read completed with error (sct=0, sc=8) 00:34:03.813 starting I/O failed 00:34:03.813 Read completed with error (sct=0, sc=8) 00:34:03.813 starting I/O failed 00:34:03.813 Read completed with error (sct=0, sc=8) 00:34:03.813 starting I/O failed 00:34:03.813 Read completed with error (sct=0, sc=8) 00:34:03.813 starting I/O failed 00:34:03.813 Read completed with error (sct=0, sc=8) 00:34:03.813 starting I/O failed 00:34:03.813 Read completed with error (sct=0, sc=8) 00:34:03.813 starting I/O failed 00:34:03.813 Read completed with error (sct=0, sc=8) 00:34:03.813 starting I/O failed 00:34:03.813 Read completed with error (sct=0, sc=8) 00:34:03.813 starting I/O failed 00:34:03.813 Read completed with error (sct=0, sc=8) 00:34:03.813 starting I/O failed 00:34:03.813 Read completed with error (sct=0, sc=8) 00:34:03.813 starting I/O failed 00:34:03.813 Read completed with error (sct=0, sc=8) 00:34:03.813 starting I/O failed 00:34:03.813 Write completed with error (sct=0, sc=8) 00:34:03.813 starting I/O failed 00:34:03.813 Write completed with error (sct=0, sc=8) 00:34:03.813 starting I/O failed 00:34:03.813 Read completed with error (sct=0, sc=8) 00:34:03.814 starting I/O failed 00:34:03.814 Read completed with error (sct=0, sc=8) 00:34:03.814 starting I/O failed 00:34:03.814 Write completed with error (sct=0, sc=8) 00:34:03.814 starting I/O failed 00:34:03.814 Read completed with error (sct=0, sc=8) 00:34:03.814 starting I/O failed 00:34:03.814 Write completed with error (sct=0, sc=8) 00:34:03.814 starting I/O failed 00:34:03.814 Write completed with error (sct=0, sc=8) 00:34:03.814 starting I/O failed 00:34:03.814 Read completed with error (sct=0, sc=8) 00:34:03.814 starting I/O failed 00:34:03.814 Read completed with error (sct=0, sc=8) 00:34:03.814 starting I/O failed 00:34:03.814 Write completed with error (sct=0, sc=8) 00:34:03.814 starting I/O failed 00:34:03.814 Read completed with error (sct=0, sc=8) 00:34:03.814 starting I/O failed 00:34:03.814 Write completed with error (sct=0, sc=8) 00:34:03.814 starting I/O failed 00:34:03.814 Read completed with error (sct=0, sc=8) 00:34:03.814 starting I/O failed 00:34:03.814 Read completed with error (sct=0, sc=8) 00:34:03.814 starting I/O failed 00:34:03.814 Write completed with error (sct=0, sc=8) 00:34:03.814 starting I/O failed 00:34:03.814 Write completed with error (sct=0, sc=8) 00:34:03.814 starting I/O failed 00:34:03.814 Read completed with error (sct=0, sc=8) 00:34:03.814 starting I/O failed 00:34:03.814 Read completed with error (sct=0, sc=8) 00:34:03.814 starting I/O failed 00:34:03.814 Read completed with error (sct=0, sc=8) 00:34:03.814 starting I/O failed 00:34:03.814 Read completed with error (sct=0, sc=8) 00:34:03.814 starting I/O failed 00:34:03.814 Read completed with error (sct=0, sc=8) 00:34:03.814 starting I/O failed 00:34:03.814 Read completed with error (sct=0, sc=8) 00:34:03.814 starting I/O failed 00:34:03.814 Read completed with error (sct=0, sc=8) 00:34:03.814 starting I/O failed 00:34:03.814 [2024-07-13 08:20:55.526932] nvme_qpair.c: 
804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:03.814 Read completed with error (sct=0, sc=8) 00:34:03.814 starting I/O failed 00:34:03.814 Read completed with error (sct=0, sc=8) 00:34:03.814 starting I/O failed 00:34:03.814 Read completed with error (sct=0, sc=8) 00:34:03.814 starting I/O failed 00:34:03.814 Read completed with error (sct=0, sc=8) 00:34:03.814 starting I/O failed 00:34:03.814 Read completed with error (sct=0, sc=8) 00:34:03.814 starting I/O failed 00:34:03.814 Read completed with error (sct=0, sc=8) 00:34:03.814 starting I/O failed 00:34:03.814 Read completed with error (sct=0, sc=8) 00:34:03.814 starting I/O failed 00:34:03.814 Write completed with error (sct=0, sc=8) 00:34:03.814 starting I/O failed 00:34:03.814 Read completed with error (sct=0, sc=8) 00:34:03.814 starting I/O failed 00:34:03.814 Read completed with error (sct=0, sc=8) 00:34:03.814 starting I/O failed 00:34:03.814 Write completed with error (sct=0, sc=8) 00:34:03.814 starting I/O failed 00:34:03.814 Write completed with error (sct=0, sc=8) 00:34:03.814 starting I/O failed 00:34:03.814 Write completed with error (sct=0, sc=8) 00:34:03.814 starting I/O failed 00:34:03.814 Read completed with error (sct=0, sc=8) 00:34:03.814 starting I/O failed 00:34:03.814 Read completed with error (sct=0, sc=8) 00:34:03.814 starting I/O failed 00:34:03.814 Read completed with error (sct=0, sc=8) 00:34:03.814 starting I/O failed 00:34:03.814 Read completed with error (sct=0, sc=8) 00:34:03.814 starting I/O failed 00:34:03.814 Write completed with error (sct=0, sc=8) 00:34:03.814 starting I/O failed 00:34:03.814 Write completed with error (sct=0, sc=8) 00:34:03.814 starting I/O failed 00:34:03.814 Read completed with error (sct=0, sc=8) 00:34:03.814 starting I/O failed 00:34:03.814 Read completed with error (sct=0, sc=8) 00:34:03.814 starting I/O failed 00:34:03.814 Write completed with error (sct=0, sc=8) 00:34:03.814 starting I/O failed 00:34:03.814 Write completed with error (sct=0, sc=8) 00:34:03.814 starting I/O failed 00:34:03.814 Read completed with error (sct=0, sc=8) 00:34:03.814 starting I/O failed 00:34:03.814 Write completed with error (sct=0, sc=8) 00:34:03.814 starting I/O failed 00:34:03.814 Read completed with error (sct=0, sc=8) 00:34:03.814 starting I/O failed 00:34:03.814 Write completed with error (sct=0, sc=8) 00:34:03.814 starting I/O failed 00:34:03.814 [2024-07-13 08:20:55.527253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:03.814 Read completed with error (sct=0, sc=8) 00:34:03.814 starting I/O failed 00:34:03.814 Read completed with error (sct=0, sc=8) 00:34:03.814 starting I/O failed 00:34:03.814 Read completed with error (sct=0, sc=8) 00:34:03.814 starting I/O failed 00:34:03.814 Read completed with error (sct=0, sc=8) 00:34:03.814 starting I/O failed 00:34:03.814 Read completed with error (sct=0, sc=8) 00:34:03.814 starting I/O failed 00:34:03.814 Read completed with error (sct=0, sc=8) 00:34:03.814 starting I/O failed 00:34:03.814 Read completed with error (sct=0, sc=8) 00:34:03.814 starting I/O failed 00:34:03.814 Read completed with error (sct=0, sc=8) 00:34:03.814 starting I/O failed 00:34:03.814 Read completed with error (sct=0, sc=8) 00:34:03.814 starting I/O failed 00:34:03.814 Read completed with error (sct=0, sc=8) 00:34:03.814 starting I/O failed 00:34:03.814 Read completed with error (sct=0, sc=8) 00:34:03.814 starting I/O 
failed 00:34:03.814 Read completed with error (sct=0, sc=8) 00:34:03.814 starting I/O failed 00:34:03.814 Read completed with error (sct=0, sc=8) 00:34:03.814 starting I/O failed 00:34:03.814 Read completed with error (sct=0, sc=8) 00:34:03.814 starting I/O failed 00:34:03.814 Read completed with error (sct=0, sc=8) 00:34:03.814 starting I/O failed 00:34:03.814 Read completed with error (sct=0, sc=8) 00:34:03.814 starting I/O failed 00:34:03.814 Write completed with error (sct=0, sc=8) 00:34:03.814 starting I/O failed 00:34:03.814 Read completed with error (sct=0, sc=8) 00:34:03.814 starting I/O failed 00:34:03.814 Read completed with error (sct=0, sc=8) 00:34:03.814 starting I/O failed 00:34:03.814 Write completed with error (sct=0, sc=8) 00:34:03.814 starting I/O failed 00:34:03.814 Read completed with error (sct=0, sc=8) 00:34:03.814 starting I/O failed 00:34:03.814 Read completed with error (sct=0, sc=8) 00:34:03.814 starting I/O failed 00:34:03.814 Write completed with error (sct=0, sc=8) 00:34:03.814 starting I/O failed 00:34:03.814 Write completed with error (sct=0, sc=8) 00:34:03.814 starting I/O failed 00:34:03.814 Read completed with error (sct=0, sc=8) 00:34:03.814 starting I/O failed 00:34:03.814 Write completed with error (sct=0, sc=8) 00:34:03.814 starting I/O failed 00:34:03.814 Read completed with error (sct=0, sc=8) 00:34:03.814 starting I/O failed 00:34:03.814 Read completed with error (sct=0, sc=8) 00:34:03.814 starting I/O failed 00:34:03.814 Read completed with error (sct=0, sc=8) 00:34:03.814 starting I/O failed 00:34:03.814 Read completed with error (sct=0, sc=8) 00:34:03.814 starting I/O failed 00:34:03.814 Write completed with error (sct=0, sc=8) 00:34:03.814 starting I/O failed 00:34:03.814 Write completed with error (sct=0, sc=8) 00:34:03.814 starting I/O failed 00:34:03.814 [2024-07-13 08:20:55.527548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:03.814 Read completed with error (sct=0, sc=8) 00:34:03.814 starting I/O failed 00:34:03.814 Read completed with error (sct=0, sc=8) 00:34:03.814 starting I/O failed 00:34:03.814 Read completed with error (sct=0, sc=8) 00:34:03.814 starting I/O failed 00:34:03.814 Read completed with error (sct=0, sc=8) 00:34:03.814 starting I/O failed 00:34:03.814 Read completed with error (sct=0, sc=8) 00:34:03.814 starting I/O failed 00:34:03.814 Read completed with error (sct=0, sc=8) 00:34:03.814 starting I/O failed 00:34:03.814 Read completed with error (sct=0, sc=8) 00:34:03.814 starting I/O failed 00:34:03.814 Read completed with error (sct=0, sc=8) 00:34:03.814 starting I/O failed 00:34:03.814 Read completed with error (sct=0, sc=8) 00:34:03.814 starting I/O failed 00:34:03.814 Read completed with error (sct=0, sc=8) 00:34:03.814 starting I/O failed 00:34:03.814 Read completed with error (sct=0, sc=8) 00:34:03.814 starting I/O failed 00:34:03.814 Read completed with error (sct=0, sc=8) 00:34:03.814 starting I/O failed 00:34:03.814 Read completed with error (sct=0, sc=8) 00:34:03.814 starting I/O failed 00:34:03.814 Read completed with error (sct=0, sc=8) 00:34:03.814 starting I/O failed 00:34:03.814 Read completed with error (sct=0, sc=8) 00:34:03.814 starting I/O failed 00:34:03.814 Read completed with error (sct=0, sc=8) 00:34:03.814 starting I/O failed 00:34:03.814 Write completed with error (sct=0, sc=8) 00:34:03.814 starting I/O failed 00:34:03.814 Write completed with error (sct=0, sc=8) 00:34:03.814 starting I/O failed 00:34:03.814 
Write completed with error (sct=0, sc=8) 00:34:03.814 starting I/O failed 00:34:03.814 Read completed with error (sct=0, sc=8) 00:34:03.814 starting I/O failed 00:34:03.814 Write completed with error (sct=0, sc=8) 00:34:03.814 starting I/O failed 00:34:03.814 Read completed with error (sct=0, sc=8) 00:34:03.814 starting I/O failed 00:34:03.814 Read completed with error (sct=0, sc=8) 00:34:03.814 starting I/O failed 00:34:03.814 Write completed with error (sct=0, sc=8) 00:34:03.814 starting I/O failed 00:34:03.814 Write completed with error (sct=0, sc=8) 00:34:03.814 starting I/O failed 00:34:03.814 Write completed with error (sct=0, sc=8) 00:34:03.814 starting I/O failed 00:34:03.814 Read completed with error (sct=0, sc=8) 00:34:03.814 starting I/O failed 00:34:03.814 Write completed with error (sct=0, sc=8) 00:34:03.814 starting I/O failed 00:34:03.814 Write completed with error (sct=0, sc=8) 00:34:03.814 starting I/O failed 00:34:03.814 Read completed with error (sct=0, sc=8) 00:34:03.814 starting I/O failed 00:34:03.814 Write completed with error (sct=0, sc=8) 00:34:03.814 starting I/O failed 00:34:03.814 Write completed with error (sct=0, sc=8) 00:34:03.814 starting I/O failed 00:34:03.814 [2024-07-13 08:20:55.527844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:03.814 [2024-07-13 08:20:55.528031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:03.814 [2024-07-13 08:20:55.528065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:03.814 qpair failed and we were unable to recover it. 00:34:03.815 [2024-07-13 08:20:55.528208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:03.815 [2024-07-13 08:20:55.528234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:03.815 qpair failed and we were unable to recover it. 00:34:03.815 [2024-07-13 08:20:55.528402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:03.815 [2024-07-13 08:20:55.528428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:03.815 qpair failed and we were unable to recover it. 00:34:03.815 [2024-07-13 08:20:55.528561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:03.815 [2024-07-13 08:20:55.528602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:03.815 qpair failed and we were unable to recover it. 00:34:03.815 [2024-07-13 08:20:55.528790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:03.815 [2024-07-13 08:20:55.528817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:03.815 qpair failed and we were unable to recover it. 00:34:03.815 [2024-07-13 08:20:55.528979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:03.815 [2024-07-13 08:20:55.529005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:03.815 qpair failed and we were unable to recover it. 
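From here on the log repeats one pattern: the kill -9 above removed the target mid-I/O, so each in-flight command completes with an error, the queue pairs hit CQ transport error -6, and every reconnect attempt is refused with errno 111 because nothing listens on 10.0.0.2:4420 anymore. A short sketch of the injected fault and one way to confirm the refused port (netcat flags vary by distribution; assumed here):

    kill -9 "$nvmfpid"    # hard-kill nvmf_tgt with I/O in flight; no graceful shutdown

    # The listener is gone, so new TCP connections to the subsystem port are
    # refused -- the same errno 111 shown in every qpair error in this log:
    nc -z -w1 10.0.0.2 4420 || echo "connect refused on 10.0.0.2:4420"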
00:34:03.815 [2024-07-13 08:20:55.529139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:03.815 [2024-07-13 08:20:55.529178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:03.815 qpair failed and we were unable to recover it. 00:34:03.815 [2024-07-13 08:20:55.529385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:03.815 [2024-07-13 08:20:55.529416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:03.815 qpair failed and we were unable to recover it. 00:34:03.815 [2024-07-13 08:20:55.529623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:03.815 [2024-07-13 08:20:55.529665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:03.815 qpair failed and we were unable to recover it. 00:34:03.815 [2024-07-13 08:20:55.529825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:03.815 [2024-07-13 08:20:55.529853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:03.815 qpair failed and we were unable to recover it. 00:34:03.815 [2024-07-13 08:20:55.530003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:03.815 [2024-07-13 08:20:55.530031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:03.815 qpair failed and we were unable to recover it. 00:34:03.815 [2024-07-13 08:20:55.530207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:03.815 [2024-07-13 08:20:55.530235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:03.815 qpair failed and we were unable to recover it. 00:34:03.815 [2024-07-13 08:20:55.530444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:03.815 [2024-07-13 08:20:55.530477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:03.815 qpair failed and we were unable to recover it. 00:34:03.815 [2024-07-13 08:20:55.530655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:03.815 [2024-07-13 08:20:55.530683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:03.815 qpair failed and we were unable to recover it. 00:34:03.815 [2024-07-13 08:20:55.530908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:03.815 [2024-07-13 08:20:55.530940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:03.815 qpair failed and we were unable to recover it. 00:34:03.815 [2024-07-13 08:20:55.531096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:03.815 [2024-07-13 08:20:55.531123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:03.815 qpair failed and we were unable to recover it. 
00:34:03.815 [2024-07-13 08:20:55.531301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:03.815 [2024-07-13 08:20:55.531329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:03.815 qpair failed and we were unable to recover it. 00:34:03.815 [2024-07-13 08:20:55.531460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:03.815 [2024-07-13 08:20:55.531486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:03.815 qpair failed and we were unable to recover it. 00:34:03.815 [2024-07-13 08:20:55.531667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:03.815 [2024-07-13 08:20:55.531719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:03.815 qpair failed and we were unable to recover it. 00:34:03.815 [2024-07-13 08:20:55.531899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:03.815 [2024-07-13 08:20:55.531932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:03.815 qpair failed and we were unable to recover it. 00:34:03.815 [2024-07-13 08:20:55.532069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:03.815 [2024-07-13 08:20:55.532096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:03.815 qpair failed and we were unable to recover it. 00:34:03.815 [2024-07-13 08:20:55.532250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:03.815 [2024-07-13 08:20:55.532277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:03.815 qpair failed and we were unable to recover it. 00:34:03.815 [2024-07-13 08:20:55.532430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:03.815 [2024-07-13 08:20:55.532457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:03.815 qpair failed and we were unable to recover it. 00:34:03.815 [2024-07-13 08:20:55.532633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:03.815 [2024-07-13 08:20:55.532661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:03.815 qpair failed and we were unable to recover it. 00:34:03.815 [2024-07-13 08:20:55.532788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:03.815 [2024-07-13 08:20:55.532814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:03.815 qpair failed and we were unable to recover it. 00:34:03.815 [2024-07-13 08:20:55.532971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:03.815 [2024-07-13 08:20:55.532998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:03.815 qpair failed and we were unable to recover it. 
00:34:03.815 [2024-07-13 08:20:55.533133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:03.815 [2024-07-13 08:20:55.533164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420
00:34:03.815 qpair failed and we were unable to recover it.
[the same three-line failure repeats over 200 more times between 08:20:55.533 and 08:20:55.574 (Jenkins timestamps 00:34:03.815 through 00:34:04.098), differing only in the tqpair handle (0x7f8fec000b90, 0x7f8fdc000b90, 0x7f8fe4000b90, or 0xacd600); the target stays addr=10.0.0.2, port=4420 and every attempt ends "qpair failed and we were unable to recover it."]
00:34:04.098 [2024-07-13 08:20:55.574134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:04.098 [2024-07-13 08:20:55.574175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420
00:34:04.098 qpair failed and we were unable to recover it.
00:34:04.098 [2024-07-13 08:20:55.574382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.098 [2024-07-13 08:20:55.574428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:04.098 qpair failed and we were unable to recover it. 00:34:04.098 [2024-07-13 08:20:55.574596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.098 [2024-07-13 08:20:55.574640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:04.098 qpair failed and we were unable to recover it. 00:34:04.098 [2024-07-13 08:20:55.574787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.098 [2024-07-13 08:20:55.574814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:04.098 qpair failed and we were unable to recover it. 00:34:04.098 [2024-07-13 08:20:55.574996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.098 [2024-07-13 08:20:55.575025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:04.098 qpair failed and we were unable to recover it. 00:34:04.098 [2024-07-13 08:20:55.575182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.098 [2024-07-13 08:20:55.575211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:04.098 qpair failed and we were unable to recover it. 00:34:04.098 [2024-07-13 08:20:55.575432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.098 [2024-07-13 08:20:55.575469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.098 qpair failed and we were unable to recover it. 00:34:04.098 [2024-07-13 08:20:55.575678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.098 [2024-07-13 08:20:55.575714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.098 qpair failed and we were unable to recover it. 00:34:04.098 [2024-07-13 08:20:55.575932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.098 [2024-07-13 08:20:55.575959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.098 qpair failed and we were unable to recover it. 00:34:04.098 [2024-07-13 08:20:55.576135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.098 [2024-07-13 08:20:55.576162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.098 qpair failed and we were unable to recover it. 00:34:04.098 [2024-07-13 08:20:55.576340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.098 [2024-07-13 08:20:55.576367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.098 qpair failed and we were unable to recover it. 
00:34:04.098 [2024-07-13 08:20:55.576496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.098 [2024-07-13 08:20:55.576521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.098 qpair failed and we were unable to recover it. 00:34:04.098 [2024-07-13 08:20:55.576697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.098 [2024-07-13 08:20:55.576724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.098 qpair failed and we were unable to recover it. 00:34:04.098 [2024-07-13 08:20:55.576846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.098 [2024-07-13 08:20:55.576885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.098 qpair failed and we were unable to recover it. 00:34:04.098 [2024-07-13 08:20:55.577034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.098 [2024-07-13 08:20:55.577061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.098 qpair failed and we were unable to recover it. 00:34:04.098 [2024-07-13 08:20:55.577256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.098 [2024-07-13 08:20:55.577283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.098 qpair failed and we were unable to recover it. 00:34:04.098 [2024-07-13 08:20:55.577436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.098 [2024-07-13 08:20:55.577463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.098 qpair failed and we were unable to recover it. 00:34:04.098 [2024-07-13 08:20:55.577609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.098 [2024-07-13 08:20:55.577636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.098 qpair failed and we were unable to recover it. 00:34:04.098 [2024-07-13 08:20:55.577787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.098 [2024-07-13 08:20:55.577814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.098 qpair failed and we were unable to recover it. 00:34:04.098 [2024-07-13 08:20:55.577990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.098 [2024-07-13 08:20:55.578017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.098 qpair failed and we were unable to recover it. 00:34:04.098 [2024-07-13 08:20:55.578155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.098 [2024-07-13 08:20:55.578183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.098 qpair failed and we were unable to recover it. 
00:34:04.098 [2024-07-13 08:20:55.578334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.098 [2024-07-13 08:20:55.578361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.098 qpair failed and we were unable to recover it. 00:34:04.098 [2024-07-13 08:20:55.578553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.098 [2024-07-13 08:20:55.578580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.098 qpair failed and we were unable to recover it. 00:34:04.098 [2024-07-13 08:20:55.578714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.098 [2024-07-13 08:20:55.578740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.098 qpair failed and we were unable to recover it. 00:34:04.099 [2024-07-13 08:20:55.578906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.099 [2024-07-13 08:20:55.578935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.099 qpair failed and we were unable to recover it. 00:34:04.099 [2024-07-13 08:20:55.579056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.099 [2024-07-13 08:20:55.579082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.099 qpair failed and we were unable to recover it. 00:34:04.099 [2024-07-13 08:20:55.579254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.099 [2024-07-13 08:20:55.579281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.099 qpair failed and we were unable to recover it. 00:34:04.099 [2024-07-13 08:20:55.579460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.099 [2024-07-13 08:20:55.579486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.099 qpair failed and we were unable to recover it. 00:34:04.099 [2024-07-13 08:20:55.579642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.099 [2024-07-13 08:20:55.579672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.099 qpair failed and we were unable to recover it. 00:34:04.099 [2024-07-13 08:20:55.579862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.099 [2024-07-13 08:20:55.579899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.099 qpair failed and we were unable to recover it. 00:34:04.099 [2024-07-13 08:20:55.580033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.099 [2024-07-13 08:20:55.580059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.099 qpair failed and we were unable to recover it. 
00:34:04.099 [2024-07-13 08:20:55.580233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.099 [2024-07-13 08:20:55.580259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.099 qpair failed and we were unable to recover it. 00:34:04.099 [2024-07-13 08:20:55.580454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.099 [2024-07-13 08:20:55.580483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.099 qpair failed and we were unable to recover it. 00:34:04.099 [2024-07-13 08:20:55.580673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.099 [2024-07-13 08:20:55.580707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.099 qpair failed and we were unable to recover it. 00:34:04.099 [2024-07-13 08:20:55.580881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.099 [2024-07-13 08:20:55.580908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.099 qpair failed and we were unable to recover it. 00:34:04.099 [2024-07-13 08:20:55.581058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.099 [2024-07-13 08:20:55.581085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.099 qpair failed and we were unable to recover it. 00:34:04.099 [2024-07-13 08:20:55.581261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.099 [2024-07-13 08:20:55.581288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.099 qpair failed and we were unable to recover it. 00:34:04.099 [2024-07-13 08:20:55.581477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.099 [2024-07-13 08:20:55.581504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.099 qpair failed and we were unable to recover it. 00:34:04.099 [2024-07-13 08:20:55.581625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.099 [2024-07-13 08:20:55.581650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.099 qpair failed and we were unable to recover it. 00:34:04.099 [2024-07-13 08:20:55.581843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.099 [2024-07-13 08:20:55.581893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:04.099 qpair failed and we were unable to recover it. 00:34:04.099 [2024-07-13 08:20:55.582058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.099 [2024-07-13 08:20:55.582087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:04.099 qpair failed and we were unable to recover it. 
00:34:04.099 [2024-07-13 08:20:55.582261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.099 [2024-07-13 08:20:55.582305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:04.099 qpair failed and we were unable to recover it. 00:34:04.099 [2024-07-13 08:20:55.582456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.099 [2024-07-13 08:20:55.582483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:04.099 qpair failed and we were unable to recover it. 00:34:04.099 [2024-07-13 08:20:55.582662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.099 [2024-07-13 08:20:55.582689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:04.099 qpair failed and we were unable to recover it. 00:34:04.099 [2024-07-13 08:20:55.582841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.099 [2024-07-13 08:20:55.582876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:04.099 qpair failed and we were unable to recover it. 00:34:04.099 [2024-07-13 08:20:55.583033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.099 [2024-07-13 08:20:55.583079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:04.099 qpair failed and we were unable to recover it. 00:34:04.099 [2024-07-13 08:20:55.583252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.099 [2024-07-13 08:20:55.583297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:04.099 qpair failed and we were unable to recover it. 00:34:04.099 [2024-07-13 08:20:55.583496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.099 [2024-07-13 08:20:55.583523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:04.099 qpair failed and we were unable to recover it. 00:34:04.099 [2024-07-13 08:20:55.583675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.099 [2024-07-13 08:20:55.583704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.099 qpair failed and we were unable to recover it. 00:34:04.099 [2024-07-13 08:20:55.583850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.099 [2024-07-13 08:20:55.583902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.099 qpair failed and we were unable to recover it. 00:34:04.099 [2024-07-13 08:20:55.584043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.099 [2024-07-13 08:20:55.584073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.099 qpair failed and we were unable to recover it. 
00:34:04.099 [2024-07-13 08:20:55.584228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.099 [2024-07-13 08:20:55.584255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.099 qpair failed and we were unable to recover it. 00:34:04.099 [2024-07-13 08:20:55.584432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.099 [2024-07-13 08:20:55.584474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.099 qpair failed and we were unable to recover it. 00:34:04.099 [2024-07-13 08:20:55.584637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.099 [2024-07-13 08:20:55.584667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.099 qpair failed and we were unable to recover it. 00:34:04.099 [2024-07-13 08:20:55.584840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.099 [2024-07-13 08:20:55.584875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:04.099 qpair failed and we were unable to recover it. 00:34:04.099 [2024-07-13 08:20:55.585024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.099 [2024-07-13 08:20:55.585052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:04.099 qpair failed and we were unable to recover it. 00:34:04.099 [2024-07-13 08:20:55.585221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.099 [2024-07-13 08:20:55.585252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:04.099 qpair failed and we were unable to recover it. 00:34:04.099 [2024-07-13 08:20:55.585392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.099 [2024-07-13 08:20:55.585421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:04.099 qpair failed and we were unable to recover it. 00:34:04.099 [2024-07-13 08:20:55.585601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.099 [2024-07-13 08:20:55.585629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:04.099 qpair failed and we were unable to recover it. 00:34:04.099 [2024-07-13 08:20:55.585757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.099 [2024-07-13 08:20:55.585783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:04.099 qpair failed and we were unable to recover it. 00:34:04.099 [2024-07-13 08:20:55.585966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.099 [2024-07-13 08:20:55.585999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.099 qpair failed and we were unable to recover it. 
00:34:04.099 [2024-07-13 08:20:55.586148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.099 [2024-07-13 08:20:55.586175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.099 qpair failed and we were unable to recover it. 00:34:04.099 [2024-07-13 08:20:55.586295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.099 [2024-07-13 08:20:55.586320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.099 qpair failed and we were unable to recover it. 00:34:04.099 [2024-07-13 08:20:55.586497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.099 [2024-07-13 08:20:55.586524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.099 qpair failed and we were unable to recover it. 00:34:04.099 [2024-07-13 08:20:55.586639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.099 [2024-07-13 08:20:55.586664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.100 qpair failed and we were unable to recover it. 00:34:04.100 [2024-07-13 08:20:55.586839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.100 [2024-07-13 08:20:55.586873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.100 qpair failed and we were unable to recover it. 00:34:04.100 [2024-07-13 08:20:55.587050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.100 [2024-07-13 08:20:55.587080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.100 qpair failed and we were unable to recover it. 00:34:04.100 [2024-07-13 08:20:55.587237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.100 [2024-07-13 08:20:55.587266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.100 qpair failed and we were unable to recover it. 00:34:04.100 [2024-07-13 08:20:55.587409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.100 [2024-07-13 08:20:55.587439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.100 qpair failed and we were unable to recover it. 00:34:04.100 [2024-07-13 08:20:55.587639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.100 [2024-07-13 08:20:55.587666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.100 qpair failed and we were unable to recover it. 00:34:04.100 [2024-07-13 08:20:55.587818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.100 [2024-07-13 08:20:55.587845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.100 qpair failed and we were unable to recover it. 
00:34:04.100 [2024-07-13 08:20:55.587978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.100 [2024-07-13 08:20:55.588005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.100 qpair failed and we were unable to recover it. 00:34:04.100 [2024-07-13 08:20:55.588182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.100 [2024-07-13 08:20:55.588225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.100 qpair failed and we were unable to recover it. 00:34:04.100 [2024-07-13 08:20:55.588411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.100 [2024-07-13 08:20:55.588440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.100 qpair failed and we were unable to recover it. 00:34:04.100 [2024-07-13 08:20:55.588582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.100 [2024-07-13 08:20:55.588612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.100 qpair failed and we were unable to recover it. 00:34:04.100 [2024-07-13 08:20:55.588805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.100 [2024-07-13 08:20:55.588831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.100 qpair failed and we were unable to recover it. 00:34:04.100 [2024-07-13 08:20:55.588978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.100 [2024-07-13 08:20:55.589004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.100 qpair failed and we were unable to recover it. 00:34:04.100 [2024-07-13 08:20:55.589153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.100 [2024-07-13 08:20:55.589180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.100 qpair failed and we were unable to recover it. 00:34:04.100 [2024-07-13 08:20:55.589294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.100 [2024-07-13 08:20:55.589320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.100 qpair failed and we were unable to recover it. 00:34:04.100 [2024-07-13 08:20:55.589479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.100 [2024-07-13 08:20:55.589506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.100 qpair failed and we were unable to recover it. 00:34:04.100 [2024-07-13 08:20:55.589657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.100 [2024-07-13 08:20:55.589684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.100 qpair failed and we were unable to recover it. 
00:34:04.100 [2024-07-13 08:20:55.589837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.100 [2024-07-13 08:20:55.589864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.100 qpair failed and we were unable to recover it. 00:34:04.100 [2024-07-13 08:20:55.590029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.100 [2024-07-13 08:20:55.590055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.100 qpair failed and we were unable to recover it. 00:34:04.100 [2024-07-13 08:20:55.590182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.100 [2024-07-13 08:20:55.590207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.100 qpair failed and we were unable to recover it. 00:34:04.100 [2024-07-13 08:20:55.590384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.100 [2024-07-13 08:20:55.590412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.100 qpair failed and we were unable to recover it. 00:34:04.100 [2024-07-13 08:20:55.590558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.100 [2024-07-13 08:20:55.590584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.100 qpair failed and we were unable to recover it. 00:34:04.100 [2024-07-13 08:20:55.590736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.100 [2024-07-13 08:20:55.590763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.100 qpair failed and we were unable to recover it. 00:34:04.100 [2024-07-13 08:20:55.590912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.100 [2024-07-13 08:20:55.590944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.100 qpair failed and we were unable to recover it. 00:34:04.100 [2024-07-13 08:20:55.591125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.100 [2024-07-13 08:20:55.591152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.100 qpair failed and we were unable to recover it. 00:34:04.100 [2024-07-13 08:20:55.591273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.100 [2024-07-13 08:20:55.591298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.100 qpair failed and we were unable to recover it. 00:34:04.100 [2024-07-13 08:20:55.591487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.100 [2024-07-13 08:20:55.591514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.100 qpair failed and we were unable to recover it. 
00:34:04.100 [2024-07-13 08:20:55.591664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.100 [2024-07-13 08:20:55.591691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.100 qpair failed and we were unable to recover it. 00:34:04.100 [2024-07-13 08:20:55.591849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.100 [2024-07-13 08:20:55.591888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.100 qpair failed and we were unable to recover it. 00:34:04.100 [2024-07-13 08:20:55.592022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.100 [2024-07-13 08:20:55.592048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.100 qpair failed and we were unable to recover it. 00:34:04.100 [2024-07-13 08:20:55.592200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.100 [2024-07-13 08:20:55.592228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.100 qpair failed and we were unable to recover it. 00:34:04.100 [2024-07-13 08:20:55.592360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.100 [2024-07-13 08:20:55.592393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.100 qpair failed and we were unable to recover it. 00:34:04.100 [2024-07-13 08:20:55.592597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.100 [2024-07-13 08:20:55.592624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.100 qpair failed and we were unable to recover it. 00:34:04.100 [2024-07-13 08:20:55.592801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.100 [2024-07-13 08:20:55.592828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.100 qpair failed and we were unable to recover it. 00:34:04.100 [2024-07-13 08:20:55.592986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.100 [2024-07-13 08:20:55.593015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.100 qpair failed and we were unable to recover it. 00:34:04.100 [2024-07-13 08:20:55.593141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.100 [2024-07-13 08:20:55.593184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.100 qpair failed and we were unable to recover it. 00:34:04.100 [2024-07-13 08:20:55.593383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.100 [2024-07-13 08:20:55.593410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.100 qpair failed and we were unable to recover it. 
00:34:04.100 [2024-07-13 08:20:55.593545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.100 [2024-07-13 08:20:55.593571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.100 qpair failed and we were unable to recover it. 00:34:04.100 [2024-07-13 08:20:55.593751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.100 [2024-07-13 08:20:55.593781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.100 qpair failed and we were unable to recover it. 00:34:04.100 [2024-07-13 08:20:55.593931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.100 [2024-07-13 08:20:55.593957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.100 qpair failed and we were unable to recover it. 00:34:04.100 [2024-07-13 08:20:55.594101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.100 [2024-07-13 08:20:55.594145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.100 qpair failed and we were unable to recover it. 00:34:04.101 [2024-07-13 08:20:55.594319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.101 [2024-07-13 08:20:55.594346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.101 qpair failed and we were unable to recover it. 00:34:04.101 [2024-07-13 08:20:55.594495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.101 [2024-07-13 08:20:55.594522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.101 qpair failed and we were unable to recover it. 00:34:04.101 [2024-07-13 08:20:55.594697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.101 [2024-07-13 08:20:55.594728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.101 qpair failed and we were unable to recover it. 00:34:04.101 [2024-07-13 08:20:55.594925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.101 [2024-07-13 08:20:55.594966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:04.101 qpair failed and we were unable to recover it. 00:34:04.101 [2024-07-13 08:20:55.595173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.101 [2024-07-13 08:20:55.595218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:04.101 qpair failed and we were unable to recover it. 00:34:04.101 [2024-07-13 08:20:55.595436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.101 [2024-07-13 08:20:55.595464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:04.101 qpair failed and we were unable to recover it. 
00:34:04.101 [2024-07-13 08:20:55.595617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.101 [2024-07-13 08:20:55.595645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:04.101 qpair failed and we were unable to recover it. 00:34:04.101 [2024-07-13 08:20:55.595822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.101 [2024-07-13 08:20:55.595850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:04.101 qpair failed and we were unable to recover it. 00:34:04.101 [2024-07-13 08:20:55.595985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.101 [2024-07-13 08:20:55.596011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:04.101 qpair failed and we were unable to recover it. 00:34:04.101 [2024-07-13 08:20:55.596165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.101 [2024-07-13 08:20:55.596210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:04.101 qpair failed and we were unable to recover it. 00:34:04.101 [2024-07-13 08:20:55.596391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.101 [2024-07-13 08:20:55.596435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:04.101 qpair failed and we were unable to recover it. 00:34:04.101 [2024-07-13 08:20:55.596618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.101 [2024-07-13 08:20:55.596645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:04.101 qpair failed and we were unable to recover it. 00:34:04.101 [2024-07-13 08:20:55.596778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.101 [2024-07-13 08:20:55.596804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:04.101 qpair failed and we were unable to recover it. 00:34:04.101 [2024-07-13 08:20:55.596937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.101 [2024-07-13 08:20:55.596965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:04.101 qpair failed and we were unable to recover it. 00:34:04.101 [2024-07-13 08:20:55.597152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.101 [2024-07-13 08:20:55.597180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:04.101 qpair failed and we were unable to recover it. 00:34:04.101 [2024-07-13 08:20:55.597329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.101 [2024-07-13 08:20:55.597357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:04.101 qpair failed and we were unable to recover it. 
00:34:04.101 [2024-07-13 08:20:55.597506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.101 [2024-07-13 08:20:55.597534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:04.101 qpair failed and we were unable to recover it. 00:34:04.101 [2024-07-13 08:20:55.597693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.101 [2024-07-13 08:20:55.597721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:04.101 qpair failed and we were unable to recover it. 00:34:04.101 [2024-07-13 08:20:55.597908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.101 [2024-07-13 08:20:55.597937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:04.101 qpair failed and we were unable to recover it. 00:34:04.101 [2024-07-13 08:20:55.598091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.101 [2024-07-13 08:20:55.598119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:04.101 qpair failed and we were unable to recover it. 00:34:04.101 [2024-07-13 08:20:55.598299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.101 [2024-07-13 08:20:55.598344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:04.101 qpair failed and we were unable to recover it. 00:34:04.101 [2024-07-13 08:20:55.598490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.101 [2024-07-13 08:20:55.598518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:04.101 qpair failed and we were unable to recover it. 00:34:04.101 [2024-07-13 08:20:55.598643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.101 [2024-07-13 08:20:55.598669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:04.101 qpair failed and we were unable to recover it. 00:34:04.101 [2024-07-13 08:20:55.598825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.101 [2024-07-13 08:20:55.598853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:04.101 qpair failed and we were unable to recover it. 00:34:04.101 [2024-07-13 08:20:55.599014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.101 [2024-07-13 08:20:55.599042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:04.101 qpair failed and we were unable to recover it. 00:34:04.101 [2024-07-13 08:20:55.599217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.101 [2024-07-13 08:20:55.599262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:04.101 qpair failed and we were unable to recover it. 
00:34:04.101 [2024-07-13 08:20:55.599410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:04.101 [2024-07-13 08:20:55.599455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420
00:34:04.101 qpair failed and we were unable to recover it.
00:34:04.101 [... the same three-line failure pattern repeats roughly 200 more times between 08:20:55.599 and 08:20:55.638: connect() fails with errno = 111 on every attempt against addr=10.0.0.2, port=4420, first for tqpair=0x7f8fe4000b90 and then for tqpair=0xacd600, and each qpair fails without recovering ...]
00:34:04.106 [2024-07-13 08:20:55.638198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.106 [2024-07-13 08:20:55.638228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.106 qpair failed and we were unable to recover it. 00:34:04.106 [2024-07-13 08:20:55.638369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.106 [2024-07-13 08:20:55.638397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.106 qpair failed and we were unable to recover it. 00:34:04.106 [2024-07-13 08:20:55.638580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.106 [2024-07-13 08:20:55.638607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.106 qpair failed and we were unable to recover it. 00:34:04.106 [2024-07-13 08:20:55.638728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.106 [2024-07-13 08:20:55.638754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.106 qpair failed and we were unable to recover it. 00:34:04.106 [2024-07-13 08:20:55.638946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.106 [2024-07-13 08:20:55.638977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.106 qpair failed and we were unable to recover it. 00:34:04.106 [2024-07-13 08:20:55.639128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.106 [2024-07-13 08:20:55.639173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.106 qpair failed and we were unable to recover it. 00:34:04.106 [2024-07-13 08:20:55.639376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.107 [2024-07-13 08:20:55.639403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.107 qpair failed and we were unable to recover it. 00:34:04.107 [2024-07-13 08:20:55.639550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.107 [2024-07-13 08:20:55.639577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.107 qpair failed and we were unable to recover it. 00:34:04.107 [2024-07-13 08:20:55.639748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.107 [2024-07-13 08:20:55.639777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.107 qpair failed and we were unable to recover it. 00:34:04.107 [2024-07-13 08:20:55.639918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.107 [2024-07-13 08:20:55.639946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.107 qpair failed and we were unable to recover it. 
00:34:04.107 [2024-07-13 08:20:55.640098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.107 [2024-07-13 08:20:55.640125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.107 qpair failed and we were unable to recover it. 00:34:04.107 [2024-07-13 08:20:55.640276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.107 [2024-07-13 08:20:55.640303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.107 qpair failed and we were unable to recover it. 00:34:04.107 [2024-07-13 08:20:55.640453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.107 [2024-07-13 08:20:55.640479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.107 qpair failed and we were unable to recover it. 00:34:04.107 [2024-07-13 08:20:55.640600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.107 [2024-07-13 08:20:55.640626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.107 qpair failed and we were unable to recover it. 00:34:04.107 [2024-07-13 08:20:55.640774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.107 [2024-07-13 08:20:55.640801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.107 qpair failed and we were unable to recover it. 00:34:04.107 [2024-07-13 08:20:55.640951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.107 [2024-07-13 08:20:55.640981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.107 qpair failed and we were unable to recover it. 00:34:04.107 [2024-07-13 08:20:55.641125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.107 [2024-07-13 08:20:55.641151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.107 qpair failed and we were unable to recover it. 00:34:04.107 [2024-07-13 08:20:55.641293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.107 [2024-07-13 08:20:55.641337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.107 qpair failed and we were unable to recover it. 00:34:04.107 [2024-07-13 08:20:55.641545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.107 [2024-07-13 08:20:55.641572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.107 qpair failed and we were unable to recover it. 00:34:04.107 [2024-07-13 08:20:55.641718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.107 [2024-07-13 08:20:55.641745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.107 qpair failed and we were unable to recover it. 
00:34:04.107 [2024-07-13 08:20:55.641892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.107 [2024-07-13 08:20:55.641919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.107 qpair failed and we were unable to recover it. 00:34:04.107 [2024-07-13 08:20:55.642035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.107 [2024-07-13 08:20:55.642061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.107 qpair failed and we were unable to recover it. 00:34:04.107 [2024-07-13 08:20:55.642246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.107 [2024-07-13 08:20:55.642272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.107 qpair failed and we were unable to recover it. 00:34:04.107 [2024-07-13 08:20:55.642417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.107 [2024-07-13 08:20:55.642443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.107 qpair failed and we were unable to recover it. 00:34:04.107 [2024-07-13 08:20:55.642631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.107 [2024-07-13 08:20:55.642657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.107 qpair failed and we were unable to recover it. 00:34:04.107 [2024-07-13 08:20:55.642798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.107 [2024-07-13 08:20:55.642825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.107 qpair failed and we were unable to recover it. 00:34:04.107 [2024-07-13 08:20:55.642990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.107 [2024-07-13 08:20:55.643017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.107 qpair failed and we were unable to recover it. 00:34:04.107 [2024-07-13 08:20:55.643145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.107 [2024-07-13 08:20:55.643189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.107 qpair failed and we were unable to recover it. 00:34:04.107 [2024-07-13 08:20:55.643384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.107 [2024-07-13 08:20:55.643412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.107 qpair failed and we were unable to recover it. 00:34:04.107 [2024-07-13 08:20:55.643573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.107 [2024-07-13 08:20:55.643602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.107 qpair failed and we were unable to recover it. 
00:34:04.107 [2024-07-13 08:20:55.643771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.107 [2024-07-13 08:20:55.643800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.107 qpair failed and we were unable to recover it. 00:34:04.107 [2024-07-13 08:20:55.643960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.107 [2024-07-13 08:20:55.643986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.107 qpair failed and we were unable to recover it. 00:34:04.107 [2024-07-13 08:20:55.644122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.107 [2024-07-13 08:20:55.644167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.107 qpair failed and we were unable to recover it. 00:34:04.107 [2024-07-13 08:20:55.644363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.107 [2024-07-13 08:20:55.644388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.107 qpair failed and we were unable to recover it. 00:34:04.107 [2024-07-13 08:20:55.644565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.107 [2024-07-13 08:20:55.644591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.107 qpair failed and we were unable to recover it. 00:34:04.107 [2024-07-13 08:20:55.644771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.107 [2024-07-13 08:20:55.644800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.107 qpair failed and we were unable to recover it. 00:34:04.107 [2024-07-13 08:20:55.644953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.107 [2024-07-13 08:20:55.644980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.107 qpair failed and we were unable to recover it. 00:34:04.107 [2024-07-13 08:20:55.645110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.107 [2024-07-13 08:20:55.645136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.107 qpair failed and we were unable to recover it. 00:34:04.107 [2024-07-13 08:20:55.645287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.107 [2024-07-13 08:20:55.645330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.107 qpair failed and we were unable to recover it. 00:34:04.107 [2024-07-13 08:20:55.645478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.107 [2024-07-13 08:20:55.645504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.107 qpair failed and we were unable to recover it. 
00:34:04.107 [2024-07-13 08:20:55.645679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.107 [2024-07-13 08:20:55.645705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.107 qpair failed and we were unable to recover it. 00:34:04.107 [2024-07-13 08:20:55.645830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.107 [2024-07-13 08:20:55.645857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.107 qpair failed and we were unable to recover it. 00:34:04.107 [2024-07-13 08:20:55.645993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.108 [2024-07-13 08:20:55.646019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.108 qpair failed and we were unable to recover it. 00:34:04.108 [2024-07-13 08:20:55.646173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.108 [2024-07-13 08:20:55.646200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.108 qpair failed and we were unable to recover it. 00:34:04.108 [2024-07-13 08:20:55.646349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.108 [2024-07-13 08:20:55.646375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.108 qpair failed and we were unable to recover it. 00:34:04.108 [2024-07-13 08:20:55.646551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.108 [2024-07-13 08:20:55.646587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.108 qpair failed and we were unable to recover it. 00:34:04.108 [2024-07-13 08:20:55.646777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.108 [2024-07-13 08:20:55.646803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.108 qpair failed and we were unable to recover it. 00:34:04.108 [2024-07-13 08:20:55.646983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.108 [2024-07-13 08:20:55.647010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.108 qpair failed and we were unable to recover it. 00:34:04.108 [2024-07-13 08:20:55.647218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.108 [2024-07-13 08:20:55.647245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.108 qpair failed and we were unable to recover it. 00:34:04.108 [2024-07-13 08:20:55.647392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.108 [2024-07-13 08:20:55.647418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.108 qpair failed and we were unable to recover it. 
00:34:04.108 [2024-07-13 08:20:55.647583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.108 [2024-07-13 08:20:55.647613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.108 qpair failed and we were unable to recover it. 00:34:04.108 [2024-07-13 08:20:55.647811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.108 [2024-07-13 08:20:55.647838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.108 qpair failed and we were unable to recover it. 00:34:04.108 [2024-07-13 08:20:55.647995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.108 [2024-07-13 08:20:55.648023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.108 qpair failed and we were unable to recover it. 00:34:04.108 [2024-07-13 08:20:55.648188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.108 [2024-07-13 08:20:55.648214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.108 qpair failed and we were unable to recover it. 00:34:04.108 [2024-07-13 08:20:55.648389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.108 [2024-07-13 08:20:55.648415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.108 qpair failed and we were unable to recover it. 00:34:04.108 [2024-07-13 08:20:55.648565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.108 [2024-07-13 08:20:55.648592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.108 qpair failed and we were unable to recover it. 00:34:04.108 [2024-07-13 08:20:55.648709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.108 [2024-07-13 08:20:55.648736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.108 qpair failed and we were unable to recover it. 00:34:04.108 [2024-07-13 08:20:55.648907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.108 [2024-07-13 08:20:55.648937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.108 qpair failed and we were unable to recover it. 00:34:04.108 [2024-07-13 08:20:55.649079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.108 [2024-07-13 08:20:55.649105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.108 qpair failed and we were unable to recover it. 00:34:04.108 [2024-07-13 08:20:55.649286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.108 [2024-07-13 08:20:55.649313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.108 qpair failed and we were unable to recover it. 
00:34:04.108 [2024-07-13 08:20:55.649495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.108 [2024-07-13 08:20:55.649524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.108 qpair failed and we were unable to recover it. 00:34:04.108 [2024-07-13 08:20:55.649667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.108 [2024-07-13 08:20:55.649694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.108 qpair failed and we were unable to recover it. 00:34:04.108 [2024-07-13 08:20:55.649819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.108 [2024-07-13 08:20:55.649844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.108 qpair failed and we were unable to recover it. 00:34:04.108 [2024-07-13 08:20:55.650029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.108 [2024-07-13 08:20:55.650056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.108 qpair failed and we were unable to recover it. 00:34:04.108 [2024-07-13 08:20:55.650208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.108 [2024-07-13 08:20:55.650234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.108 qpair failed and we were unable to recover it. 00:34:04.108 [2024-07-13 08:20:55.650433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.108 [2024-07-13 08:20:55.650461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.108 qpair failed and we were unable to recover it. 00:34:04.108 [2024-07-13 08:20:55.650616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.108 [2024-07-13 08:20:55.650645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.108 qpair failed and we were unable to recover it. 00:34:04.108 [2024-07-13 08:20:55.650813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.108 [2024-07-13 08:20:55.650839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.108 qpair failed and we were unable to recover it. 00:34:04.108 [2024-07-13 08:20:55.651018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.108 [2024-07-13 08:20:55.651045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.108 qpair failed and we were unable to recover it. 00:34:04.108 [2024-07-13 08:20:55.651216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.108 [2024-07-13 08:20:55.651243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.108 qpair failed and we were unable to recover it. 
00:34:04.108 [2024-07-13 08:20:55.651405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.108 [2024-07-13 08:20:55.651431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.108 qpair failed and we were unable to recover it. 00:34:04.108 [2024-07-13 08:20:55.651628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.108 [2024-07-13 08:20:55.651656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.108 qpair failed and we were unable to recover it. 00:34:04.108 [2024-07-13 08:20:55.651823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.108 [2024-07-13 08:20:55.651853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.108 qpair failed and we were unable to recover it. 00:34:04.108 [2024-07-13 08:20:55.652078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.108 [2024-07-13 08:20:55.652105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.108 qpair failed and we were unable to recover it. 00:34:04.108 [2024-07-13 08:20:55.652279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.108 [2024-07-13 08:20:55.652309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.108 qpair failed and we were unable to recover it. 00:34:04.108 [2024-07-13 08:20:55.652481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.108 [2024-07-13 08:20:55.652511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.108 qpair failed and we were unable to recover it. 00:34:04.108 [2024-07-13 08:20:55.652688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.108 [2024-07-13 08:20:55.652714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.108 qpair failed and we were unable to recover it. 00:34:04.108 [2024-07-13 08:20:55.652882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.108 [2024-07-13 08:20:55.652913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.108 qpair failed and we were unable to recover it. 00:34:04.108 [2024-07-13 08:20:55.653079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.108 [2024-07-13 08:20:55.653106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.108 qpair failed and we were unable to recover it. 00:34:04.108 [2024-07-13 08:20:55.653280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.108 [2024-07-13 08:20:55.653324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.108 qpair failed and we were unable to recover it. 
00:34:04.108 [2024-07-13 08:20:55.653483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.108 [2024-07-13 08:20:55.653509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.108 qpair failed and we were unable to recover it. 00:34:04.108 [2024-07-13 08:20:55.653715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.108 [2024-07-13 08:20:55.653744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.108 qpair failed and we were unable to recover it. 00:34:04.108 [2024-07-13 08:20:55.653909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.108 [2024-07-13 08:20:55.653938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.108 qpair failed and we were unable to recover it. 00:34:04.109 [2024-07-13 08:20:55.654107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.109 [2024-07-13 08:20:55.654136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.109 qpair failed and we were unable to recover it. 00:34:04.109 [2024-07-13 08:20:55.654302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.109 [2024-07-13 08:20:55.654329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.109 qpair failed and we were unable to recover it. 00:34:04.109 [2024-07-13 08:20:55.654526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.109 [2024-07-13 08:20:55.654597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.109 qpair failed and we were unable to recover it. 00:34:04.109 [2024-07-13 08:20:55.654803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.109 [2024-07-13 08:20:55.654830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.109 qpair failed and we were unable to recover it. 00:34:04.109 [2024-07-13 08:20:55.655003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.109 [2024-07-13 08:20:55.655032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.109 qpair failed and we were unable to recover it. 00:34:04.109 [2024-07-13 08:20:55.655225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.109 [2024-07-13 08:20:55.655252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.109 qpair failed and we were unable to recover it. 00:34:04.109 [2024-07-13 08:20:55.655415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.109 [2024-07-13 08:20:55.655445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.109 qpair failed and we were unable to recover it. 
00:34:04.109 [2024-07-13 08:20:55.655636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.109 [2024-07-13 08:20:55.655664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.109 qpair failed and we were unable to recover it. 00:34:04.109 [2024-07-13 08:20:55.655806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.109 [2024-07-13 08:20:55.655833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.109 qpair failed and we were unable to recover it. 00:34:04.109 [2024-07-13 08:20:55.655972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.109 [2024-07-13 08:20:55.656001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.109 qpair failed and we were unable to recover it. 00:34:04.109 [2024-07-13 08:20:55.656152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.109 [2024-07-13 08:20:55.656194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.109 qpair failed and we were unable to recover it. 00:34:04.109 [2024-07-13 08:20:55.656385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.109 [2024-07-13 08:20:55.656414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.109 qpair failed and we were unable to recover it. 00:34:04.109 [2024-07-13 08:20:55.656543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.109 [2024-07-13 08:20:55.656572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.109 qpair failed and we were unable to recover it. 00:34:04.109 [2024-07-13 08:20:55.656723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.109 [2024-07-13 08:20:55.656751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.109 qpair failed and we were unable to recover it. 00:34:04.109 [2024-07-13 08:20:55.656908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.109 [2024-07-13 08:20:55.656936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.109 qpair failed and we were unable to recover it. 00:34:04.109 [2024-07-13 08:20:55.657106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.109 [2024-07-13 08:20:55.657133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.109 qpair failed and we were unable to recover it. 00:34:04.109 [2024-07-13 08:20:55.657296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.109 [2024-07-13 08:20:55.657323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.109 qpair failed and we were unable to recover it. 
00:34:04.109 [2024-07-13 08:20:55.657542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.109 [2024-07-13 08:20:55.657568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.109 qpair failed and we were unable to recover it. 00:34:04.109 [2024-07-13 08:20:55.657747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.109 [2024-07-13 08:20:55.657774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.109 qpair failed and we were unable to recover it. 00:34:04.109 [2024-07-13 08:20:55.657929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.109 [2024-07-13 08:20:55.657972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.109 qpair failed and we were unable to recover it. 00:34:04.109 [2024-07-13 08:20:55.658135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.109 [2024-07-13 08:20:55.658164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.109 qpair failed and we were unable to recover it. 00:34:04.109 [2024-07-13 08:20:55.658333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.109 [2024-07-13 08:20:55.658360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.109 qpair failed and we were unable to recover it. 00:34:04.109 [2024-07-13 08:20:55.658536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.109 [2024-07-13 08:20:55.658601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.109 qpair failed and we were unable to recover it. 00:34:04.109 [2024-07-13 08:20:55.658777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.109 [2024-07-13 08:20:55.658803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.109 qpair failed and we were unable to recover it. 00:34:04.109 [2024-07-13 08:20:55.658981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.109 [2024-07-13 08:20:55.659008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.109 qpair failed and we were unable to recover it. 00:34:04.109 [2024-07-13 08:20:55.659164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.109 [2024-07-13 08:20:55.659191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.109 qpair failed and we were unable to recover it. 00:34:04.109 [2024-07-13 08:20:55.659315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.109 [2024-07-13 08:20:55.659348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.109 qpair failed and we were unable to recover it. 
00:34:04.109 [2024-07-13 08:20:55.659498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.109 [2024-07-13 08:20:55.659524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.109 qpair failed and we were unable to recover it. 00:34:04.109 [2024-07-13 08:20:55.659733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.109 [2024-07-13 08:20:55.659762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.109 qpair failed and we were unable to recover it. 00:34:04.109 [2024-07-13 08:20:55.659960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.109 [2024-07-13 08:20:55.659987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.109 qpair failed and we were unable to recover it. 00:34:04.109 [2024-07-13 08:20:55.660155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.109 [2024-07-13 08:20:55.660188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.109 qpair failed and we were unable to recover it. 00:34:04.109 [2024-07-13 08:20:55.660344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.109 [2024-07-13 08:20:55.660373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.109 qpair failed and we were unable to recover it. 00:34:04.109 [2024-07-13 08:20:55.660531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.109 [2024-07-13 08:20:55.660560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.109 qpair failed and we were unable to recover it. 00:34:04.109 [2024-07-13 08:20:55.660725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.109 [2024-07-13 08:20:55.660752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.109 qpair failed and we were unable to recover it. 00:34:04.109 [2024-07-13 08:20:55.660924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.109 [2024-07-13 08:20:55.660954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.109 qpair failed and we were unable to recover it. 00:34:04.109 [2024-07-13 08:20:55.661156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.109 [2024-07-13 08:20:55.661183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.109 qpair failed and we were unable to recover it. 00:34:04.109 [2024-07-13 08:20:55.661325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.109 [2024-07-13 08:20:55.661351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.109 qpair failed and we were unable to recover it. 
00:34:04.109 [2024-07-13 08:20:55.661493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.109 [2024-07-13 08:20:55.661519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.109 qpair failed and we were unable to recover it. 00:34:04.109 [2024-07-13 08:20:55.661688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.109 [2024-07-13 08:20:55.661718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.109 qpair failed and we were unable to recover it. 00:34:04.109 [2024-07-13 08:20:55.661860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.109 [2024-07-13 08:20:55.661897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.109 qpair failed and we were unable to recover it. 00:34:04.109 [2024-07-13 08:20:55.662089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.109 [2024-07-13 08:20:55.662119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.109 qpair failed and we were unable to recover it. 00:34:04.109 [2024-07-13 08:20:55.662259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.110 [2024-07-13 08:20:55.662286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.110 qpair failed and we were unable to recover it. 00:34:04.110 [2024-07-13 08:20:55.662436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.110 [2024-07-13 08:20:55.662479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.110 qpair failed and we were unable to recover it. 00:34:04.110 [2024-07-13 08:20:55.662716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.110 [2024-07-13 08:20:55.662745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.110 qpair failed and we were unable to recover it. 00:34:04.110 [2024-07-13 08:20:55.662917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.110 [2024-07-13 08:20:55.662946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.110 qpair failed and we were unable to recover it. 00:34:04.110 [2024-07-13 08:20:55.663095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.110 [2024-07-13 08:20:55.663122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.110 qpair failed and we were unable to recover it. 00:34:04.110 [2024-07-13 08:20:55.663252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.110 [2024-07-13 08:20:55.663279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.110 qpair failed and we were unable to recover it. 
00:34:04.110 [2024-07-13 08:20:55.663447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.110 [2024-07-13 08:20:55.663475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.110 qpair failed and we were unable to recover it.
[... the same connect() failed, errno = 111 / sock connection error / qpair failed triple for tqpair=0xacd600 (addr=10.0.0.2, port=4420) repeats continuously from 08:20:55.663 through 08:20:55.679; duplicates elided ...]
00:34:04.112 [2024-07-13 08:20:55.679041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.112 [2024-07-13 08:20:55.679083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.112 qpair failed and we were unable to recover it.
[... the same triple repeats for tqpair=0x7f8fec000b90 through 08:20:55.680, then again for tqpair=0xacd600 from 08:20:55.680 through 08:20:55.705; duplicates elided ...]
00:34:04.115 [2024-07-13 08:20:55.705019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.115 [2024-07-13 08:20:55.705048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.115 qpair failed and we were unable to recover it.
00:34:04.115 [2024-07-13 08:20:55.705187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.115 [2024-07-13 08:20:55.705213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.115 qpair failed and we were unable to recover it. 00:34:04.115 [2024-07-13 08:20:55.705337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.115 [2024-07-13 08:20:55.705363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.115 qpair failed and we were unable to recover it. 00:34:04.115 [2024-07-13 08:20:55.705499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.115 [2024-07-13 08:20:55.705525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.115 qpair failed and we were unable to recover it. 00:34:04.115 [2024-07-13 08:20:55.705682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.115 [2024-07-13 08:20:55.705711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.115 qpair failed and we were unable to recover it. 00:34:04.115 [2024-07-13 08:20:55.705859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.115 [2024-07-13 08:20:55.705892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.115 qpair failed and we were unable to recover it. 00:34:04.115 [2024-07-13 08:20:55.706045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.115 [2024-07-13 08:20:55.706089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.115 qpair failed and we were unable to recover it. 00:34:04.115 [2024-07-13 08:20:55.706244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.115 [2024-07-13 08:20:55.706270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.115 qpair failed and we were unable to recover it. 00:34:04.115 [2024-07-13 08:20:55.706424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.115 [2024-07-13 08:20:55.706450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.115 qpair failed and we were unable to recover it. 00:34:04.115 [2024-07-13 08:20:55.706599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.115 [2024-07-13 08:20:55.706625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.115 qpair failed and we were unable to recover it. 00:34:04.115 [2024-07-13 08:20:55.706772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.115 [2024-07-13 08:20:55.706799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.115 qpair failed and we were unable to recover it. 
00:34:04.115 [2024-07-13 08:20:55.706921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.115 [2024-07-13 08:20:55.706947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.115 qpair failed and we were unable to recover it. 00:34:04.115 [2024-07-13 08:20:55.707109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.115 [2024-07-13 08:20:55.707138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.115 qpair failed and we were unable to recover it. 00:34:04.115 [2024-07-13 08:20:55.707334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.115 [2024-07-13 08:20:55.707359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.115 qpair failed and we were unable to recover it. 00:34:04.115 [2024-07-13 08:20:55.707518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.115 [2024-07-13 08:20:55.707545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.115 qpair failed and we were unable to recover it. 00:34:04.115 [2024-07-13 08:20:55.707673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.115 [2024-07-13 08:20:55.707700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.115 qpair failed and we were unable to recover it. 00:34:04.115 [2024-07-13 08:20:55.707963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.115 [2024-07-13 08:20:55.707994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.115 qpair failed and we were unable to recover it. 00:34:04.115 [2024-07-13 08:20:55.708174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.115 [2024-07-13 08:20:55.708201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.115 qpair failed and we were unable to recover it. 00:34:04.115 [2024-07-13 08:20:55.708373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.115 [2024-07-13 08:20:55.708399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.115 qpair failed and we were unable to recover it. 00:34:04.115 [2024-07-13 08:20:55.708548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.115 [2024-07-13 08:20:55.708576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.115 qpair failed and we were unable to recover it. 00:34:04.115 [2024-07-13 08:20:55.708733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.115 [2024-07-13 08:20:55.708761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.115 qpair failed and we were unable to recover it. 
00:34:04.115 [2024-07-13 08:20:55.708917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.115 [2024-07-13 08:20:55.708943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.115 qpair failed and we were unable to recover it. 00:34:04.115 [2024-07-13 08:20:55.709096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.115 [2024-07-13 08:20:55.709122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.116 qpair failed and we were unable to recover it. 00:34:04.116 [2024-07-13 08:20:55.709268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.116 [2024-07-13 08:20:55.709294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.116 qpair failed and we were unable to recover it. 00:34:04.116 [2024-07-13 08:20:55.709419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.116 [2024-07-13 08:20:55.709445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.116 qpair failed and we were unable to recover it. 00:34:04.116 [2024-07-13 08:20:55.709598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.116 [2024-07-13 08:20:55.709624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.116 qpair failed and we were unable to recover it. 00:34:04.116 [2024-07-13 08:20:55.709772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.116 [2024-07-13 08:20:55.709798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.116 qpair failed and we were unable to recover it. 00:34:04.116 [2024-07-13 08:20:55.709974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.116 [2024-07-13 08:20:55.710003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.116 qpair failed and we were unable to recover it. 00:34:04.116 [2024-07-13 08:20:55.710153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.116 [2024-07-13 08:20:55.710185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.116 qpair failed and we were unable to recover it. 00:34:04.116 [2024-07-13 08:20:55.710332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.116 [2024-07-13 08:20:55.710358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.116 qpair failed and we were unable to recover it. 00:34:04.116 [2024-07-13 08:20:55.710517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.116 [2024-07-13 08:20:55.710545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.116 qpair failed and we were unable to recover it. 
00:34:04.116 [2024-07-13 08:20:55.710720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.116 [2024-07-13 08:20:55.710765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.116 qpair failed and we were unable to recover it. 00:34:04.116 [2024-07-13 08:20:55.710952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.116 [2024-07-13 08:20:55.710981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.116 qpair failed and we were unable to recover it. 00:34:04.116 [2024-07-13 08:20:55.711133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.116 [2024-07-13 08:20:55.711160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.116 qpair failed and we were unable to recover it. 00:34:04.116 [2024-07-13 08:20:55.711354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.116 [2024-07-13 08:20:55.711383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.116 qpair failed and we were unable to recover it. 00:34:04.116 [2024-07-13 08:20:55.711548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.116 [2024-07-13 08:20:55.711577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.116 qpair failed and we were unable to recover it. 00:34:04.116 [2024-07-13 08:20:55.711748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.116 [2024-07-13 08:20:55.711775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.116 qpair failed and we were unable to recover it. 00:34:04.116 [2024-07-13 08:20:55.711939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.116 [2024-07-13 08:20:55.711967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.116 qpair failed and we were unable to recover it. 00:34:04.116 [2024-07-13 08:20:55.712165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.116 [2024-07-13 08:20:55.712194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.116 qpair failed and we were unable to recover it. 00:34:04.116 [2024-07-13 08:20:55.712367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.116 [2024-07-13 08:20:55.712393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.116 qpair failed and we were unable to recover it. 00:34:04.116 [2024-07-13 08:20:55.712538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.116 [2024-07-13 08:20:55.712564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.116 qpair failed and we were unable to recover it. 
00:34:04.116 [2024-07-13 08:20:55.712710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.116 [2024-07-13 08:20:55.712742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.116 qpair failed and we were unable to recover it. 00:34:04.116 [2024-07-13 08:20:55.712921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.116 [2024-07-13 08:20:55.712951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.116 qpair failed and we were unable to recover it. 00:34:04.116 [2024-07-13 08:20:55.713114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.116 [2024-07-13 08:20:55.713150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.116 qpair failed and we were unable to recover it. 00:34:04.116 [2024-07-13 08:20:55.713343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.116 [2024-07-13 08:20:55.713372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.116 qpair failed and we were unable to recover it. 00:34:04.116 [2024-07-13 08:20:55.713551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.116 [2024-07-13 08:20:55.713577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.116 qpair failed and we were unable to recover it. 00:34:04.116 [2024-07-13 08:20:55.713747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.116 [2024-07-13 08:20:55.713783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.116 qpair failed and we were unable to recover it. 00:34:04.116 [2024-07-13 08:20:55.713949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.116 [2024-07-13 08:20:55.713978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.116 qpair failed and we were unable to recover it. 00:34:04.116 [2024-07-13 08:20:55.714146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.116 [2024-07-13 08:20:55.714175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.116 qpair failed and we were unable to recover it. 00:34:04.116 [2024-07-13 08:20:55.714375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.116 [2024-07-13 08:20:55.714401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.116 qpair failed and we were unable to recover it. 00:34:04.116 [2024-07-13 08:20:55.714645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.116 [2024-07-13 08:20:55.714698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.116 qpair failed and we were unable to recover it. 
00:34:04.116 [2024-07-13 08:20:55.714877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.116 [2024-07-13 08:20:55.714904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.116 qpair failed and we were unable to recover it. 00:34:04.116 [2024-07-13 08:20:55.715055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.116 [2024-07-13 08:20:55.715081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.116 qpair failed and we were unable to recover it. 00:34:04.116 [2024-07-13 08:20:55.715213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.116 [2024-07-13 08:20:55.715239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.116 qpair failed and we were unable to recover it. 00:34:04.116 [2024-07-13 08:20:55.715368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.116 [2024-07-13 08:20:55.715410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.116 qpair failed and we were unable to recover it. 00:34:04.116 [2024-07-13 08:20:55.715602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.116 [2024-07-13 08:20:55.715630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.116 qpair failed and we were unable to recover it. 00:34:04.116 [2024-07-13 08:20:55.715791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.116 [2024-07-13 08:20:55.715820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.116 qpair failed and we were unable to recover it. 00:34:04.116 [2024-07-13 08:20:55.716006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.116 [2024-07-13 08:20:55.716033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.116 qpair failed and we were unable to recover it. 00:34:04.116 [2024-07-13 08:20:55.716179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.116 [2024-07-13 08:20:55.716208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.116 qpair failed and we were unable to recover it. 00:34:04.116 [2024-07-13 08:20:55.716376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.116 [2024-07-13 08:20:55.716405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.116 qpair failed and we were unable to recover it. 00:34:04.116 [2024-07-13 08:20:55.716581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.116 [2024-07-13 08:20:55.716611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.116 qpair failed and we were unable to recover it. 
00:34:04.116 [2024-07-13 08:20:55.716781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.116 [2024-07-13 08:20:55.716807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.116 qpair failed and we were unable to recover it. 00:34:04.116 [2024-07-13 08:20:55.716987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.116 [2024-07-13 08:20:55.717017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.116 qpair failed and we were unable to recover it. 00:34:04.116 [2024-07-13 08:20:55.717187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.116 [2024-07-13 08:20:55.717216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.116 qpair failed and we were unable to recover it. 00:34:04.117 [2024-07-13 08:20:55.717380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.117 [2024-07-13 08:20:55.717409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.117 qpair failed and we were unable to recover it. 00:34:04.117 [2024-07-13 08:20:55.717565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.117 [2024-07-13 08:20:55.717592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.117 qpair failed and we were unable to recover it. 00:34:04.117 [2024-07-13 08:20:55.717721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.117 [2024-07-13 08:20:55.717747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.117 qpair failed and we were unable to recover it. 00:34:04.117 [2024-07-13 08:20:55.717909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.117 [2024-07-13 08:20:55.717936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.117 qpair failed and we were unable to recover it. 00:34:04.117 [2024-07-13 08:20:55.718074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.117 [2024-07-13 08:20:55.718102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.117 qpair failed and we were unable to recover it. 00:34:04.117 [2024-07-13 08:20:55.718279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.117 [2024-07-13 08:20:55.718305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.117 qpair failed and we were unable to recover it. 00:34:04.117 [2024-07-13 08:20:55.718485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.117 [2024-07-13 08:20:55.718549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.117 qpair failed and we were unable to recover it. 
00:34:04.117 [2024-07-13 08:20:55.718717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.117 [2024-07-13 08:20:55.718748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.117 qpair failed and we were unable to recover it. 00:34:04.117 [2024-07-13 08:20:55.718926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.117 [2024-07-13 08:20:55.718956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.117 qpair failed and we were unable to recover it. 00:34:04.117 [2024-07-13 08:20:55.719132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.117 [2024-07-13 08:20:55.719158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.117 qpair failed and we were unable to recover it. 00:34:04.117 [2024-07-13 08:20:55.719312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.117 [2024-07-13 08:20:55.719355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.117 qpair failed and we were unable to recover it. 00:34:04.117 [2024-07-13 08:20:55.719517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.117 [2024-07-13 08:20:55.719544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.117 qpair failed and we were unable to recover it. 00:34:04.117 [2024-07-13 08:20:55.719694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.117 [2024-07-13 08:20:55.719722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.117 qpair failed and we were unable to recover it. 00:34:04.117 [2024-07-13 08:20:55.719850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.117 [2024-07-13 08:20:55.719880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.117 qpair failed and we were unable to recover it. 00:34:04.117 [2024-07-13 08:20:55.720042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.117 [2024-07-13 08:20:55.720071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.117 qpair failed and we were unable to recover it. 00:34:04.117 [2024-07-13 08:20:55.720281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.117 [2024-07-13 08:20:55.720307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.117 qpair failed and we were unable to recover it. 00:34:04.117 [2024-07-13 08:20:55.720476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.117 [2024-07-13 08:20:55.720505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.117 qpair failed and we were unable to recover it. 
00:34:04.117 [2024-07-13 08:20:55.720662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.117 [2024-07-13 08:20:55.720689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.117 qpair failed and we were unable to recover it. 00:34:04.117 [2024-07-13 08:20:55.720903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.117 [2024-07-13 08:20:55.720936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.117 qpair failed and we were unable to recover it. 00:34:04.117 [2024-07-13 08:20:55.721141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.117 [2024-07-13 08:20:55.721167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.117 qpair failed and we were unable to recover it. 00:34:04.117 [2024-07-13 08:20:55.721400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.117 [2024-07-13 08:20:55.721429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.117 qpair failed and we were unable to recover it. 00:34:04.117 [2024-07-13 08:20:55.721581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.117 [2024-07-13 08:20:55.721607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.117 qpair failed and we were unable to recover it. 00:34:04.117 [2024-07-13 08:20:55.721786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.117 [2024-07-13 08:20:55.721812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.117 qpair failed and we were unable to recover it. 00:34:04.117 [2024-07-13 08:20:55.721985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.117 [2024-07-13 08:20:55.722012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.117 qpair failed and we were unable to recover it. 00:34:04.117 [2024-07-13 08:20:55.722160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.117 [2024-07-13 08:20:55.722203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.117 qpair failed and we were unable to recover it. 00:34:04.117 [2024-07-13 08:20:55.722373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.117 [2024-07-13 08:20:55.722399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.117 qpair failed and we were unable to recover it. 00:34:04.117 [2024-07-13 08:20:55.722518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.117 [2024-07-13 08:20:55.722560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.117 qpair failed and we were unable to recover it. 
00:34:04.117 [2024-07-13 08:20:55.722750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.117 [2024-07-13 08:20:55.722778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.117 qpair failed and we were unable to recover it. 00:34:04.117 [2024-07-13 08:20:55.722939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.117 [2024-07-13 08:20:55.722969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.117 qpair failed and we were unable to recover it. 00:34:04.117 [2024-07-13 08:20:55.723135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.117 [2024-07-13 08:20:55.723161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.117 qpair failed and we were unable to recover it. 00:34:04.117 [2024-07-13 08:20:55.723273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.117 [2024-07-13 08:20:55.723327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.117 qpair failed and we were unable to recover it. 00:34:04.117 [2024-07-13 08:20:55.723528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.117 [2024-07-13 08:20:55.723554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.117 qpair failed and we were unable to recover it. 00:34:04.117 [2024-07-13 08:20:55.723732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.117 [2024-07-13 08:20:55.723759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.117 qpair failed and we were unable to recover it. 00:34:04.117 [2024-07-13 08:20:55.723877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.117 [2024-07-13 08:20:55.723904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.117 qpair failed and we were unable to recover it. 00:34:04.117 [2024-07-13 08:20:55.724027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.117 [2024-07-13 08:20:55.724054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.117 qpair failed and we were unable to recover it. 00:34:04.117 [2024-07-13 08:20:55.724252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.117 [2024-07-13 08:20:55.724281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.117 qpair failed and we were unable to recover it. 00:34:04.117 [2024-07-13 08:20:55.724451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.117 [2024-07-13 08:20:55.724480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.117 qpair failed and we were unable to recover it. 
00:34:04.117 [2024-07-13 08:20:55.724637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.117 [2024-07-13 08:20:55.724664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.117 qpair failed and we were unable to recover it. 00:34:04.117 [2024-07-13 08:20:55.724819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.117 [2024-07-13 08:20:55.724845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.117 qpair failed and we were unable to recover it. 00:34:04.117 [2024-07-13 08:20:55.725027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.117 [2024-07-13 08:20:55.725056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.117 qpair failed and we were unable to recover it. 00:34:04.117 [2024-07-13 08:20:55.725250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.117 [2024-07-13 08:20:55.725277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.117 qpair failed and we were unable to recover it. 00:34:04.117 [2024-07-13 08:20:55.725421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.118 [2024-07-13 08:20:55.725447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.118 qpair failed and we were unable to recover it. 00:34:04.118 [2024-07-13 08:20:55.725567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.118 [2024-07-13 08:20:55.725613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.118 qpair failed and we were unable to recover it. 00:34:04.118 [2024-07-13 08:20:55.725769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.118 [2024-07-13 08:20:55.725798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.118 qpair failed and we were unable to recover it. 00:34:04.118 [2024-07-13 08:20:55.725987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.118 [2024-07-13 08:20:55.726017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.118 qpair failed and we were unable to recover it. 00:34:04.118 [2024-07-13 08:20:55.726217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.118 [2024-07-13 08:20:55.726243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.118 qpair failed and we were unable to recover it. 00:34:04.118 [2024-07-13 08:20:55.726422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.118 [2024-07-13 08:20:55.726472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.118 qpair failed and we were unable to recover it. 
00:34:04.118 [2024-07-13 08:20:55.726630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.118 [2024-07-13 08:20:55.726663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.118 qpair failed and we were unable to recover it. 00:34:04.118 [2024-07-13 08:20:55.726806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.118 [2024-07-13 08:20:55.726837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.118 qpair failed and we were unable to recover it. 00:34:04.118 [2024-07-13 08:20:55.727086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.118 [2024-07-13 08:20:55.727113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.118 qpair failed and we were unable to recover it. 00:34:04.118 [2024-07-13 08:20:55.727433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.118 [2024-07-13 08:20:55.727494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.118 qpair failed and we were unable to recover it. 00:34:04.118 [2024-07-13 08:20:55.727684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.118 [2024-07-13 08:20:55.727712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.118 qpair failed and we were unable to recover it. 00:34:04.118 [2024-07-13 08:20:55.727880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.118 [2024-07-13 08:20:55.727919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.118 qpair failed and we were unable to recover it. 00:34:04.118 [2024-07-13 08:20:55.728103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.118 [2024-07-13 08:20:55.728131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.118 qpair failed and we were unable to recover it. 00:34:04.118 [2024-07-13 08:20:55.728445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.118 [2024-07-13 08:20:55.728512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.118 qpair failed and we were unable to recover it. 00:34:04.118 [2024-07-13 08:20:55.728721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.118 [2024-07-13 08:20:55.728748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.118 qpair failed and we were unable to recover it. 00:34:04.118 [2024-07-13 08:20:55.728892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.118 [2024-07-13 08:20:55.728930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.118 qpair failed and we were unable to recover it. 
00:34:04.118 [2024-07-13 08:20:55.729162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.118 [2024-07-13 08:20:55.729189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.118 qpair failed and we were unable to recover it. 00:34:04.118 [2024-07-13 08:20:55.729386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.118 [2024-07-13 08:20:55.729413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.118 qpair failed and we were unable to recover it. 00:34:04.118 [2024-07-13 08:20:55.729650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.118 [2024-07-13 08:20:55.729679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.118 qpair failed and we were unable to recover it. 00:34:04.118 [2024-07-13 08:20:55.729856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.118 [2024-07-13 08:20:55.729892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.118 qpair failed and we were unable to recover it. 00:34:04.118 [2024-07-13 08:20:55.730056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.118 [2024-07-13 08:20:55.730082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.118 qpair failed and we were unable to recover it. 00:34:04.118 [2024-07-13 08:20:55.730221] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xadb5b0 is same with the state(5) to be set 00:34:04.118 [2024-07-13 08:20:55.730488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.118 [2024-07-13 08:20:55.730532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.118 qpair failed and we were unable to recover it. 00:34:04.118 [2024-07-13 08:20:55.730680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.118 [2024-07-13 08:20:55.730717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.118 qpair failed and we were unable to recover it. 00:34:04.118 [2024-07-13 08:20:55.730862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.118 [2024-07-13 08:20:55.730898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.118 qpair failed and we were unable to recover it. 00:34:04.118 [2024-07-13 08:20:55.731016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.118 [2024-07-13 08:20:55.731043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.118 qpair failed and we were unable to recover it. 
00:34:04.118 [2024-07-13 08:20:55.731189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.118 [2024-07-13 08:20:55.731217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.118 qpair failed and we were unable to recover it.
[same three-line error repeated for every remaining qpair connection attempt, 2024-07-13 08:20:55.731 through 08:20:55.772, all against tqpair=0x7f8fec000b90 at 10.0.0.2:4420 with errno = 111]
00:34:04.123 [2024-07-13 08:20:55.772311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.123 [2024-07-13 08:20:55.772338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.123 qpair failed and we were unable to recover it.
00:34:04.123 [2024-07-13 08:20:55.772487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.123 [2024-07-13 08:20:55.772518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.123 qpair failed and we were unable to recover it. 00:34:04.123 [2024-07-13 08:20:55.772727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.123 [2024-07-13 08:20:55.772754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.123 qpair failed and we were unable to recover it. 00:34:04.123 [2024-07-13 08:20:55.772907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.123 [2024-07-13 08:20:55.772952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.123 qpair failed and we were unable to recover it. 00:34:04.123 [2024-07-13 08:20:55.773126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.124 [2024-07-13 08:20:55.773155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.124 qpair failed and we were unable to recover it. 00:34:04.124 [2024-07-13 08:20:55.773333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.124 [2024-07-13 08:20:55.773360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.124 qpair failed and we were unable to recover it. 00:34:04.124 [2024-07-13 08:20:55.773482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.124 [2024-07-13 08:20:55.773528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.124 qpair failed and we were unable to recover it. 00:34:04.124 [2024-07-13 08:20:55.773665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.124 [2024-07-13 08:20:55.773694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.124 qpair failed and we were unable to recover it. 00:34:04.124 [2024-07-13 08:20:55.773841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.124 [2024-07-13 08:20:55.773873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.124 qpair failed and we were unable to recover it. 00:34:04.124 [2024-07-13 08:20:55.774051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.124 [2024-07-13 08:20:55.774081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.124 qpair failed and we were unable to recover it. 00:34:04.124 [2024-07-13 08:20:55.774230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.124 [2024-07-13 08:20:55.774259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.124 qpair failed and we were unable to recover it. 
00:34:04.124 [2024-07-13 08:20:55.774423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.124 [2024-07-13 08:20:55.774449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.124 qpair failed and we were unable to recover it. 00:34:04.124 [2024-07-13 08:20:55.774620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.124 [2024-07-13 08:20:55.774650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.124 qpair failed and we were unable to recover it. 00:34:04.124 [2024-07-13 08:20:55.774814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.124 [2024-07-13 08:20:55.774843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.124 qpair failed and we were unable to recover it. 00:34:04.124 [2024-07-13 08:20:55.775021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.124 [2024-07-13 08:20:55.775047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.124 qpair failed and we were unable to recover it. 00:34:04.124 [2024-07-13 08:20:55.775191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.124 [2024-07-13 08:20:55.775222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.124 qpair failed and we were unable to recover it. 00:34:04.124 [2024-07-13 08:20:55.775403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.124 [2024-07-13 08:20:55.775432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.124 qpair failed and we were unable to recover it. 00:34:04.124 [2024-07-13 08:20:55.775609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.124 [2024-07-13 08:20:55.775636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.124 qpair failed and we were unable to recover it. 00:34:04.124 [2024-07-13 08:20:55.775780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.124 [2024-07-13 08:20:55.775809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.124 qpair failed and we were unable to recover it. 00:34:04.124 [2024-07-13 08:20:55.775951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.124 [2024-07-13 08:20:55.775981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.124 qpair failed and we were unable to recover it. 00:34:04.124 [2024-07-13 08:20:55.776191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.124 [2024-07-13 08:20:55.776218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.124 qpair failed and we were unable to recover it. 
00:34:04.124 [2024-07-13 08:20:55.776347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.124 [2024-07-13 08:20:55.776376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.124 qpair failed and we were unable to recover it. 00:34:04.124 [2024-07-13 08:20:55.776534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.124 [2024-07-13 08:20:55.776563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.124 qpair failed and we were unable to recover it. 00:34:04.124 [2024-07-13 08:20:55.776745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.124 [2024-07-13 08:20:55.776771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.124 qpair failed and we were unable to recover it. 00:34:04.124 [2024-07-13 08:20:55.776942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.124 [2024-07-13 08:20:55.776971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.124 qpair failed and we were unable to recover it. 00:34:04.124 [2024-07-13 08:20:55.777176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.124 [2024-07-13 08:20:55.777205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.124 qpair failed and we were unable to recover it. 00:34:04.124 [2024-07-13 08:20:55.777384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.124 [2024-07-13 08:20:55.777411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.124 qpair failed and we were unable to recover it. 00:34:04.124 [2024-07-13 08:20:55.777590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.124 [2024-07-13 08:20:55.777619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.124 qpair failed and we were unable to recover it. 00:34:04.124 [2024-07-13 08:20:55.777841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.124 [2024-07-13 08:20:55.777875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.124 qpair failed and we were unable to recover it. 00:34:04.124 [2024-07-13 08:20:55.778021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.124 [2024-07-13 08:20:55.778048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.124 qpair failed and we were unable to recover it. 00:34:04.124 [2024-07-13 08:20:55.778227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.124 [2024-07-13 08:20:55.778255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.124 qpair failed and we were unable to recover it. 
00:34:04.124 [2024-07-13 08:20:55.778375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.124 [2024-07-13 08:20:55.778402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.124 qpair failed and we were unable to recover it. 00:34:04.124 [2024-07-13 08:20:55.778549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.124 [2024-07-13 08:20:55.778576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.124 qpair failed and we were unable to recover it. 00:34:04.124 [2024-07-13 08:20:55.778749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.124 [2024-07-13 08:20:55.778776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.124 qpair failed and we were unable to recover it. 00:34:04.124 [2024-07-13 08:20:55.778919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.124 [2024-07-13 08:20:55.778949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.124 qpair failed and we were unable to recover it. 00:34:04.124 [2024-07-13 08:20:55.779111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.124 [2024-07-13 08:20:55.779137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.124 qpair failed and we were unable to recover it. 00:34:04.124 [2024-07-13 08:20:55.779288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.124 [2024-07-13 08:20:55.779314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.124 qpair failed and we were unable to recover it. 00:34:04.124 [2024-07-13 08:20:55.779463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.124 [2024-07-13 08:20:55.779489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.124 qpair failed and we were unable to recover it. 00:34:04.124 [2024-07-13 08:20:55.779671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.124 [2024-07-13 08:20:55.779698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.124 qpair failed and we were unable to recover it. 00:34:04.124 [2024-07-13 08:20:55.779854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.124 [2024-07-13 08:20:55.779887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.125 qpair failed and we were unable to recover it. 00:34:04.125 [2024-07-13 08:20:55.780033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.125 [2024-07-13 08:20:55.780059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.125 qpair failed and we were unable to recover it. 
00:34:04.125 [2024-07-13 08:20:55.780217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.125 [2024-07-13 08:20:55.780244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.125 qpair failed and we were unable to recover it. 00:34:04.125 [2024-07-13 08:20:55.780400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.125 [2024-07-13 08:20:55.780427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.125 qpair failed and we were unable to recover it. 00:34:04.125 [2024-07-13 08:20:55.780606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.125 [2024-07-13 08:20:55.780632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.125 qpair failed and we were unable to recover it. 00:34:04.125 [2024-07-13 08:20:55.780777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.125 [2024-07-13 08:20:55.780803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.125 qpair failed and we were unable to recover it. 00:34:04.125 [2024-07-13 08:20:55.780951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.125 [2024-07-13 08:20:55.780979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.125 qpair failed and we were unable to recover it. 00:34:04.125 [2024-07-13 08:20:55.781100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.125 [2024-07-13 08:20:55.781127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.125 qpair failed and we were unable to recover it. 00:34:04.125 [2024-07-13 08:20:55.781298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.125 [2024-07-13 08:20:55.781325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.125 qpair failed and we were unable to recover it. 00:34:04.125 [2024-07-13 08:20:55.781440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.125 [2024-07-13 08:20:55.781466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.125 qpair failed and we were unable to recover it. 00:34:04.125 [2024-07-13 08:20:55.781642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.125 [2024-07-13 08:20:55.781672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.125 qpair failed and we were unable to recover it. 00:34:04.125 [2024-07-13 08:20:55.781836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.125 [2024-07-13 08:20:55.781863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.125 qpair failed and we were unable to recover it. 
00:34:04.125 [2024-07-13 08:20:55.782025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.125 [2024-07-13 08:20:55.782052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.125 qpair failed and we were unable to recover it. 00:34:04.125 [2024-07-13 08:20:55.782205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.125 [2024-07-13 08:20:55.782233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.125 qpair failed and we were unable to recover it. 00:34:04.125 [2024-07-13 08:20:55.782413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.125 [2024-07-13 08:20:55.782440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.125 qpair failed and we were unable to recover it. 00:34:04.125 [2024-07-13 08:20:55.782614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.125 [2024-07-13 08:20:55.782641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.125 qpair failed and we were unable to recover it. 00:34:04.125 [2024-07-13 08:20:55.782786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.125 [2024-07-13 08:20:55.782817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.125 qpair failed and we were unable to recover it. 00:34:04.125 [2024-07-13 08:20:55.782998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.125 [2024-07-13 08:20:55.783026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.125 qpair failed and we were unable to recover it. 00:34:04.125 [2024-07-13 08:20:55.783209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.125 [2024-07-13 08:20:55.783236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.125 qpair failed and we were unable to recover it. 00:34:04.125 [2024-07-13 08:20:55.783382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.125 [2024-07-13 08:20:55.783412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.125 qpair failed and we were unable to recover it. 00:34:04.125 [2024-07-13 08:20:55.783594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.125 [2024-07-13 08:20:55.783621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.125 qpair failed and we were unable to recover it. 00:34:04.125 [2024-07-13 08:20:55.783775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.125 [2024-07-13 08:20:55.783802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.125 qpair failed and we were unable to recover it. 
00:34:04.125 [2024-07-13 08:20:55.783924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.125 [2024-07-13 08:20:55.783952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.125 qpair failed and we were unable to recover it. 00:34:04.125 [2024-07-13 08:20:55.784111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.125 [2024-07-13 08:20:55.784143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.125 qpair failed and we were unable to recover it. 00:34:04.125 [2024-07-13 08:20:55.784319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.125 [2024-07-13 08:20:55.784349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.125 qpair failed and we were unable to recover it. 00:34:04.125 [2024-07-13 08:20:55.784508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.125 [2024-07-13 08:20:55.784538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.125 qpair failed and we were unable to recover it. 00:34:04.125 [2024-07-13 08:20:55.784738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.125 [2024-07-13 08:20:55.784765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.125 qpair failed and we were unable to recover it. 00:34:04.125 [2024-07-13 08:20:55.784918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.125 [2024-07-13 08:20:55.784946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.125 qpair failed and we were unable to recover it. 00:34:04.125 [2024-07-13 08:20:55.785073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.125 [2024-07-13 08:20:55.785100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.125 qpair failed and we were unable to recover it. 00:34:04.125 [2024-07-13 08:20:55.785279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.125 [2024-07-13 08:20:55.785306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.125 qpair failed and we were unable to recover it. 00:34:04.125 [2024-07-13 08:20:55.785456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.125 [2024-07-13 08:20:55.785484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.125 qpair failed and we were unable to recover it. 00:34:04.125 [2024-07-13 08:20:55.785653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.125 [2024-07-13 08:20:55.785683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.125 qpair failed and we were unable to recover it. 
00:34:04.125 [2024-07-13 08:20:55.785836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.125 [2024-07-13 08:20:55.785863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.125 qpair failed and we were unable to recover it. 00:34:04.125 [2024-07-13 08:20:55.785993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.125 [2024-07-13 08:20:55.786020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.125 qpair failed and we were unable to recover it. 00:34:04.125 [2024-07-13 08:20:55.786173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.125 [2024-07-13 08:20:55.786199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.125 qpair failed and we were unable to recover it. 00:34:04.125 [2024-07-13 08:20:55.786314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.125 [2024-07-13 08:20:55.786341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.125 qpair failed and we were unable to recover it. 00:34:04.125 [2024-07-13 08:20:55.786494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.125 [2024-07-13 08:20:55.786521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.125 qpair failed and we were unable to recover it. 00:34:04.125 [2024-07-13 08:20:55.786724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.125 [2024-07-13 08:20:55.786754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.125 qpair failed and we were unable to recover it. 00:34:04.125 [2024-07-13 08:20:55.786908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.125 [2024-07-13 08:20:55.786936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.125 qpair failed and we were unable to recover it. 00:34:04.125 [2024-07-13 08:20:55.787086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.125 [2024-07-13 08:20:55.787123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.125 qpair failed and we were unable to recover it. 00:34:04.125 [2024-07-13 08:20:55.787276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.125 [2024-07-13 08:20:55.787303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.125 qpair failed and we were unable to recover it. 00:34:04.125 [2024-07-13 08:20:55.787456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.126 [2024-07-13 08:20:55.787483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.126 qpair failed and we were unable to recover it. 
00:34:04.126 [2024-07-13 08:20:55.787605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.126 [2024-07-13 08:20:55.787632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.126 qpair failed and we were unable to recover it. 00:34:04.126 [2024-07-13 08:20:55.787754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.126 [2024-07-13 08:20:55.787782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.126 qpair failed and we were unable to recover it. 00:34:04.126 [2024-07-13 08:20:55.787932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.126 [2024-07-13 08:20:55.787960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.126 qpair failed and we were unable to recover it. 00:34:04.126 [2024-07-13 08:20:55.788111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.126 [2024-07-13 08:20:55.788144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.126 qpair failed and we were unable to recover it. 00:34:04.126 [2024-07-13 08:20:55.788271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.126 [2024-07-13 08:20:55.788314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.126 qpair failed and we were unable to recover it. 00:34:04.126 [2024-07-13 08:20:55.788483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.126 [2024-07-13 08:20:55.788513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.126 qpair failed and we were unable to recover it. 00:34:04.126 [2024-07-13 08:20:55.788679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.126 [2024-07-13 08:20:55.788708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.126 qpair failed and we were unable to recover it. 00:34:04.126 [2024-07-13 08:20:55.788876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.126 [2024-07-13 08:20:55.788923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.126 qpair failed and we were unable to recover it. 00:34:04.126 [2024-07-13 08:20:55.789096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.126 [2024-07-13 08:20:55.789122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.126 qpair failed and we were unable to recover it. 00:34:04.126 [2024-07-13 08:20:55.789253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.126 [2024-07-13 08:20:55.789280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.126 qpair failed and we were unable to recover it. 
00:34:04.126 [2024-07-13 08:20:55.789408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.126 [2024-07-13 08:20:55.789436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.126 qpair failed and we were unable to recover it. 00:34:04.126 [2024-07-13 08:20:55.789621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.126 [2024-07-13 08:20:55.789648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.126 qpair failed and we were unable to recover it. 00:34:04.126 [2024-07-13 08:20:55.789824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.126 [2024-07-13 08:20:55.789855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.126 qpair failed and we were unable to recover it. 00:34:04.126 [2024-07-13 08:20:55.790031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.126 [2024-07-13 08:20:55.790058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.126 qpair failed and we were unable to recover it. 00:34:04.126 [2024-07-13 08:20:55.790213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.126 [2024-07-13 08:20:55.790244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.126 qpair failed and we were unable to recover it. 00:34:04.126 [2024-07-13 08:20:55.790396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.126 [2024-07-13 08:20:55.790423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.126 qpair failed and we were unable to recover it. 00:34:04.126 [2024-07-13 08:20:55.790551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.126 [2024-07-13 08:20:55.790578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.126 qpair failed and we were unable to recover it. 00:34:04.126 [2024-07-13 08:20:55.790738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.126 [2024-07-13 08:20:55.790767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.126 qpair failed and we were unable to recover it. 00:34:04.126 [2024-07-13 08:20:55.790925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.126 [2024-07-13 08:20:55.790952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.126 qpair failed and we were unable to recover it. 00:34:04.126 [2024-07-13 08:20:55.791108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.126 [2024-07-13 08:20:55.791145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.126 qpair failed and we were unable to recover it. 
00:34:04.126 [2024-07-13 08:20:55.791290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.126 [2024-07-13 08:20:55.791317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.126 qpair failed and we were unable to recover it. 00:34:04.126 [2024-07-13 08:20:55.791495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.126 [2024-07-13 08:20:55.791522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.126 qpair failed and we were unable to recover it. 00:34:04.126 [2024-07-13 08:20:55.791673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.126 [2024-07-13 08:20:55.791700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.126 qpair failed and we were unable to recover it. 00:34:04.126 [2024-07-13 08:20:55.791911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.126 [2024-07-13 08:20:55.791938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.126 qpair failed and we were unable to recover it. 00:34:04.126 [2024-07-13 08:20:55.792091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.126 [2024-07-13 08:20:55.792118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.126 qpair failed and we were unable to recover it. 00:34:04.126 [2024-07-13 08:20:55.792242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.126 [2024-07-13 08:20:55.792270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.126 qpair failed and we were unable to recover it. 00:34:04.126 [2024-07-13 08:20:55.792399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.126 [2024-07-13 08:20:55.792427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.126 qpair failed and we were unable to recover it. 00:34:04.126 [2024-07-13 08:20:55.792602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.126 [2024-07-13 08:20:55.792629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.126 qpair failed and we were unable to recover it. 00:34:04.126 [2024-07-13 08:20:55.792759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.126 [2024-07-13 08:20:55.792786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.126 qpair failed and we were unable to recover it. 00:34:04.126 [2024-07-13 08:20:55.792909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.126 [2024-07-13 08:20:55.792934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.126 qpair failed and we were unable to recover it. 
00:34:04.126 [2024-07-13 08:20:55.793086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.126 [2024-07-13 08:20:55.793121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.126 qpair failed and we were unable to recover it. 00:34:04.126 [2024-07-13 08:20:55.793249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.126 [2024-07-13 08:20:55.793277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.126 qpair failed and we were unable to recover it. 00:34:04.126 [2024-07-13 08:20:55.793456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.126 [2024-07-13 08:20:55.793483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.126 qpair failed and we were unable to recover it. 00:34:04.126 [2024-07-13 08:20:55.793654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.126 [2024-07-13 08:20:55.793697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.126 qpair failed and we were unable to recover it. 00:34:04.126 [2024-07-13 08:20:55.793859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.126 [2024-07-13 08:20:55.793894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.126 qpair failed and we were unable to recover it. 00:34:04.126 [2024-07-13 08:20:55.794063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.126 [2024-07-13 08:20:55.794090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.126 qpair failed and we were unable to recover it. 00:34:04.126 [2024-07-13 08:20:55.794244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.126 [2024-07-13 08:20:55.794271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.126 qpair failed and we were unable to recover it. 00:34:04.126 [2024-07-13 08:20:55.794421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.126 [2024-07-13 08:20:55.794447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.126 qpair failed and we were unable to recover it. 00:34:04.126 [2024-07-13 08:20:55.794595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.126 [2024-07-13 08:20:55.794621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.126 qpair failed and we were unable to recover it. 00:34:04.126 [2024-07-13 08:20:55.794780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.126 [2024-07-13 08:20:55.794806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.126 qpair failed and we were unable to recover it. 
00:34:04.127 [2024-07-13 08:20:55.794966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.127 [2024-07-13 08:20:55.794992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.127 qpair failed and we were unable to recover it. 00:34:04.127 [2024-07-13 08:20:55.795114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.127 [2024-07-13 08:20:55.795141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.127 qpair failed and we were unable to recover it. 00:34:04.127 [2024-07-13 08:20:55.795288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.127 [2024-07-13 08:20:55.795314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.127 qpair failed and we were unable to recover it. 00:34:04.127 [2024-07-13 08:20:55.795437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.127 [2024-07-13 08:20:55.795465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.127 qpair failed and we were unable to recover it. 00:34:04.127 [2024-07-13 08:20:55.795619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.127 [2024-07-13 08:20:55.795647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.127 qpair failed and we were unable to recover it. 00:34:04.127 [2024-07-13 08:20:55.795827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.127 [2024-07-13 08:20:55.795855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.127 qpair failed and we were unable to recover it. 00:34:04.127 [2024-07-13 08:20:55.795998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.127 [2024-07-13 08:20:55.796025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.127 qpair failed and we were unable to recover it. 00:34:04.127 [2024-07-13 08:20:55.796180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.127 [2024-07-13 08:20:55.796208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.127 qpair failed and we were unable to recover it. 00:34:04.127 [2024-07-13 08:20:55.796390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.127 [2024-07-13 08:20:55.796421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.127 qpair failed and we were unable to recover it. 00:34:04.127 [2024-07-13 08:20:55.796587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.127 [2024-07-13 08:20:55.796617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.127 qpair failed and we were unable to recover it. 
00:34:04.127 [2024-07-13 08:20:55.796788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.127 [2024-07-13 08:20:55.796815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.127 qpair failed and we were unable to recover it. 00:34:04.127 [2024-07-13 08:20:55.796964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.127 [2024-07-13 08:20:55.796991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.127 qpair failed and we were unable to recover it. 00:34:04.127 [2024-07-13 08:20:55.797176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.127 [2024-07-13 08:20:55.797204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.127 qpair failed and we were unable to recover it. 00:34:04.127 [2024-07-13 08:20:55.797379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.127 [2024-07-13 08:20:55.797406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.127 qpair failed and we were unable to recover it. 00:34:04.127 [2024-07-13 08:20:55.797588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.127 [2024-07-13 08:20:55.797619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.127 qpair failed and we were unable to recover it. 00:34:04.127 [2024-07-13 08:20:55.797768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.127 [2024-07-13 08:20:55.797798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.127 qpair failed and we were unable to recover it. 00:34:04.127 [2024-07-13 08:20:55.797950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.127 [2024-07-13 08:20:55.797978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.127 qpair failed and we were unable to recover it. 00:34:04.127 [2024-07-13 08:20:55.798154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.127 [2024-07-13 08:20:55.798190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.127 qpair failed and we were unable to recover it. 00:34:04.127 [2024-07-13 08:20:55.798337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.127 [2024-07-13 08:20:55.798366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.127 qpair failed and we were unable to recover it. 00:34:04.127 [2024-07-13 08:20:55.798559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.127 [2024-07-13 08:20:55.798586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.127 qpair failed and we were unable to recover it. 
00:34:04.127 [2024-07-13 08:20:55.798786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.127 [2024-07-13 08:20:55.798812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.127 qpair failed and we were unable to recover it. 00:34:04.127 [2024-07-13 08:20:55.798964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.127 [2024-07-13 08:20:55.798991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.127 qpair failed and we were unable to recover it. 00:34:04.127 [2024-07-13 08:20:55.799141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.127 [2024-07-13 08:20:55.799168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.127 qpair failed and we were unable to recover it. 00:34:04.127 [2024-07-13 08:20:55.799343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.127 [2024-07-13 08:20:55.799371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.127 qpair failed and we were unable to recover it. 00:34:04.127 [2024-07-13 08:20:55.799498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.127 [2024-07-13 08:20:55.799526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.127 qpair failed and we were unable to recover it. 00:34:04.127 [2024-07-13 08:20:55.799679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.127 [2024-07-13 08:20:55.799710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.127 qpair failed and we were unable to recover it. 00:34:04.127 [2024-07-13 08:20:55.799833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.127 [2024-07-13 08:20:55.799861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.127 qpair failed and we were unable to recover it. 00:34:04.127 [2024-07-13 08:20:55.800018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.127 [2024-07-13 08:20:55.800045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.127 qpair failed and we were unable to recover it. 00:34:04.127 [2024-07-13 08:20:55.800202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.127 [2024-07-13 08:20:55.800229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.127 qpair failed and we were unable to recover it. 00:34:04.127 [2024-07-13 08:20:55.800414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.127 [2024-07-13 08:20:55.800441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.127 qpair failed and we were unable to recover it. 
00:34:04.127 [2024-07-13 08:20:55.800594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.127 [2024-07-13 08:20:55.800621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.127 qpair failed and we were unable to recover it. 00:34:04.127 [2024-07-13 08:20:55.800769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.127 [2024-07-13 08:20:55.800796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.127 qpair failed and we were unable to recover it. 00:34:04.127 [2024-07-13 08:20:55.800980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.127 [2024-07-13 08:20:55.801007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.127 qpair failed and we were unable to recover it. 00:34:04.127 [2024-07-13 08:20:55.801215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.127 [2024-07-13 08:20:55.801245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.127 qpair failed and we were unable to recover it. 00:34:04.127 [2024-07-13 08:20:55.801411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.127 [2024-07-13 08:20:55.801438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.127 qpair failed and we were unable to recover it. 00:34:04.127 [2024-07-13 08:20:55.801588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.127 [2024-07-13 08:20:55.801616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.127 qpair failed and we were unable to recover it. 00:34:04.127 [2024-07-13 08:20:55.801736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.127 [2024-07-13 08:20:55.801763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.127 qpair failed and we were unable to recover it. 00:34:04.127 [2024-07-13 08:20:55.801919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.127 [2024-07-13 08:20:55.801947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.127 qpair failed and we were unable to recover it. 00:34:04.127 [2024-07-13 08:20:55.802095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.127 [2024-07-13 08:20:55.802123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.127 qpair failed and we were unable to recover it. 00:34:04.127 [2024-07-13 08:20:55.802268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.127 [2024-07-13 08:20:55.802295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.127 qpair failed and we were unable to recover it. 
00:34:04.128 [2024-07-13 08:20:55.802449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.128 [2024-07-13 08:20:55.802477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.128 qpair failed and we were unable to recover it. 00:34:04.128 [2024-07-13 08:20:55.802652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.128 [2024-07-13 08:20:55.802683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.128 qpair failed and we were unable to recover it. 00:34:04.128 [2024-07-13 08:20:55.802855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.128 [2024-07-13 08:20:55.802891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.128 qpair failed and we were unable to recover it. 00:34:04.128 [2024-07-13 08:20:55.803027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.128 [2024-07-13 08:20:55.803054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.128 qpair failed and we were unable to recover it. 00:34:04.128 [2024-07-13 08:20:55.803203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.128 [2024-07-13 08:20:55.803230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.128 qpair failed and we were unable to recover it. 00:34:04.128 [2024-07-13 08:20:55.803395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.128 [2024-07-13 08:20:55.803421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.128 qpair failed and we were unable to recover it. 00:34:04.128 [2024-07-13 08:20:55.803575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.128 [2024-07-13 08:20:55.803601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.128 qpair failed and we were unable to recover it. 00:34:04.128 [2024-07-13 08:20:55.803752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.128 [2024-07-13 08:20:55.803779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.128 qpair failed and we were unable to recover it. 00:34:04.128 [2024-07-13 08:20:55.803935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.128 [2024-07-13 08:20:55.803962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.128 qpair failed and we were unable to recover it. 00:34:04.128 [2024-07-13 08:20:55.804109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.128 [2024-07-13 08:20:55.804145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.128 qpair failed and we were unable to recover it. 
00:34:04.128 [2024-07-13 08:20:55.804323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.128 [2024-07-13 08:20:55.804350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.128 qpair failed and we were unable to recover it. 00:34:04.128 [2024-07-13 08:20:55.804506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.128 [2024-07-13 08:20:55.804533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.128 qpair failed and we were unable to recover it. 00:34:04.128 [2024-07-13 08:20:55.804677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.128 [2024-07-13 08:20:55.804704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.128 qpair failed and we were unable to recover it. 00:34:04.128 [2024-07-13 08:20:55.804859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.128 [2024-07-13 08:20:55.804891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.128 qpair failed and we were unable to recover it. 00:34:04.128 [2024-07-13 08:20:55.805072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.128 [2024-07-13 08:20:55.805106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.128 qpair failed and we were unable to recover it. 00:34:04.128 [2024-07-13 08:20:55.805246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.128 [2024-07-13 08:20:55.805275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.128 qpair failed and we were unable to recover it. 00:34:04.128 [2024-07-13 08:20:55.805455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.128 [2024-07-13 08:20:55.805482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.128 qpair failed and we were unable to recover it. 00:34:04.128 [2024-07-13 08:20:55.805636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.128 [2024-07-13 08:20:55.805663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.128 qpair failed and we were unable to recover it. 00:34:04.128 [2024-07-13 08:20:55.805789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.128 [2024-07-13 08:20:55.805816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.128 qpair failed and we were unable to recover it. 00:34:04.128 [2024-07-13 08:20:55.805940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.128 [2024-07-13 08:20:55.805967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.128 qpair failed and we were unable to recover it. 
00:34:04.128 [2024-07-13 08:20:55.806116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.128 [2024-07-13 08:20:55.806154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.128 qpair failed and we were unable to recover it. 00:34:04.128 [2024-07-13 08:20:55.806280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.128 [2024-07-13 08:20:55.806307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.128 qpair failed and we were unable to recover it. 00:34:04.128 [2024-07-13 08:20:55.806459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.128 [2024-07-13 08:20:55.806486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.128 qpair failed and we were unable to recover it. 00:34:04.128 [2024-07-13 08:20:55.806605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.128 [2024-07-13 08:20:55.806633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.128 qpair failed and we were unable to recover it. 00:34:04.128 [2024-07-13 08:20:55.806785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.128 [2024-07-13 08:20:55.806812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.128 qpair failed and we were unable to recover it. 00:34:04.128 [2024-07-13 08:20:55.806963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.128 [2024-07-13 08:20:55.806990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.128 qpair failed and we were unable to recover it. 00:34:04.128 [2024-07-13 08:20:55.807141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.128 [2024-07-13 08:20:55.807168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.128 qpair failed and we were unable to recover it. 00:34:04.413 [2024-07-13 08:20:55.807321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.413 [2024-07-13 08:20:55.807348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.413 qpair failed and we were unable to recover it. 00:34:04.413 [2024-07-13 08:20:55.807508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.413 [2024-07-13 08:20:55.807536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.413 qpair failed and we were unable to recover it. 00:34:04.413 [2024-07-13 08:20:55.807655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.413 [2024-07-13 08:20:55.807682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.413 qpair failed and we were unable to recover it. 
00:34:04.413 [2024-07-13 08:20:55.807863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.413 [2024-07-13 08:20:55.807921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.413 qpair failed and we were unable to recover it. 00:34:04.413 [2024-07-13 08:20:55.808104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.413 [2024-07-13 08:20:55.808144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.413 qpair failed and we were unable to recover it. 00:34:04.413 [2024-07-13 08:20:55.808324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.413 [2024-07-13 08:20:55.808354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.413 qpair failed and we were unable to recover it. 00:34:04.413 [2024-07-13 08:20:55.808531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.413 [2024-07-13 08:20:55.808558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.413 qpair failed and we were unable to recover it. 00:34:04.413 [2024-07-13 08:20:55.808713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.413 [2024-07-13 08:20:55.808740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.413 qpair failed and we were unable to recover it. 00:34:04.413 [2024-07-13 08:20:55.808855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.413 [2024-07-13 08:20:55.808889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.413 qpair failed and we were unable to recover it. 00:34:04.413 [2024-07-13 08:20:55.809047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.413 [2024-07-13 08:20:55.809074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.413 qpair failed and we were unable to recover it. 00:34:04.413 [2024-07-13 08:20:55.809206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.413 [2024-07-13 08:20:55.809233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.413 qpair failed and we were unable to recover it. 00:34:04.413 [2024-07-13 08:20:55.809367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.413 [2024-07-13 08:20:55.809394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.413 qpair failed and we were unable to recover it. 00:34:04.413 [2024-07-13 08:20:55.809544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.413 [2024-07-13 08:20:55.809572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.413 qpair failed and we were unable to recover it. 
00:34:04.413 [2024-07-13 08:20:55.809742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.413 [2024-07-13 08:20:55.809772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.413 qpair failed and we were unable to recover it. 00:34:04.413 [2024-07-13 08:20:55.809976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.413 [2024-07-13 08:20:55.810006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.413 qpair failed and we were unable to recover it. 00:34:04.413 [2024-07-13 08:20:55.810184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.413 [2024-07-13 08:20:55.810212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.413 qpair failed and we were unable to recover it. 00:34:04.413 [2024-07-13 08:20:55.810365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.413 [2024-07-13 08:20:55.810392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.413 qpair failed and we were unable to recover it. 00:34:04.413 [2024-07-13 08:20:55.810569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.413 [2024-07-13 08:20:55.810600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.413 qpair failed and we were unable to recover it. 00:34:04.413 [2024-07-13 08:20:55.810737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.413 [2024-07-13 08:20:55.810764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.413 qpair failed and we were unable to recover it. 00:34:04.413 [2024-07-13 08:20:55.810927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.413 [2024-07-13 08:20:55.810954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.413 qpair failed and we were unable to recover it. 00:34:04.413 [2024-07-13 08:20:55.811133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.413 [2024-07-13 08:20:55.811160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.413 qpair failed and we were unable to recover it. 00:34:04.413 [2024-07-13 08:20:55.811285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.413 [2024-07-13 08:20:55.811312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.413 qpair failed and we were unable to recover it. 00:34:04.413 [2024-07-13 08:20:55.811490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.413 [2024-07-13 08:20:55.811517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.413 qpair failed and we were unable to recover it. 
00:34:04.413 [2024-07-13 08:20:55.811694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.413 [2024-07-13 08:20:55.811721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.413 qpair failed and we were unable to recover it. 00:34:04.413 [2024-07-13 08:20:55.811873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.413 [2024-07-13 08:20:55.811901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.413 qpair failed and we were unable to recover it. 00:34:04.413 [2024-07-13 08:20:55.812075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.413 [2024-07-13 08:20:55.812101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.413 qpair failed and we were unable to recover it. 00:34:04.413 [2024-07-13 08:20:55.812257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.413 [2024-07-13 08:20:55.812285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.413 qpair failed and we were unable to recover it. 00:34:04.413 [2024-07-13 08:20:55.812437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.413 [2024-07-13 08:20:55.812468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.413 qpair failed and we were unable to recover it. 00:34:04.413 [2024-07-13 08:20:55.812614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.413 [2024-07-13 08:20:55.812641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.413 qpair failed and we were unable to recover it. 00:34:04.413 [2024-07-13 08:20:55.812791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.413 [2024-07-13 08:20:55.812826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.413 qpair failed and we were unable to recover it. 00:34:04.413 [2024-07-13 08:20:55.812984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.413 [2024-07-13 08:20:55.813011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.413 qpair failed and we were unable to recover it. 00:34:04.413 [2024-07-13 08:20:55.813132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.413 [2024-07-13 08:20:55.813159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.413 qpair failed and we were unable to recover it. 00:34:04.413 [2024-07-13 08:20:55.813305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.413 [2024-07-13 08:20:55.813349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.413 qpair failed and we were unable to recover it. 
00:34:04.413 [2024-07-13 08:20:55.813513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.413 [2024-07-13 08:20:55.813540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.413 qpair failed and we were unable to recover it. 00:34:04.413 [2024-07-13 08:20:55.813691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.413 [2024-07-13 08:20:55.813718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.413 qpair failed and we were unable to recover it. 00:34:04.413 [2024-07-13 08:20:55.813898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.413 [2024-07-13 08:20:55.813951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.413 qpair failed and we were unable to recover it. 00:34:04.413 [2024-07-13 08:20:55.814101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.413 [2024-07-13 08:20:55.814138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.413 qpair failed and we were unable to recover it. 00:34:04.413 [2024-07-13 08:20:55.814316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.413 [2024-07-13 08:20:55.814343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.413 qpair failed and we were unable to recover it. 00:34:04.413 [2024-07-13 08:20:55.814466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.414 [2024-07-13 08:20:55.814493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.414 qpair failed and we were unable to recover it. 00:34:04.414 [2024-07-13 08:20:55.814636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.414 [2024-07-13 08:20:55.814663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.414 qpair failed and we were unable to recover it. 00:34:04.414 [2024-07-13 08:20:55.814836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.414 [2024-07-13 08:20:55.814863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.414 qpair failed and we were unable to recover it. 00:34:04.414 [2024-07-13 08:20:55.815018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.414 [2024-07-13 08:20:55.815048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.414 qpair failed and we were unable to recover it. 00:34:04.414 [2024-07-13 08:20:55.815238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.414 [2024-07-13 08:20:55.815265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.414 qpair failed and we were unable to recover it. 
00:34:04.414 [2024-07-13 08:20:55.815392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.414 [2024-07-13 08:20:55.815419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.414 qpair failed and we were unable to recover it. 00:34:04.414 [2024-07-13 08:20:55.815597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.414 [2024-07-13 08:20:55.815625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.414 qpair failed and we were unable to recover it. 00:34:04.414 [2024-07-13 08:20:55.815835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.414 [2024-07-13 08:20:55.815873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.414 qpair failed and we were unable to recover it. 00:34:04.414 [2024-07-13 08:20:55.816042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.414 [2024-07-13 08:20:55.816069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.414 qpair failed and we were unable to recover it. 00:34:04.414 [2024-07-13 08:20:55.816226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.414 [2024-07-13 08:20:55.816252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.414 qpair failed and we were unable to recover it. 00:34:04.414 [2024-07-13 08:20:55.816403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.414 [2024-07-13 08:20:55.816430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.414 qpair failed and we were unable to recover it. 00:34:04.414 [2024-07-13 08:20:55.816578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.414 [2024-07-13 08:20:55.816606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.414 qpair failed and we were unable to recover it. 00:34:04.414 [2024-07-13 08:20:55.816765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.414 [2024-07-13 08:20:55.816792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.414 qpair failed and we were unable to recover it. 00:34:04.414 [2024-07-13 08:20:55.816948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.414 [2024-07-13 08:20:55.816976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.414 qpair failed and we were unable to recover it. 00:34:04.414 [2024-07-13 08:20:55.817118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.414 [2024-07-13 08:20:55.817145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.414 qpair failed and we were unable to recover it. 
00:34:04.414 [2024-07-13 08:20:55.817290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.414 [2024-07-13 08:20:55.817316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.414 qpair failed and we were unable to recover it. 00:34:04.414 [2024-07-13 08:20:55.817491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.414 [2024-07-13 08:20:55.817532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:04.414 qpair failed and we were unable to recover it. 00:34:04.414 [2024-07-13 08:20:55.817745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.414 [2024-07-13 08:20:55.817791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:04.414 qpair failed and we were unable to recover it. 00:34:04.414 [2024-07-13 08:20:55.817953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.414 [2024-07-13 08:20:55.817981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:04.414 qpair failed and we were unable to recover it. 00:34:04.414 [2024-07-13 08:20:55.818166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.414 [2024-07-13 08:20:55.818195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:04.414 qpair failed and we were unable to recover it. 00:34:04.414 [2024-07-13 08:20:55.818399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.414 [2024-07-13 08:20:55.818445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:04.414 qpair failed and we were unable to recover it. 00:34:04.414 [2024-07-13 08:20:55.818617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.414 [2024-07-13 08:20:55.818665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:04.414 qpair failed and we were unable to recover it. 00:34:04.414 [2024-07-13 08:20:55.819073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.414 [2024-07-13 08:20:55.819103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.414 qpair failed and we were unable to recover it. 00:34:04.414 [2024-07-13 08:20:55.819328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.414 [2024-07-13 08:20:55.819358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.414 qpair failed and we were unable to recover it. 00:34:04.414 [2024-07-13 08:20:55.819503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.414 [2024-07-13 08:20:55.819533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.414 qpair failed and we were unable to recover it. 
00:34:04.414 [2024-07-13 08:20:55.819668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.414 [2024-07-13 08:20:55.819699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.414 qpair failed and we were unable to recover it. 00:34:04.414 [2024-07-13 08:20:55.819857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.414 [2024-07-13 08:20:55.819907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.414 qpair failed and we were unable to recover it. 00:34:04.414 [2024-07-13 08:20:55.820032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.414 [2024-07-13 08:20:55.820059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.414 qpair failed and we were unable to recover it. 00:34:04.414 [2024-07-13 08:20:55.820219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.414 [2024-07-13 08:20:55.820264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.414 qpair failed and we were unable to recover it. 00:34:04.414 [2024-07-13 08:20:55.820426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.414 [2024-07-13 08:20:55.820463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.414 qpair failed and we were unable to recover it. 00:34:04.414 [2024-07-13 08:20:55.820622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.414 [2024-07-13 08:20:55.820652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.414 qpair failed and we were unable to recover it. 00:34:04.414 [2024-07-13 08:20:55.820849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.414 [2024-07-13 08:20:55.820883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.414 qpair failed and we were unable to recover it. 00:34:04.414 [2024-07-13 08:20:55.821046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.414 [2024-07-13 08:20:55.821073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.414 qpair failed and we were unable to recover it. 00:34:04.414 [2024-07-13 08:20:55.821236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.414 [2024-07-13 08:20:55.821264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.414 qpair failed and we were unable to recover it. 00:34:04.414 [2024-07-13 08:20:55.821500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.414 [2024-07-13 08:20:55.821553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.414 qpair failed and we were unable to recover it. 
00:34:04.414 [2024-07-13 08:20:55.821722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.414 [2024-07-13 08:20:55.821752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.414 qpair failed and we were unable to recover it. 00:34:04.414 [2024-07-13 08:20:55.821933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.414 [2024-07-13 08:20:55.821960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.414 qpair failed and we were unable to recover it. 00:34:04.414 [2024-07-13 08:20:55.822113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.414 [2024-07-13 08:20:55.822142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.414 qpair failed and we were unable to recover it. 00:34:04.414 [2024-07-13 08:20:55.822294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.414 [2024-07-13 08:20:55.822321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.414 qpair failed and we were unable to recover it. 00:34:04.414 [2024-07-13 08:20:55.822474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.414 [2024-07-13 08:20:55.822501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.414 qpair failed and we were unable to recover it. 00:34:04.414 [2024-07-13 08:20:55.822690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.415 [2024-07-13 08:20:55.822719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.415 qpair failed and we were unable to recover it. 00:34:04.415 [2024-07-13 08:20:55.822876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.415 [2024-07-13 08:20:55.822921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.415 qpair failed and we were unable to recover it. 00:34:04.415 [2024-07-13 08:20:55.823072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.415 [2024-07-13 08:20:55.823098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.415 qpair failed and we were unable to recover it. 00:34:04.415 [2024-07-13 08:20:55.823266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.415 [2024-07-13 08:20:55.823298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:04.415 qpair failed and we were unable to recover it. 00:34:04.415 [2024-07-13 08:20:55.823518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.415 [2024-07-13 08:20:55.823562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:04.415 qpair failed and we were unable to recover it. 
00:34:04.415 [2024-07-13 08:20:55.823741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.415 [2024-07-13 08:20:55.823787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:04.415 qpair failed and we were unable to recover it. 00:34:04.415 [2024-07-13 08:20:55.823969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.415 [2024-07-13 08:20:55.823996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:04.415 qpair failed and we were unable to recover it. 00:34:04.415 [2024-07-13 08:20:55.824194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.415 [2024-07-13 08:20:55.824221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:04.415 qpair failed and we were unable to recover it. 00:34:04.415 [2024-07-13 08:20:55.824374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.415 [2024-07-13 08:20:55.824411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:04.415 qpair failed and we were unable to recover it. 00:34:04.415 [2024-07-13 08:20:55.824573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.415 [2024-07-13 08:20:55.824601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:04.415 qpair failed and we were unable to recover it. 00:34:04.415 [2024-07-13 08:20:55.824781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.415 [2024-07-13 08:20:55.824810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:04.415 qpair failed and we were unable to recover it. 00:34:04.415 [2024-07-13 08:20:55.824968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.415 [2024-07-13 08:20:55.824996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:04.415 qpair failed and we were unable to recover it. 00:34:04.415 [2024-07-13 08:20:55.825181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.415 [2024-07-13 08:20:55.825226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:04.415 qpair failed and we were unable to recover it. 00:34:04.415 [2024-07-13 08:20:55.825393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.415 [2024-07-13 08:20:55.825439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:04.415 qpair failed and we were unable to recover it. 00:34:04.415 [2024-07-13 08:20:55.825609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.415 [2024-07-13 08:20:55.825655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:04.415 qpair failed and we were unable to recover it. 
00:34:04.415 [2024-07-13 08:20:55.825789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.415 [2024-07-13 08:20:55.825817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:04.415 qpair failed and we were unable to recover it. 00:34:04.415 [2024-07-13 08:20:55.826058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.415 [2024-07-13 08:20:55.826104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.415 qpair failed and we were unable to recover it. 00:34:04.415 [2024-07-13 08:20:55.826318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.415 [2024-07-13 08:20:55.826349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.415 qpair failed and we were unable to recover it. 00:34:04.415 [2024-07-13 08:20:55.826518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.415 [2024-07-13 08:20:55.826547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.415 qpair failed and we were unable to recover it. 00:34:04.415 [2024-07-13 08:20:55.826858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.415 [2024-07-13 08:20:55.826923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.415 qpair failed and we were unable to recover it. 00:34:04.415 [2024-07-13 08:20:55.827125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.415 [2024-07-13 08:20:55.827151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.415 qpair failed and we were unable to recover it. 00:34:04.415 [2024-07-13 08:20:55.827282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.415 [2024-07-13 08:20:55.827309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.415 qpair failed and we were unable to recover it. 00:34:04.415 [2024-07-13 08:20:55.827467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.415 [2024-07-13 08:20:55.827493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.415 qpair failed and we were unable to recover it. 00:34:04.415 [2024-07-13 08:20:55.827697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.415 [2024-07-13 08:20:55.827726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.415 qpair failed and we were unable to recover it. 00:34:04.415 [2024-07-13 08:20:55.827852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.415 [2024-07-13 08:20:55.827890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.415 qpair failed and we were unable to recover it. 
00:34:04.415 [2024-07-13 08:20:55.828082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.415 [2024-07-13 08:20:55.828108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.415 qpair failed and we were unable to recover it. 00:34:04.415 [2024-07-13 08:20:55.828306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.415 [2024-07-13 08:20:55.828336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.415 qpair failed and we were unable to recover it. 00:34:04.415 [2024-07-13 08:20:55.828507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.415 [2024-07-13 08:20:55.828535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.415 qpair failed and we were unable to recover it. 00:34:04.415 [2024-07-13 08:20:55.828678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.415 [2024-07-13 08:20:55.828705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.415 qpair failed and we were unable to recover it. 00:34:04.415 [2024-07-13 08:20:55.828880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.415 [2024-07-13 08:20:55.828919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.415 qpair failed and we were unable to recover it. 00:34:04.415 [2024-07-13 08:20:55.829083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.415 [2024-07-13 08:20:55.829110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.415 qpair failed and we were unable to recover it. 00:34:04.415 [2024-07-13 08:20:55.829415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.415 [2024-07-13 08:20:55.829476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.415 qpair failed and we were unable to recover it. 00:34:04.415 [2024-07-13 08:20:55.829622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.415 [2024-07-13 08:20:55.829652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.415 qpair failed and we were unable to recover it. 00:34:04.415 [2024-07-13 08:20:55.829791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.415 [2024-07-13 08:20:55.829821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.415 qpair failed and we were unable to recover it. 00:34:04.415 [2024-07-13 08:20:55.830061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.415 [2024-07-13 08:20:55.830090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.415 qpair failed and we were unable to recover it. 
00:34:04.415 [2024-07-13 08:20:55.830294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.415 [2024-07-13 08:20:55.830324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.415 qpair failed and we were unable to recover it. 00:34:04.415 [2024-07-13 08:20:55.830522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.415 [2024-07-13 08:20:55.830572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.415 qpair failed and we were unable to recover it. 00:34:04.415 [2024-07-13 08:20:55.830714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.415 [2024-07-13 08:20:55.830741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.415 qpair failed and we were unable to recover it. 00:34:04.415 [2024-07-13 08:20:55.830903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.415 [2024-07-13 08:20:55.830931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.415 qpair failed and we were unable to recover it. 00:34:04.415 [2024-07-13 08:20:55.831108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.415 [2024-07-13 08:20:55.831135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.415 qpair failed and we were unable to recover it. 00:34:04.415 [2024-07-13 08:20:55.831317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.415 [2024-07-13 08:20:55.831386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.415 qpair failed and we were unable to recover it. 00:34:04.415 [2024-07-13 08:20:55.831551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.416 [2024-07-13 08:20:55.831580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.416 qpair failed and we were unable to recover it. 00:34:04.416 [2024-07-13 08:20:55.831779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.416 [2024-07-13 08:20:55.831808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.416 qpair failed and we were unable to recover it. 00:34:04.416 [2024-07-13 08:20:55.831947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.416 [2024-07-13 08:20:55.831979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.416 qpair failed and we were unable to recover it. 00:34:04.416 [2024-07-13 08:20:55.832125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.416 [2024-07-13 08:20:55.832152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.416 qpair failed and we were unable to recover it. 
00:34:04.420 [2024-07-13 08:20:55.866896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.420 [2024-07-13 08:20:55.866924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.420 qpair failed and we were unable to recover it. 00:34:04.420 [2024-07-13 08:20:55.867069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.420 [2024-07-13 08:20:55.867096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.420 qpair failed and we were unable to recover it. 00:34:04.420 [2024-07-13 08:20:55.867248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.420 [2024-07-13 08:20:55.867291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.420 qpair failed and we were unable to recover it. 00:34:04.420 [2024-07-13 08:20:55.867458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.420 [2024-07-13 08:20:55.867484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.420 qpair failed and we were unable to recover it. 00:34:04.420 [2024-07-13 08:20:55.867661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.420 [2024-07-13 08:20:55.867687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.420 qpair failed and we were unable to recover it. 00:34:04.420 [2024-07-13 08:20:55.867816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.420 [2024-07-13 08:20:55.867858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:04.420 qpair failed and we were unable to recover it. 00:34:04.420 [2024-07-13 08:20:55.868032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.420 [2024-07-13 08:20:55.868061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:04.420 qpair failed and we were unable to recover it. 00:34:04.420 [2024-07-13 08:20:55.868200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.420 [2024-07-13 08:20:55.868228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:04.420 qpair failed and we were unable to recover it. 00:34:04.420 [2024-07-13 08:20:55.868387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.420 [2024-07-13 08:20:55.868416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:04.420 qpair failed and we were unable to recover it. 00:34:04.420 [2024-07-13 08:20:55.868572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.420 [2024-07-13 08:20:55.868599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:04.420 qpair failed and we were unable to recover it. 
00:34:04.420 [2024-07-13 08:20:55.868775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.420 [2024-07-13 08:20:55.868802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:04.420 qpair failed and we were unable to recover it. 00:34:04.420 [2024-07-13 08:20:55.868985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.420 [2024-07-13 08:20:55.869018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:04.420 qpair failed and we were unable to recover it. 00:34:04.420 [2024-07-13 08:20:55.869188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.420 [2024-07-13 08:20:55.869218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:04.420 qpair failed and we were unable to recover it. 00:34:04.420 [2024-07-13 08:20:55.869395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.421 [2024-07-13 08:20:55.869421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:04.421 qpair failed and we were unable to recover it. 00:34:04.421 [2024-07-13 08:20:55.869547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.421 [2024-07-13 08:20:55.869576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:04.421 qpair failed and we were unable to recover it. 00:34:04.421 [2024-07-13 08:20:55.869753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.421 [2024-07-13 08:20:55.869780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:04.421 qpair failed and we were unable to recover it. 00:34:04.421 [2024-07-13 08:20:55.869909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.421 [2024-07-13 08:20:55.869937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:04.421 qpair failed and we were unable to recover it. 00:34:04.421 [2024-07-13 08:20:55.870089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.421 [2024-07-13 08:20:55.870116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:04.421 qpair failed and we were unable to recover it. 00:34:04.421 [2024-07-13 08:20:55.870260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.421 [2024-07-13 08:20:55.870287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:04.421 qpair failed and we were unable to recover it. 00:34:04.421 [2024-07-13 08:20:55.870436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.421 [2024-07-13 08:20:55.870463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:04.421 qpair failed and we were unable to recover it. 
00:34:04.421 [2024-07-13 08:20:55.870583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.421 [2024-07-13 08:20:55.870611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:04.421 qpair failed and we were unable to recover it. 00:34:04.421 [2024-07-13 08:20:55.870778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.421 [2024-07-13 08:20:55.870806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:04.421 qpair failed and we were unable to recover it. 00:34:04.421 [2024-07-13 08:20:55.871023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.421 [2024-07-13 08:20:55.871066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:04.421 qpair failed and we were unable to recover it. 00:34:04.421 [2024-07-13 08:20:55.871202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.421 [2024-07-13 08:20:55.871232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:04.421 qpair failed and we were unable to recover it. 00:34:04.421 [2024-07-13 08:20:55.871421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.421 [2024-07-13 08:20:55.871450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:04.421 qpair failed and we were unable to recover it. 00:34:04.421 [2024-07-13 08:20:55.871665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.421 [2024-07-13 08:20:55.871711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:04.421 qpair failed and we were unable to recover it. 00:34:04.421 [2024-07-13 08:20:55.871935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.421 [2024-07-13 08:20:55.871964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:04.421 qpair failed and we were unable to recover it. 00:34:04.421 [2024-07-13 08:20:55.872173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.421 [2024-07-13 08:20:55.872203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:04.421 qpair failed and we were unable to recover it. 00:34:04.421 [2024-07-13 08:20:55.872508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.421 [2024-07-13 08:20:55.872567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:04.421 qpair failed and we were unable to recover it. 00:34:04.421 [2024-07-13 08:20:55.872733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.421 [2024-07-13 08:20:55.872761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:04.421 qpair failed and we were unable to recover it. 
00:34:04.421 [2024-07-13 08:20:55.872943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.421 [2024-07-13 08:20:55.872990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:04.421 qpair failed and we were unable to recover it. 00:34:04.421 [2024-07-13 08:20:55.873135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.421 [2024-07-13 08:20:55.873162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:04.421 qpair failed and we were unable to recover it. 00:34:04.421 [2024-07-13 08:20:55.873338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.421 [2024-07-13 08:20:55.873365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:04.421 qpair failed and we were unable to recover it. 00:34:04.421 [2024-07-13 08:20:55.873570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.421 [2024-07-13 08:20:55.873616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:04.421 qpair failed and we were unable to recover it. 00:34:04.421 [2024-07-13 08:20:55.873824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.421 [2024-07-13 08:20:55.873872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:04.421 qpair failed and we were unable to recover it. 00:34:04.421 [2024-07-13 08:20:55.874035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.421 [2024-07-13 08:20:55.874074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:04.421 qpair failed and we were unable to recover it. 00:34:04.421 [2024-07-13 08:20:55.874256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.421 [2024-07-13 08:20:55.874300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:04.421 qpair failed and we were unable to recover it. 00:34:04.421 [2024-07-13 08:20:55.874535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.421 [2024-07-13 08:20:55.874579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:04.421 qpair failed and we were unable to recover it. 00:34:04.421 [2024-07-13 08:20:55.874747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.421 [2024-07-13 08:20:55.874778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:04.421 qpair failed and we were unable to recover it. 00:34:04.421 [2024-07-13 08:20:55.874954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.421 [2024-07-13 08:20:55.874987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:04.421 qpair failed and we were unable to recover it. 
00:34:04.421 [2024-07-13 08:20:55.875178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.421 [2024-07-13 08:20:55.875208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:04.421 qpair failed and we were unable to recover it. 00:34:04.421 [2024-07-13 08:20:55.875377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.421 [2024-07-13 08:20:55.875413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:04.421 qpair failed and we were unable to recover it. 00:34:04.421 [2024-07-13 08:20:55.875586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.421 [2024-07-13 08:20:55.875617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:04.421 qpair failed and we were unable to recover it. 00:34:04.421 [2024-07-13 08:20:55.875812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.421 [2024-07-13 08:20:55.875839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:04.421 qpair failed and we were unable to recover it. 00:34:04.421 [2024-07-13 08:20:55.875972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.421 [2024-07-13 08:20:55.876000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:04.421 qpair failed and we were unable to recover it. 00:34:04.421 [2024-07-13 08:20:55.876126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.421 [2024-07-13 08:20:55.876154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:04.421 qpair failed and we were unable to recover it. 00:34:04.421 [2024-07-13 08:20:55.876319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.421 [2024-07-13 08:20:55.876350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:04.421 qpair failed and we were unable to recover it. 00:34:04.421 [2024-07-13 08:20:55.876506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.421 [2024-07-13 08:20:55.876536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:04.421 qpair failed and we were unable to recover it. 00:34:04.421 [2024-07-13 08:20:55.876699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.422 [2024-07-13 08:20:55.876731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:04.422 qpair failed and we were unable to recover it. 00:34:04.422 [2024-07-13 08:20:55.876903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.422 [2024-07-13 08:20:55.876948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:04.422 qpair failed and we were unable to recover it. 
00:34:04.422 [2024-07-13 08:20:55.877072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.422 [2024-07-13 08:20:55.877099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:04.422 qpair failed and we were unable to recover it. 00:34:04.422 [2024-07-13 08:20:55.877283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.422 [2024-07-13 08:20:55.877311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:04.422 qpair failed and we were unable to recover it. 00:34:04.422 [2024-07-13 08:20:55.877489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.422 [2024-07-13 08:20:55.877519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:04.422 qpair failed and we were unable to recover it. 00:34:04.422 [2024-07-13 08:20:55.877691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.422 [2024-07-13 08:20:55.877721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:04.422 qpair failed and we were unable to recover it. 00:34:04.422 [2024-07-13 08:20:55.877924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.422 [2024-07-13 08:20:55.877965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.422 qpair failed and we were unable to recover it. 00:34:04.422 [2024-07-13 08:20:55.878133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.422 [2024-07-13 08:20:55.878179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.422 qpair failed and we were unable to recover it. 00:34:04.422 [2024-07-13 08:20:55.878320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.422 [2024-07-13 08:20:55.878351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.422 qpair failed and we were unable to recover it. 00:34:04.422 [2024-07-13 08:20:55.878517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.422 [2024-07-13 08:20:55.878548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.422 qpair failed and we were unable to recover it. 00:34:04.422 [2024-07-13 08:20:55.878831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.422 [2024-07-13 08:20:55.878899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.422 qpair failed and we were unable to recover it. 00:34:04.422 [2024-07-13 08:20:55.879066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.422 [2024-07-13 08:20:55.879094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.422 qpair failed and we were unable to recover it. 
00:34:04.422 [2024-07-13 08:20:55.879300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.422 [2024-07-13 08:20:55.879358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.422 qpair failed and we were unable to recover it. 00:34:04.422 [2024-07-13 08:20:55.879682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.422 [2024-07-13 08:20:55.879735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.422 qpair failed and we were unable to recover it. 00:34:04.422 [2024-07-13 08:20:55.879921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.422 [2024-07-13 08:20:55.879949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.422 qpair failed and we were unable to recover it. 00:34:04.422 [2024-07-13 08:20:55.880103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.422 [2024-07-13 08:20:55.880130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.422 qpair failed and we were unable to recover it. 00:34:04.422 [2024-07-13 08:20:55.880283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.422 [2024-07-13 08:20:55.880311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.422 qpair failed and we were unable to recover it. 00:34:04.422 [2024-07-13 08:20:55.880536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.422 [2024-07-13 08:20:55.880591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.422 qpair failed and we were unable to recover it. 00:34:04.422 [2024-07-13 08:20:55.880779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.422 [2024-07-13 08:20:55.880808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.422 qpair failed and we were unable to recover it. 00:34:04.422 [2024-07-13 08:20:55.880988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.422 [2024-07-13 08:20:55.881016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.422 qpair failed and we were unable to recover it. 00:34:04.422 [2024-07-13 08:20:55.881170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.422 [2024-07-13 08:20:55.881214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.422 qpair failed and we were unable to recover it. 00:34:04.422 [2024-07-13 08:20:55.881472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.422 [2024-07-13 08:20:55.881524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.422 qpair failed and we were unable to recover it. 
00:34:04.422 [2024-07-13 08:20:55.881752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.422 [2024-07-13 08:20:55.881782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.422 qpair failed and we were unable to recover it. 00:34:04.422 [2024-07-13 08:20:55.881961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.422 [2024-07-13 08:20:55.881989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.422 qpair failed and we were unable to recover it. 00:34:04.422 [2024-07-13 08:20:55.882123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.422 [2024-07-13 08:20:55.882149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.422 qpair failed and we were unable to recover it. 00:34:04.422 [2024-07-13 08:20:55.882302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.422 [2024-07-13 08:20:55.882331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.422 qpair failed and we were unable to recover it. 00:34:04.422 [2024-07-13 08:20:55.882550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.422 [2024-07-13 08:20:55.882602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.422 qpair failed and we were unable to recover it. 00:34:04.422 [2024-07-13 08:20:55.882795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.422 [2024-07-13 08:20:55.882830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.422 qpair failed and we were unable to recover it. 00:34:04.422 [2024-07-13 08:20:55.883013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.422 [2024-07-13 08:20:55.883041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.422 qpair failed and we were unable to recover it. 00:34:04.422 [2024-07-13 08:20:55.883218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.422 [2024-07-13 08:20:55.883245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.422 qpair failed and we were unable to recover it. 00:34:04.422 [2024-07-13 08:20:55.883472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.422 [2024-07-13 08:20:55.883524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.422 qpair failed and we were unable to recover it. 00:34:04.422 [2024-07-13 08:20:55.883689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.422 [2024-07-13 08:20:55.883719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.422 qpair failed and we were unable to recover it. 
00:34:04.422 [2024-07-13 08:20:55.883926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.422 [2024-07-13 08:20:55.883954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.422 qpair failed and we were unable to recover it. 00:34:04.422 [2024-07-13 08:20:55.884108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.422 [2024-07-13 08:20:55.884136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.422 qpair failed and we were unable to recover it. 00:34:04.422 [2024-07-13 08:20:55.884349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.422 [2024-07-13 08:20:55.884412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.422 qpair failed and we were unable to recover it. 00:34:04.422 [2024-07-13 08:20:55.884580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.422 [2024-07-13 08:20:55.884610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.422 qpair failed and we were unable to recover it. 00:34:04.422 [2024-07-13 08:20:55.884778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.422 [2024-07-13 08:20:55.884808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.422 qpair failed and we were unable to recover it. 00:34:04.422 [2024-07-13 08:20:55.884958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.422 [2024-07-13 08:20:55.884987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.422 qpair failed and we were unable to recover it. 00:34:04.422 [2024-07-13 08:20:55.885120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.422 [2024-07-13 08:20:55.885147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.422 qpair failed and we were unable to recover it. 00:34:04.422 [2024-07-13 08:20:55.885302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.422 [2024-07-13 08:20:55.885330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.422 qpair failed and we were unable to recover it. 00:34:04.422 [2024-07-13 08:20:55.885573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.422 [2024-07-13 08:20:55.885625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.422 qpair failed and we were unable to recover it. 00:34:04.423 [2024-07-13 08:20:55.885810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.423 [2024-07-13 08:20:55.885837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.423 qpair failed and we were unable to recover it. 
00:34:04.423 [2024-07-13 08:20:55.885967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.423 [2024-07-13 08:20:55.885994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.423 qpair failed and we were unable to recover it. 00:34:04.423 [2024-07-13 08:20:55.886150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.423 [2024-07-13 08:20:55.886178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.423 qpair failed and we were unable to recover it. 00:34:04.423 [2024-07-13 08:20:55.886355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.423 [2024-07-13 08:20:55.886385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.423 qpair failed and we were unable to recover it. 00:34:04.423 [2024-07-13 08:20:55.886632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.423 [2024-07-13 08:20:55.886682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.423 qpair failed and we were unable to recover it. 00:34:04.423 [2024-07-13 08:20:55.886881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.423 [2024-07-13 08:20:55.886909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.423 qpair failed and we were unable to recover it. 00:34:04.423 [2024-07-13 08:20:55.887062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.423 [2024-07-13 08:20:55.887090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.423 qpair failed and we were unable to recover it. 00:34:04.423 [2024-07-13 08:20:55.887285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.423 [2024-07-13 08:20:55.887315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.423 qpair failed and we were unable to recover it. 00:34:04.423 [2024-07-13 08:20:55.887463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.423 [2024-07-13 08:20:55.887490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.423 qpair failed and we were unable to recover it. 00:34:04.423 [2024-07-13 08:20:55.887641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.423 [2024-07-13 08:20:55.887669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.423 qpair failed and we were unable to recover it. 00:34:04.423 [2024-07-13 08:20:55.887841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.423 [2024-07-13 08:20:55.887874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.423 qpair failed and we were unable to recover it. 
00:34:04.423 [2024-07-13 08:20:55.888030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.423 [2024-07-13 08:20:55.888057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.423 qpair failed and we were unable to recover it. 00:34:04.423 [2024-07-13 08:20:55.888250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.423 [2024-07-13 08:20:55.888280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.423 qpair failed and we were unable to recover it. 00:34:04.423 [2024-07-13 08:20:55.888476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.423 [2024-07-13 08:20:55.888549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.423 qpair failed and we were unable to recover it. 00:34:04.423 [2024-07-13 08:20:55.888716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.423 [2024-07-13 08:20:55.888744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.423 qpair failed and we were unable to recover it. 00:34:04.423 [2024-07-13 08:20:55.888923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.423 [2024-07-13 08:20:55.888951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.423 qpair failed and we were unable to recover it. 00:34:04.423 [2024-07-13 08:20:55.889077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.423 [2024-07-13 08:20:55.889105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.423 qpair failed and we were unable to recover it. 00:34:04.423 [2024-07-13 08:20:55.889227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.423 [2024-07-13 08:20:55.889254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.423 qpair failed and we were unable to recover it. 00:34:04.423 [2024-07-13 08:20:55.889433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.423 [2024-07-13 08:20:55.889460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.423 qpair failed and we were unable to recover it. 00:34:04.423 [2024-07-13 08:20:55.889638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.423 [2024-07-13 08:20:55.889667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.423 qpair failed and we were unable to recover it. 00:34:04.423 [2024-07-13 08:20:55.889863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.423 [2024-07-13 08:20:55.889895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.423 qpair failed and we were unable to recover it. 
00:34:04.423 [2024-07-13 08:20:55.890020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.423 [2024-07-13 08:20:55.890052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.423 qpair failed and we were unable to recover it. 00:34:04.423 [2024-07-13 08:20:55.890233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.423 [2024-07-13 08:20:55.890260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.423 qpair failed and we were unable to recover it. 00:34:04.423 [2024-07-13 08:20:55.890440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.423 [2024-07-13 08:20:55.890468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.423 qpair failed and we were unable to recover it. 00:34:04.423 [2024-07-13 08:20:55.890645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.423 [2024-07-13 08:20:55.890687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.423 qpair failed and we were unable to recover it. 00:34:04.423 [2024-07-13 08:20:55.890830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.423 [2024-07-13 08:20:55.890857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.423 qpair failed and we were unable to recover it. 00:34:04.423 [2024-07-13 08:20:55.891053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.423 [2024-07-13 08:20:55.891085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.423 qpair failed and we were unable to recover it. 00:34:04.423 [2024-07-13 08:20:55.891261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.423 [2024-07-13 08:20:55.891289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.423 qpair failed and we were unable to recover it. 00:34:04.423 [2024-07-13 08:20:55.891419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.423 [2024-07-13 08:20:55.891446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.423 qpair failed and we were unable to recover it. 00:34:04.423 [2024-07-13 08:20:55.891599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.423 [2024-07-13 08:20:55.891626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.423 qpair failed and we were unable to recover it. 00:34:04.423 [2024-07-13 08:20:55.891802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.423 [2024-07-13 08:20:55.891830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.423 qpair failed and we were unable to recover it. 
00:34:04.423 [2024-07-13 08:20:55.892023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.423 [2024-07-13 08:20:55.892054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.423 qpair failed and we were unable to recover it. 00:34:04.423 [2024-07-13 08:20:55.892203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.423 [2024-07-13 08:20:55.892229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.423 qpair failed and we were unable to recover it. 00:34:04.423 [2024-07-13 08:20:55.892408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.423 [2024-07-13 08:20:55.892436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.423 qpair failed and we were unable to recover it. 00:34:04.423 [2024-07-13 08:20:55.892590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.423 [2024-07-13 08:20:55.892617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.423 qpair failed and we were unable to recover it. 00:34:04.423 [2024-07-13 08:20:55.892766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.423 [2024-07-13 08:20:55.892793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.423 qpair failed and we were unable to recover it. 00:34:04.423 [2024-07-13 08:20:55.892965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.423 [2024-07-13 08:20:55.892995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.423 qpair failed and we were unable to recover it. 00:34:04.423 [2024-07-13 08:20:55.893162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.423 [2024-07-13 08:20:55.893192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.423 qpair failed and we were unable to recover it. 00:34:04.423 [2024-07-13 08:20:55.893336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.423 [2024-07-13 08:20:55.893363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.423 qpair failed and we were unable to recover it. 00:34:04.423 [2024-07-13 08:20:55.893539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.423 [2024-07-13 08:20:55.893566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.423 qpair failed and we were unable to recover it. 00:34:04.423 [2024-07-13 08:20:55.893729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.424 [2024-07-13 08:20:55.893757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.424 qpair failed and we were unable to recover it. 
00:34:04.424 [2024-07-13 08:20:55.893911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.424 [2024-07-13 08:20:55.893939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.424 qpair failed and we were unable to recover it. 00:34:04.424 [2024-07-13 08:20:55.894074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.424 [2024-07-13 08:20:55.894104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.424 qpair failed and we were unable to recover it. 00:34:04.424 [2024-07-13 08:20:55.894302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.424 [2024-07-13 08:20:55.894332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.424 qpair failed and we were unable to recover it. 00:34:04.424 [2024-07-13 08:20:55.894492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.424 [2024-07-13 08:20:55.894520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.424 qpair failed and we were unable to recover it. 00:34:04.424 [2024-07-13 08:20:55.894707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.424 [2024-07-13 08:20:55.894734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.424 qpair failed and we were unable to recover it. 00:34:04.424 [2024-07-13 08:20:55.894893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.424 [2024-07-13 08:20:55.894920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.424 qpair failed and we were unable to recover it. 00:34:04.424 [2024-07-13 08:20:55.895065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.424 [2024-07-13 08:20:55.895092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.424 qpair failed and we were unable to recover it. 00:34:04.424 [2024-07-13 08:20:55.895282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.424 [2024-07-13 08:20:55.895312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.424 qpair failed and we were unable to recover it. 00:34:04.424 [2024-07-13 08:20:55.895466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.424 [2024-07-13 08:20:55.895493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.424 qpair failed and we were unable to recover it. 00:34:04.424 [2024-07-13 08:20:55.895671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.424 [2024-07-13 08:20:55.895698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.424 qpair failed and we were unable to recover it. 
00:34:04.424 [2024-07-13 08:20:55.895820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.424 [2024-07-13 08:20:55.895847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.424 qpair failed and we were unable to recover it. 00:34:04.424 [2024-07-13 08:20:55.896031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.424 [2024-07-13 08:20:55.896058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.424 qpair failed and we were unable to recover it. 00:34:04.424 [2024-07-13 08:20:55.896188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.424 [2024-07-13 08:20:55.896216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.424 qpair failed and we were unable to recover it. 00:34:04.424 [2024-07-13 08:20:55.896335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.424 [2024-07-13 08:20:55.896362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.424 qpair failed and we were unable to recover it. 00:34:04.424 [2024-07-13 08:20:55.896515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.424 [2024-07-13 08:20:55.896542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.424 qpair failed and we were unable to recover it. 00:34:04.424 [2024-07-13 08:20:55.896693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.424 [2024-07-13 08:20:55.896720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.424 qpair failed and we were unable to recover it. 00:34:04.424 [2024-07-13 08:20:55.896837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.424 [2024-07-13 08:20:55.896873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.424 qpair failed and we were unable to recover it. 00:34:04.424 [2024-07-13 08:20:55.897055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.424 [2024-07-13 08:20:55.897083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.424 qpair failed and we were unable to recover it. 00:34:04.424 [2024-07-13 08:20:55.897234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.424 [2024-07-13 08:20:55.897261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.424 qpair failed and we were unable to recover it. 00:34:04.424 [2024-07-13 08:20:55.897410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.424 [2024-07-13 08:20:55.897437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.424 qpair failed and we were unable to recover it. 
00:34:04.424 [2024-07-13 08:20:55.897600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.424 [2024-07-13 08:20:55.897630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.424 qpair failed and we were unable to recover it. 00:34:04.424 [2024-07-13 08:20:55.897805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.424 [2024-07-13 08:20:55.897832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.424 qpair failed and we were unable to recover it. 00:34:04.424 [2024-07-13 08:20:55.898002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.424 [2024-07-13 08:20:55.898030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.424 qpair failed and we were unable to recover it. 00:34:04.424 [2024-07-13 08:20:55.898179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.424 [2024-07-13 08:20:55.898206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.424 qpair failed and we were unable to recover it. 00:34:04.424 [2024-07-13 08:20:55.898359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.424 [2024-07-13 08:20:55.898386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.424 qpair failed and we were unable to recover it. 00:34:04.424 [2024-07-13 08:20:55.898552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.424 [2024-07-13 08:20:55.898582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.424 qpair failed and we were unable to recover it. 00:34:04.424 [2024-07-13 08:20:55.898747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.424 [2024-07-13 08:20:55.898778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.424 qpair failed and we were unable to recover it. 00:34:04.424 [2024-07-13 08:20:55.898942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.424 [2024-07-13 08:20:55.898969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.424 qpair failed and we were unable to recover it. 00:34:04.424 [2024-07-13 08:20:55.899147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.424 [2024-07-13 08:20:55.899175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.424 qpair failed and we were unable to recover it. 00:34:04.424 [2024-07-13 08:20:55.899298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.424 [2024-07-13 08:20:55.899326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.424 qpair failed and we were unable to recover it. 
00:34:04.424 [2024-07-13 08:20:55.899480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.424 [2024-07-13 08:20:55.899508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.424 qpair failed and we were unable to recover it. 00:34:04.424 [2024-07-13 08:20:55.899706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.424 [2024-07-13 08:20:55.899736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.424 qpair failed and we were unable to recover it. 00:34:04.424 [2024-07-13 08:20:55.899926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.424 [2024-07-13 08:20:55.899957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.424 qpair failed and we were unable to recover it. 00:34:04.424 [2024-07-13 08:20:55.900134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.424 [2024-07-13 08:20:55.900161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.424 qpair failed and we were unable to recover it. 00:34:04.424 [2024-07-13 08:20:55.900305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.424 [2024-07-13 08:20:55.900333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.424 qpair failed and we were unable to recover it. 00:34:04.424 [2024-07-13 08:20:55.900484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.424 [2024-07-13 08:20:55.900511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.424 qpair failed and we were unable to recover it. 00:34:04.424 [2024-07-13 08:20:55.900656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.424 [2024-07-13 08:20:55.900683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.424 qpair failed and we were unable to recover it. 00:34:04.424 [2024-07-13 08:20:55.900837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.424 [2024-07-13 08:20:55.900890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.424 qpair failed and we were unable to recover it. 00:34:04.424 [2024-07-13 08:20:55.901014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.424 [2024-07-13 08:20:55.901044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.424 qpair failed and we were unable to recover it. 00:34:04.424 [2024-07-13 08:20:55.901242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.424 [2024-07-13 08:20:55.901269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.424 qpair failed and we were unable to recover it. 
00:34:04.424 [2024-07-13 08:20:55.901394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.425 [2024-07-13 08:20:55.901421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.425 qpair failed and we were unable to recover it. 00:34:04.425 [2024-07-13 08:20:55.901571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.425 [2024-07-13 08:20:55.901597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.425 qpair failed and we were unable to recover it. 00:34:04.425 [2024-07-13 08:20:55.901776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.425 [2024-07-13 08:20:55.901803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.425 qpair failed and we were unable to recover it. 00:34:04.425 [2024-07-13 08:20:55.901956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.425 [2024-07-13 08:20:55.901984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.425 qpair failed and we were unable to recover it. 00:34:04.425 [2024-07-13 08:20:55.902169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.425 [2024-07-13 08:20:55.902197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.425 qpair failed and we were unable to recover it. 00:34:04.425 [2024-07-13 08:20:55.902316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.425 [2024-07-13 08:20:55.902344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.425 qpair failed and we were unable to recover it. 00:34:04.425 [2024-07-13 08:20:55.902521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.425 [2024-07-13 08:20:55.902548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.425 qpair failed and we were unable to recover it. 00:34:04.425 [2024-07-13 08:20:55.902701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.425 [2024-07-13 08:20:55.902739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.425 qpair failed and we were unable to recover it. 00:34:04.425 [2024-07-13 08:20:55.902879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.425 [2024-07-13 08:20:55.902906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.425 qpair failed and we were unable to recover it. 00:34:04.425 [2024-07-13 08:20:55.903055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.425 [2024-07-13 08:20:55.903082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.425 qpair failed and we were unable to recover it. 
00:34:04.425 [2024-07-13 08:20:55.903257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.425 [2024-07-13 08:20:55.903284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.425 qpair failed and we were unable to recover it. 00:34:04.425 [2024-07-13 08:20:55.903465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.425 [2024-07-13 08:20:55.903492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.425 qpair failed and we were unable to recover it. 00:34:04.425 [2024-07-13 08:20:55.903655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.425 [2024-07-13 08:20:55.903686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.425 qpair failed and we were unable to recover it. 00:34:04.425 [2024-07-13 08:20:55.903844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.425 [2024-07-13 08:20:55.903879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.425 qpair failed and we were unable to recover it. 00:34:04.425 [2024-07-13 08:20:55.904029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.425 [2024-07-13 08:20:55.904056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.425 qpair failed and we were unable to recover it. 00:34:04.425 [2024-07-13 08:20:55.904178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.425 [2024-07-13 08:20:55.904205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.425 qpair failed and we were unable to recover it. 00:34:04.425 [2024-07-13 08:20:55.904386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.425 [2024-07-13 08:20:55.904416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.425 qpair failed and we were unable to recover it. 00:34:04.425 [2024-07-13 08:20:55.904557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.425 [2024-07-13 08:20:55.904585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.425 qpair failed and we were unable to recover it. 00:34:04.425 [2024-07-13 08:20:55.904739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.425 [2024-07-13 08:20:55.904766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.425 qpair failed and we were unable to recover it. 00:34:04.425 [2024-07-13 08:20:55.904906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.425 [2024-07-13 08:20:55.904935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.425 qpair failed and we were unable to recover it. 
00:34:04.425 [2024-07-13 08:20:55.905084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.425 [2024-07-13 08:20:55.905111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.425 qpair failed and we were unable to recover it. 00:34:04.425 [2024-07-13 08:20:55.905231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.425 [2024-07-13 08:20:55.905259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.425 qpair failed and we were unable to recover it. 00:34:04.425 [2024-07-13 08:20:55.905380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.425 [2024-07-13 08:20:55.905408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.425 qpair failed and we were unable to recover it. 00:34:04.425 [2024-07-13 08:20:55.905532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.425 [2024-07-13 08:20:55.905559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.425 qpair failed and we were unable to recover it. 00:34:04.425 [2024-07-13 08:20:55.905716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.425 [2024-07-13 08:20:55.905743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.425 qpair failed and we were unable to recover it. 00:34:04.425 [2024-07-13 08:20:55.905895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.425 [2024-07-13 08:20:55.905922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.425 qpair failed and we were unable to recover it. 00:34:04.425 [2024-07-13 08:20:55.906077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.425 [2024-07-13 08:20:55.906104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.425 qpair failed and we were unable to recover it. 00:34:04.425 [2024-07-13 08:20:55.906252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.425 [2024-07-13 08:20:55.906279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.425 qpair failed and we were unable to recover it. 00:34:04.425 [2024-07-13 08:20:55.906432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.425 [2024-07-13 08:20:55.906461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.425 qpair failed and we were unable to recover it. 00:34:04.425 [2024-07-13 08:20:55.906631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.425 [2024-07-13 08:20:55.906658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.425 qpair failed and we were unable to recover it. 
00:34:04.425 [2024-07-13 08:20:55.906841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.425 [2024-07-13 08:20:55.906874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.425 qpair failed and we were unable to recover it. 00:34:04.425 [2024-07-13 08:20:55.907030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.425 [2024-07-13 08:20:55.907057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.425 qpair failed and we were unable to recover it. 00:34:04.425 [2024-07-13 08:20:55.907199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.425 [2024-07-13 08:20:55.907226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.425 qpair failed and we were unable to recover it. 00:34:04.425 [2024-07-13 08:20:55.907401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.425 [2024-07-13 08:20:55.907428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.425 qpair failed and we were unable to recover it. 00:34:04.425 [2024-07-13 08:20:55.907582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.425 [2024-07-13 08:20:55.907609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.425 qpair failed and we were unable to recover it. 00:34:04.425 [2024-07-13 08:20:55.907787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.425 [2024-07-13 08:20:55.907815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.425 qpair failed and we were unable to recover it. 00:34:04.425 [2024-07-13 08:20:55.907958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.425 [2024-07-13 08:20:55.907986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.425 qpair failed and we were unable to recover it. 00:34:04.426 [2024-07-13 08:20:55.908164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.426 [2024-07-13 08:20:55.908194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.426 qpair failed and we were unable to recover it. 00:34:04.426 [2024-07-13 08:20:55.908394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.426 [2024-07-13 08:20:55.908421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.426 qpair failed and we were unable to recover it. 00:34:04.426 [2024-07-13 08:20:55.908581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.426 [2024-07-13 08:20:55.908608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.426 qpair failed and we were unable to recover it. 
00:34:04.426 [2024-07-13 08:20:55.908758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.426 [2024-07-13 08:20:55.908785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.426 qpair failed and we were unable to recover it. 00:34:04.426 [2024-07-13 08:20:55.908933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.426 [2024-07-13 08:20:55.908960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.426 qpair failed and we were unable to recover it. 00:34:04.426 [2024-07-13 08:20:55.909112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.426 [2024-07-13 08:20:55.909139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.426 qpair failed and we were unable to recover it. 00:34:04.426 [2024-07-13 08:20:55.909261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.426 [2024-07-13 08:20:55.909289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.426 qpair failed and we were unable to recover it. 00:34:04.426 [2024-07-13 08:20:55.909469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.426 [2024-07-13 08:20:55.909496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.426 qpair failed and we were unable to recover it. 00:34:04.426 [2024-07-13 08:20:55.909608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.426 [2024-07-13 08:20:55.909635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.426 qpair failed and we were unable to recover it. 00:34:04.426 [2024-07-13 08:20:55.909755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.426 [2024-07-13 08:20:55.909782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.426 qpair failed and we were unable to recover it. 00:34:04.426 [2024-07-13 08:20:55.909918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.426 [2024-07-13 08:20:55.909945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.426 qpair failed and we were unable to recover it. 00:34:04.426 [2024-07-13 08:20:55.910064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.426 [2024-07-13 08:20:55.910090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.426 qpair failed and we were unable to recover it. 00:34:04.426 [2024-07-13 08:20:55.910236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.426 [2024-07-13 08:20:55.910263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.426 qpair failed and we were unable to recover it. 
00:34:04.426 [2024-07-13 08:20:55.910405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.426 [2024-07-13 08:20:55.910432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.426 qpair failed and we were unable to recover it. 00:34:04.426 [2024-07-13 08:20:55.910604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.426 [2024-07-13 08:20:55.910634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.426 qpair failed and we were unable to recover it. 00:34:04.426 [2024-07-13 08:20:55.910770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.426 [2024-07-13 08:20:55.910805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.426 qpair failed and we were unable to recover it. 00:34:04.426 [2024-07-13 08:20:55.910954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.426 [2024-07-13 08:20:55.910982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.426 qpair failed and we were unable to recover it. 00:34:04.426 [2024-07-13 08:20:55.911099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.426 [2024-07-13 08:20:55.911126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.426 qpair failed and we were unable to recover it. 00:34:04.426 [2024-07-13 08:20:55.911301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.426 [2024-07-13 08:20:55.911328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.426 qpair failed and we were unable to recover it. 00:34:04.426 [2024-07-13 08:20:55.911479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.426 [2024-07-13 08:20:55.911506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.426 qpair failed and we were unable to recover it. 00:34:04.426 [2024-07-13 08:20:55.911657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.426 [2024-07-13 08:20:55.911685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.426 qpair failed and we were unable to recover it. 00:34:04.426 [2024-07-13 08:20:55.911873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.426 [2024-07-13 08:20:55.911901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.426 qpair failed and we were unable to recover it. 00:34:04.426 [2024-07-13 08:20:55.912048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.426 [2024-07-13 08:20:55.912075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.426 qpair failed and we were unable to recover it. 
00:34:04.426 [2024-07-13 08:20:55.912229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.426 [2024-07-13 08:20:55.912257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.426 qpair failed and we were unable to recover it. 00:34:04.426 [2024-07-13 08:20:55.912403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.426 [2024-07-13 08:20:55.912430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.426 qpair failed and we were unable to recover it. 00:34:04.426 [2024-07-13 08:20:55.912601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.426 [2024-07-13 08:20:55.912628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.426 qpair failed and we were unable to recover it. 00:34:04.426 [2024-07-13 08:20:55.912743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.426 [2024-07-13 08:20:55.912770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.426 qpair failed and we were unable to recover it. 00:34:04.426 [2024-07-13 08:20:55.912925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.426 [2024-07-13 08:20:55.912953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.426 qpair failed and we were unable to recover it. 00:34:04.426 [2024-07-13 08:20:55.913131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.426 [2024-07-13 08:20:55.913158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.426 qpair failed and we were unable to recover it. 00:34:04.426 [2024-07-13 08:20:55.913290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.426 [2024-07-13 08:20:55.913317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.426 qpair failed and we were unable to recover it. 00:34:04.426 [2024-07-13 08:20:55.913471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.426 [2024-07-13 08:20:55.913498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.426 qpair failed and we were unable to recover it. 00:34:04.426 [2024-07-13 08:20:55.913647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.426 [2024-07-13 08:20:55.913674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.426 qpair failed and we were unable to recover it. 00:34:04.426 [2024-07-13 08:20:55.913791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.426 [2024-07-13 08:20:55.913818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.426 qpair failed and we were unable to recover it. 
00:34:04.426 [2024-07-13 08:20:55.913963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.426 [2024-07-13 08:20:55.913991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.426 qpair failed and we were unable to recover it. 00:34:04.426 [2024-07-13 08:20:55.914170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.426 [2024-07-13 08:20:55.914197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.426 qpair failed and we were unable to recover it. 00:34:04.426 [2024-07-13 08:20:55.914370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.426 [2024-07-13 08:20:55.914399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.426 qpair failed and we were unable to recover it. 00:34:04.426 [2024-07-13 08:20:55.914574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.426 [2024-07-13 08:20:55.914604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.426 qpair failed and we were unable to recover it. 00:34:04.426 [2024-07-13 08:20:55.914772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.426 [2024-07-13 08:20:55.914799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.426 qpair failed and we were unable to recover it. 00:34:04.426 [2024-07-13 08:20:55.914953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.426 [2024-07-13 08:20:55.914980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.427 qpair failed and we were unable to recover it. 00:34:04.427 [2024-07-13 08:20:55.915107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.427 [2024-07-13 08:20:55.915134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.427 qpair failed and we were unable to recover it. 00:34:04.427 [2024-07-13 08:20:55.915252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.427 [2024-07-13 08:20:55.915278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.427 qpair failed and we were unable to recover it. 00:34:04.427 [2024-07-13 08:20:55.915423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.427 [2024-07-13 08:20:55.915450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.427 qpair failed and we were unable to recover it. 00:34:04.427 [2024-07-13 08:20:55.915670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.427 [2024-07-13 08:20:55.915697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.427 qpair failed and we were unable to recover it. 
00:34:04.427 [2024-07-13 08:20:55.915847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.427 [2024-07-13 08:20:55.915880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.427 qpair failed and we were unable to recover it. 00:34:04.427 [2024-07-13 08:20:55.916029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.427 [2024-07-13 08:20:55.916056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.427 qpair failed and we were unable to recover it. 00:34:04.427 [2024-07-13 08:20:55.916207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.427 [2024-07-13 08:20:55.916234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.427 qpair failed and we were unable to recover it. 00:34:04.427 [2024-07-13 08:20:55.916409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.427 [2024-07-13 08:20:55.916436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.427 qpair failed and we were unable to recover it. 00:34:04.427 [2024-07-13 08:20:55.916557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.427 [2024-07-13 08:20:55.916584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.427 qpair failed and we were unable to recover it. 00:34:04.427 [2024-07-13 08:20:55.916735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.427 [2024-07-13 08:20:55.916762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.427 qpair failed and we were unable to recover it. 00:34:04.427 [2024-07-13 08:20:55.916914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.427 [2024-07-13 08:20:55.916942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.427 qpair failed and we were unable to recover it. 00:34:04.427 [2024-07-13 08:20:55.917070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.427 [2024-07-13 08:20:55.917098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.427 qpair failed and we were unable to recover it. 00:34:04.427 [2024-07-13 08:20:55.917249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.427 [2024-07-13 08:20:55.917276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.427 qpair failed and we were unable to recover it. 00:34:04.427 [2024-07-13 08:20:55.917422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.427 [2024-07-13 08:20:55.917449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.427 qpair failed and we were unable to recover it. 
00:34:04.427 [2024-07-13 08:20:55.917624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.427 [2024-07-13 08:20:55.917652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.427 qpair failed and we were unable to recover it. 00:34:04.427 [2024-07-13 08:20:55.917828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.427 [2024-07-13 08:20:55.917855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.427 qpair failed and we were unable to recover it. 00:34:04.427 [2024-07-13 08:20:55.917985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.427 [2024-07-13 08:20:55.918016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.427 qpair failed and we were unable to recover it. 00:34:04.427 [2024-07-13 08:20:55.918188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.427 [2024-07-13 08:20:55.918215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.427 qpair failed and we were unable to recover it. 00:34:04.427 [2024-07-13 08:20:55.918365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.427 [2024-07-13 08:20:55.918392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.427 qpair failed and we were unable to recover it. 00:34:04.427 [2024-07-13 08:20:55.918567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.427 [2024-07-13 08:20:55.918594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.427 qpair failed and we were unable to recover it. 00:34:04.427 [2024-07-13 08:20:55.918765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.427 [2024-07-13 08:20:55.918795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.427 qpair failed and we were unable to recover it. 00:34:04.427 [2024-07-13 08:20:55.918968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.427 [2024-07-13 08:20:55.918999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.427 qpair failed and we were unable to recover it. 00:34:04.427 [2024-07-13 08:20:55.919158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.427 [2024-07-13 08:20:55.919185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.427 qpair failed and we were unable to recover it. 00:34:04.427 [2024-07-13 08:20:55.919328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.427 [2024-07-13 08:20:55.919355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.427 qpair failed and we were unable to recover it. 
00:34:04.427 [2024-07-13 08:20:55.919506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.427 [2024-07-13 08:20:55.919533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.427 qpair failed and we were unable to recover it. 00:34:04.427 [2024-07-13 08:20:55.919679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.427 [2024-07-13 08:20:55.919707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.427 qpair failed and we were unable to recover it. 00:34:04.427 [2024-07-13 08:20:55.919853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.427 [2024-07-13 08:20:55.919888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.427 qpair failed and we were unable to recover it. 00:34:04.427 [2024-07-13 08:20:55.920045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.427 [2024-07-13 08:20:55.920071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.427 qpair failed and we were unable to recover it. 00:34:04.427 [2024-07-13 08:20:55.920247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.427 [2024-07-13 08:20:55.920274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.427 qpair failed and we were unable to recover it. 00:34:04.427 [2024-07-13 08:20:55.920447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.427 [2024-07-13 08:20:55.920473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.427 qpair failed and we were unable to recover it. 00:34:04.427 [2024-07-13 08:20:55.920630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.427 [2024-07-13 08:20:55.920657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.427 qpair failed and we were unable to recover it. 00:34:04.427 [2024-07-13 08:20:55.920786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.427 [2024-07-13 08:20:55.920813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.427 qpair failed and we were unable to recover it. 00:34:04.427 [2024-07-13 08:20:55.920926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.427 [2024-07-13 08:20:55.920952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.427 qpair failed and we were unable to recover it. 00:34:04.427 [2024-07-13 08:20:55.921074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.427 [2024-07-13 08:20:55.921101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.427 qpair failed and we were unable to recover it. 
00:34:04.427 [2024-07-13 08:20:55.921245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.427 [2024-07-13 08:20:55.921271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.427 qpair failed and we were unable to recover it. 00:34:04.427 [2024-07-13 08:20:55.921445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.427 [2024-07-13 08:20:55.921491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.427 qpair failed and we were unable to recover it. 00:34:04.427 [2024-07-13 08:20:55.921662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.427 [2024-07-13 08:20:55.921692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.427 qpair failed and we were unable to recover it. 00:34:04.427 [2024-07-13 08:20:55.921859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.427 [2024-07-13 08:20:55.921891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.427 qpair failed and we were unable to recover it. 00:34:04.427 [2024-07-13 08:20:55.922019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.427 [2024-07-13 08:20:55.922046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.427 qpair failed and we were unable to recover it. 00:34:04.427 [2024-07-13 08:20:55.922227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.427 [2024-07-13 08:20:55.922254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.427 qpair failed and we were unable to recover it. 00:34:04.427 [2024-07-13 08:20:55.922403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.428 [2024-07-13 08:20:55.922431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.428 qpair failed and we were unable to recover it. 00:34:04.428 [2024-07-13 08:20:55.922610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.428 [2024-07-13 08:20:55.922638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.428 qpair failed and we were unable to recover it. 00:34:04.428 [2024-07-13 08:20:55.922758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.428 [2024-07-13 08:20:55.922785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.428 qpair failed and we were unable to recover it. 00:34:04.428 [2024-07-13 08:20:55.922938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.428 [2024-07-13 08:20:55.922965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.428 qpair failed and we were unable to recover it. 
00:34:04.428 [2024-07-13 08:20:55.923115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.428 [2024-07-13 08:20:55.923158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.428 qpair failed and we were unable to recover it. 00:34:04.428 [2024-07-13 08:20:55.923348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.428 [2024-07-13 08:20:55.923377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.428 qpair failed and we were unable to recover it. 00:34:04.428 [2024-07-13 08:20:55.923548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.428 [2024-07-13 08:20:55.923574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.428 qpair failed and we were unable to recover it. 00:34:04.428 [2024-07-13 08:20:55.923738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.428 [2024-07-13 08:20:55.923767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.428 qpair failed and we were unable to recover it. 00:34:04.428 [2024-07-13 08:20:55.923950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.428 [2024-07-13 08:20:55.923979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.428 qpair failed and we were unable to recover it. 00:34:04.428 [2024-07-13 08:20:55.924156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.428 [2024-07-13 08:20:55.924184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.428 qpair failed and we were unable to recover it. 00:34:04.428 [2024-07-13 08:20:55.924338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.428 [2024-07-13 08:20:55.924366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.428 qpair failed and we were unable to recover it. 00:34:04.428 [2024-07-13 08:20:55.924539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.428 [2024-07-13 08:20:55.924566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.428 qpair failed and we were unable to recover it. 00:34:04.428 [2024-07-13 08:20:55.924742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.428 [2024-07-13 08:20:55.924768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.428 qpair failed and we were unable to recover it. 00:34:04.428 [2024-07-13 08:20:55.924914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.428 [2024-07-13 08:20:55.924943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.428 qpair failed and we were unable to recover it. 
00:34:04.428 [2024-07-13 08:20:55.925093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.428 [2024-07-13 08:20:55.925119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.428 qpair failed and we were unable to recover it. 00:34:04.428 [2024-07-13 08:20:55.925264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.428 [2024-07-13 08:20:55.925291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.428 qpair failed and we were unable to recover it. 00:34:04.428 [2024-07-13 08:20:55.925410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.428 [2024-07-13 08:20:55.925441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.428 qpair failed and we were unable to recover it. 00:34:04.428 [2024-07-13 08:20:55.925588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.428 [2024-07-13 08:20:55.925615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.428 qpair failed and we were unable to recover it. 00:34:04.428 [2024-07-13 08:20:55.925793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.428 [2024-07-13 08:20:55.925820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.428 qpair failed and we were unable to recover it. 00:34:04.428 [2024-07-13 08:20:55.925984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.428 [2024-07-13 08:20:55.926012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.428 qpair failed and we were unable to recover it. 00:34:04.428 [2024-07-13 08:20:55.926163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.428 [2024-07-13 08:20:55.926191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.428 qpair failed and we were unable to recover it. 00:34:04.428 [2024-07-13 08:20:55.926315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.428 [2024-07-13 08:20:55.926343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.428 qpair failed and we were unable to recover it. 00:34:04.428 [2024-07-13 08:20:55.926495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.428 [2024-07-13 08:20:55.926521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.428 qpair failed and we were unable to recover it. 00:34:04.428 [2024-07-13 08:20:55.926643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.428 [2024-07-13 08:20:55.926670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.428 qpair failed and we were unable to recover it. 
00:34:04.428 [2024-07-13 08:20:55.926845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.428 [2024-07-13 08:20:55.926879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.428 qpair failed and we were unable to recover it. 00:34:04.428 [2024-07-13 08:20:55.927055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.428 [2024-07-13 08:20:55.927082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.428 qpair failed and we were unable to recover it. 00:34:04.428 [2024-07-13 08:20:55.927261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.428 [2024-07-13 08:20:55.927292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.428 qpair failed and we were unable to recover it. 00:34:04.428 [2024-07-13 08:20:55.927465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.428 [2024-07-13 08:20:55.927493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.428 qpair failed and we were unable to recover it. 00:34:04.428 [2024-07-13 08:20:55.927644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.428 [2024-07-13 08:20:55.927670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.428 qpair failed and we were unable to recover it. 00:34:04.428 [2024-07-13 08:20:55.927822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.428 [2024-07-13 08:20:55.927849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.428 qpair failed and we were unable to recover it. 00:34:04.428 [2024-07-13 08:20:55.927986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.428 [2024-07-13 08:20:55.928015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.428 qpair failed and we were unable to recover it. 00:34:04.428 [2024-07-13 08:20:55.928195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.428 [2024-07-13 08:20:55.928222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.428 qpair failed and we were unable to recover it. 00:34:04.428 [2024-07-13 08:20:55.928413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.428 [2024-07-13 08:20:55.928444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.428 qpair failed and we were unable to recover it. 00:34:04.428 [2024-07-13 08:20:55.928611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.428 [2024-07-13 08:20:55.928638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.428 qpair failed and we were unable to recover it. 
00:34:04.428 [2024-07-13 08:20:55.928786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:04.428 [2024-07-13 08:20:55.928814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420
00:34:04.428 qpair failed and we were unable to recover it.
[the same three-line pattern repeats for dozens of consecutive attempts between 08:20:55.928786 and 08:20:55.953089, every one against tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 and errno = 111]
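For context on the errno in these entries: errno 111 on Linux is ECONNREFUSED, i.e. the peer at 10.0.0.2 actively refused the TCP connection on port 4420 (the conventional NVMe/TCP port), which typically means no listener was up on that port at the time. A minimal standalone C sketch, not SPDK code (the address and port below are simply taken from the log), shows the same errno that posix_sock_create reports:

/* Standalone illustration: connect() to a reachable host with no listener
 * on the target port fails with errno 111 (ECONNREFUSED). If the host is
 * unreachable instead, a different errno (e.g. ETIMEDOUT) would appear. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                  /* NVMe/TCP default port */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        /* With no listener on the port this prints:
         * connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}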
00:34:04.432 [2024-07-13 08:20:55.953279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:04.432 [2024-07-13 08:20:55.953309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420
00:34:04.432 qpair failed and we were unable to recover it.
00:34:04.432 [2024-07-13 08:20:55.954046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:04.432 [2024-07-13 08:20:55.954087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420
00:34:04.432 qpair failed and we were unable to recover it.
[further retries against tqpair=0x7f8fe4000b90 fail identically through 08:20:55.955857, after which the same pattern resumes against tqpair=0x7f8fec000b90 from 08:20:55.956052 onward]
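The tqpair=0x7f8fec000b90 and tqpair=0x7f8fe4000b90 values interleaved here appear to be the addresses of two distinct in-flight qpair objects; the switch in the log does not indicate a different target, since both fail against the same addr=10.0.0.2, port=4420. A tiny generic-C sketch of this style of pointer-tagged logging (not SPDK's logging macros, just the same idea):

/* Illustration: printing the object's address with %p is how two
 * concurrently retried qpairs can be told apart in an interleaved log. */
#include <stdio.h>

struct qpair { int id; };   /* stand-in for the real qpair structure */

static void log_connect_error(const struct qpair *q, const char *addr, int port)
{
    fprintf(stderr, "sock connection error of tqpair=%p with addr=%s, port=%d\n",
            (const void *)q, addr, port);
}

int main(void)
{
    struct qpair a = {1}, b = {2};
    /* Two objects, same target -> two distinct tqpair= tags in the log. */
    log_connect_error(&a, "10.0.0.2", 4420);
    log_connect_error(&b, "10.0.0.2", 4420);
    return 0;
}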
[the connect()-refused / qpair-failed cycle continues against tqpair=0x7f8fec000b90 from 08:20:55.957749 through 08:20:55.969651, still with addr=10.0.0.2, port=4420 and errno = 111]
00:34:04.434 [2024-07-13 08:20:55.969823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:04.434 [2024-07-13 08:20:55.969864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420
00:34:04.434 qpair failed and we were unable to recover it.
00:34:04.434 [2024-07-13 08:20:55.970062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.434 [2024-07-13 08:20:55.970093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:04.434 qpair failed and we were unable to recover it. 00:34:04.434 [2024-07-13 08:20:55.970256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.434 [2024-07-13 08:20:55.970301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:04.434 qpair failed and we were unable to recover it. 00:34:04.434 [2024-07-13 08:20:55.970447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.434 [2024-07-13 08:20:55.970492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:04.434 qpair failed and we were unable to recover it. 00:34:04.434 [2024-07-13 08:20:55.970671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.434 [2024-07-13 08:20:55.970718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:04.434 qpair failed and we were unable to recover it. 00:34:04.434 [2024-07-13 08:20:55.970935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.434 [2024-07-13 08:20:55.970982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:04.434 qpair failed and we were unable to recover it. 00:34:04.434 [2024-07-13 08:20:55.971191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.434 [2024-07-13 08:20:55.971236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:04.434 qpair failed and we were unable to recover it. 00:34:04.434 [2024-07-13 08:20:55.971417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.434 [2024-07-13 08:20:55.971462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:04.434 qpair failed and we were unable to recover it. 00:34:04.434 [2024-07-13 08:20:55.971655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.434 [2024-07-13 08:20:55.971684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:04.434 qpair failed and we were unable to recover it. 00:34:04.434 [2024-07-13 08:20:55.971874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.434 [2024-07-13 08:20:55.971907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:04.434 qpair failed and we were unable to recover it. 00:34:04.434 [2024-07-13 08:20:55.972087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.434 [2024-07-13 08:20:55.972146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:04.434 qpair failed and we were unable to recover it. 
00:34:04.434 [2024-07-13 08:20:55.972301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.434 [2024-07-13 08:20:55.972346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:04.434 qpair failed and we were unable to recover it. 00:34:04.434 [2024-07-13 08:20:55.972596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.434 [2024-07-13 08:20:55.972642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:04.434 qpair failed and we were unable to recover it. 00:34:04.434 [2024-07-13 08:20:55.972777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.434 [2024-07-13 08:20:55.972813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:04.434 qpair failed and we were unable to recover it. 00:34:04.434 [2024-07-13 08:20:55.973020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.434 [2024-07-13 08:20:55.973067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:04.434 qpair failed and we were unable to recover it. 00:34:04.434 [2024-07-13 08:20:55.973209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.434 [2024-07-13 08:20:55.973254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:04.434 qpair failed and we were unable to recover it. 00:34:04.434 [2024-07-13 08:20:55.973459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.434 [2024-07-13 08:20:55.973505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:04.434 qpair failed and we were unable to recover it. 00:34:04.434 [2024-07-13 08:20:55.973708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.434 [2024-07-13 08:20:55.973736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:04.434 qpair failed and we were unable to recover it. 00:34:04.434 [2024-07-13 08:20:55.973889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.434 [2024-07-13 08:20:55.973927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:04.434 qpair failed and we were unable to recover it. 00:34:04.434 [2024-07-13 08:20:55.974113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.434 [2024-07-13 08:20:55.974161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:04.434 qpair failed and we were unable to recover it. 00:34:04.434 [2024-07-13 08:20:55.974336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.434 [2024-07-13 08:20:55.974380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:04.434 qpair failed and we were unable to recover it. 
00:34:04.434 [2024-07-13 08:20:55.974514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.434 [2024-07-13 08:20:55.974542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:04.434 qpair failed and we were unable to recover it. 00:34:04.434 [2024-07-13 08:20:55.974723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.434 [2024-07-13 08:20:55.974754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:04.434 qpair failed and we were unable to recover it. 00:34:04.434 [2024-07-13 08:20:55.974924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.434 [2024-07-13 08:20:55.974980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:04.434 qpair failed and we were unable to recover it. 00:34:04.434 [2024-07-13 08:20:55.975183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.434 [2024-07-13 08:20:55.975227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.434 qpair failed and we were unable to recover it. 00:34:04.434 [2024-07-13 08:20:55.975411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.434 [2024-07-13 08:20:55.975443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.434 qpair failed and we were unable to recover it. 00:34:04.434 [2024-07-13 08:20:55.975612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.434 [2024-07-13 08:20:55.975643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.434 qpair failed and we were unable to recover it. 00:34:04.434 [2024-07-13 08:20:55.975798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.434 [2024-07-13 08:20:55.975825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.434 qpair failed and we were unable to recover it. 00:34:04.434 [2024-07-13 08:20:55.975988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.434 [2024-07-13 08:20:55.976032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.434 qpair failed and we were unable to recover it. 00:34:04.434 [2024-07-13 08:20:55.976199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.434 [2024-07-13 08:20:55.976228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.434 qpair failed and we were unable to recover it. 00:34:04.434 [2024-07-13 08:20:55.976421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.434 [2024-07-13 08:20:55.976451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.434 qpair failed and we were unable to recover it. 
00:34:04.435 [2024-07-13 08:20:55.976606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.435 [2024-07-13 08:20:55.976636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.435 qpair failed and we were unable to recover it. 00:34:04.435 [2024-07-13 08:20:55.976791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.435 [2024-07-13 08:20:55.976821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.435 qpair failed and we were unable to recover it. 00:34:04.435 [2024-07-13 08:20:55.976992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.435 [2024-07-13 08:20:55.977020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.435 qpair failed and we were unable to recover it. 00:34:04.435 [2024-07-13 08:20:55.977192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.435 [2024-07-13 08:20:55.977222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.435 qpair failed and we were unable to recover it. 00:34:04.435 [2024-07-13 08:20:55.977369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.435 [2024-07-13 08:20:55.977413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.435 qpair failed and we were unable to recover it. 00:34:04.435 [2024-07-13 08:20:55.977582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.435 [2024-07-13 08:20:55.977612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.435 qpair failed and we were unable to recover it. 00:34:04.435 [2024-07-13 08:20:55.977775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.435 [2024-07-13 08:20:55.977805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.435 qpair failed and we were unable to recover it. 00:34:04.435 [2024-07-13 08:20:55.977982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.435 [2024-07-13 08:20:55.978010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.435 qpair failed and we were unable to recover it. 00:34:04.435 [2024-07-13 08:20:55.978172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.435 [2024-07-13 08:20:55.978199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.435 qpair failed and we were unable to recover it. 00:34:04.435 [2024-07-13 08:20:55.978417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.435 [2024-07-13 08:20:55.978447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.435 qpair failed and we were unable to recover it. 
00:34:04.435 [2024-07-13 08:20:55.978589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.435 [2024-07-13 08:20:55.978618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.435 qpair failed and we were unable to recover it. 00:34:04.435 [2024-07-13 08:20:55.978780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.435 [2024-07-13 08:20:55.978807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.435 qpair failed and we were unable to recover it. 00:34:04.435 [2024-07-13 08:20:55.978955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.435 [2024-07-13 08:20:55.978983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.435 qpair failed and we were unable to recover it. 00:34:04.435 [2024-07-13 08:20:55.979161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.435 [2024-07-13 08:20:55.979188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.435 qpair failed and we were unable to recover it. 00:34:04.435 [2024-07-13 08:20:55.979430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.435 [2024-07-13 08:20:55.979484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.435 qpair failed and we were unable to recover it. 00:34:04.435 [2024-07-13 08:20:55.979678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.435 [2024-07-13 08:20:55.979708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.435 qpair failed and we were unable to recover it. 00:34:04.435 [2024-07-13 08:20:55.979877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.435 [2024-07-13 08:20:55.979921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.435 qpair failed and we were unable to recover it. 00:34:04.435 [2024-07-13 08:20:55.980075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.435 [2024-07-13 08:20:55.980102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.435 qpair failed and we were unable to recover it. 00:34:04.435 [2024-07-13 08:20:55.980279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.435 [2024-07-13 08:20:55.980309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.435 qpair failed and we were unable to recover it. 00:34:04.435 [2024-07-13 08:20:55.980473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.435 [2024-07-13 08:20:55.980503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.435 qpair failed and we were unable to recover it. 
00:34:04.435 [2024-07-13 08:20:55.980646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.435 [2024-07-13 08:20:55.980676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.435 qpair failed and we were unable to recover it. 00:34:04.435 [2024-07-13 08:20:55.980850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.435 [2024-07-13 08:20:55.980902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.435 qpair failed and we were unable to recover it. 00:34:04.435 [2024-07-13 08:20:55.981058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.435 [2024-07-13 08:20:55.981089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.435 qpair failed and we were unable to recover it. 00:34:04.435 [2024-07-13 08:20:55.981216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.435 [2024-07-13 08:20:55.981260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.435 qpair failed and we were unable to recover it. 00:34:04.435 [2024-07-13 08:20:55.981436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.435 [2024-07-13 08:20:55.981478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.435 qpair failed and we were unable to recover it. 00:34:04.435 [2024-07-13 08:20:55.981644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.435 [2024-07-13 08:20:55.981673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.435 qpair failed and we were unable to recover it. 00:34:04.435 [2024-07-13 08:20:55.981845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.435 [2024-07-13 08:20:55.981889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.435 qpair failed and we were unable to recover it. 00:34:04.435 [2024-07-13 08:20:55.982039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.435 [2024-07-13 08:20:55.982067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.435 qpair failed and we were unable to recover it. 00:34:04.435 [2024-07-13 08:20:55.982222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.435 [2024-07-13 08:20:55.982266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.435 qpair failed and we were unable to recover it. 00:34:04.435 [2024-07-13 08:20:55.982435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.435 [2024-07-13 08:20:55.982466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.435 qpair failed and we were unable to recover it. 
00:34:04.435 [2024-07-13 08:20:55.982657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.435 [2024-07-13 08:20:55.982687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.435 qpair failed and we were unable to recover it. 00:34:04.435 [2024-07-13 08:20:55.982837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.435 [2024-07-13 08:20:55.982863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.435 qpair failed and we were unable to recover it. 00:34:04.435 [2024-07-13 08:20:55.983054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.435 [2024-07-13 08:20:55.983081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.435 qpair failed and we were unable to recover it. 00:34:04.435 [2024-07-13 08:20:55.983232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.435 [2024-07-13 08:20:55.983262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.435 qpair failed and we were unable to recover it. 00:34:04.435 [2024-07-13 08:20:55.983421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.435 [2024-07-13 08:20:55.983450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.435 qpair failed and we were unable to recover it. 00:34:04.435 [2024-07-13 08:20:55.983641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.435 [2024-07-13 08:20:55.983670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.435 qpair failed and we were unable to recover it. 00:34:04.435 [2024-07-13 08:20:55.983880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.435 [2024-07-13 08:20:55.983924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.435 qpair failed and we were unable to recover it. 00:34:04.435 [2024-07-13 08:20:55.984080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.435 [2024-07-13 08:20:55.984107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.435 qpair failed and we were unable to recover it. 00:34:04.435 [2024-07-13 08:20:55.984263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.435 [2024-07-13 08:20:55.984290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.435 qpair failed and we were unable to recover it. 00:34:04.435 [2024-07-13 08:20:55.984475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.435 [2024-07-13 08:20:55.984501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.435 qpair failed and we were unable to recover it. 
00:34:04.435 [2024-07-13 08:20:55.984672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.435 [2024-07-13 08:20:55.984701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.436 qpair failed and we were unable to recover it. 00:34:04.436 [2024-07-13 08:20:55.984891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.436 [2024-07-13 08:20:55.984919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.436 qpair failed and we were unable to recover it. 00:34:04.436 [2024-07-13 08:20:55.985065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.436 [2024-07-13 08:20:55.985091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.436 qpair failed and we were unable to recover it. 00:34:04.436 [2024-07-13 08:20:55.985282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.436 [2024-07-13 08:20:55.985308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.436 qpair failed and we were unable to recover it. 00:34:04.436 [2024-07-13 08:20:55.985552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.436 [2024-07-13 08:20:55.985603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.436 qpair failed and we were unable to recover it. 00:34:04.436 [2024-07-13 08:20:55.985802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.436 [2024-07-13 08:20:55.985832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.436 qpair failed and we were unable to recover it. 00:34:04.436 [2024-07-13 08:20:55.986010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.436 [2024-07-13 08:20:55.986038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.436 qpair failed and we were unable to recover it. 00:34:04.436 [2024-07-13 08:20:55.986195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.436 [2024-07-13 08:20:55.986222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.436 qpair failed and we were unable to recover it. 00:34:04.436 [2024-07-13 08:20:55.986401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.436 [2024-07-13 08:20:55.986428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.436 qpair failed and we were unable to recover it. 00:34:04.436 [2024-07-13 08:20:55.986608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.436 [2024-07-13 08:20:55.986639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.436 qpair failed and we were unable to recover it. 
00:34:04.436 [2024-07-13 08:20:55.986831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.436 [2024-07-13 08:20:55.986861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.436 qpair failed and we were unable to recover it. 00:34:04.436 [2024-07-13 08:20:55.987065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.436 [2024-07-13 08:20:55.987093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.436 qpair failed and we were unable to recover it. 00:34:04.436 [2024-07-13 08:20:55.987273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.436 [2024-07-13 08:20:55.987302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.436 qpair failed and we were unable to recover it. 00:34:04.436 [2024-07-13 08:20:55.987668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.436 [2024-07-13 08:20:55.987718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.436 qpair failed and we were unable to recover it. 00:34:04.436 [2024-07-13 08:20:55.987924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.436 [2024-07-13 08:20:55.987951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.436 qpair failed and we were unable to recover it. 00:34:04.436 [2024-07-13 08:20:55.988086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.436 [2024-07-13 08:20:55.988112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.436 qpair failed and we were unable to recover it. 00:34:04.436 [2024-07-13 08:20:55.988262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.436 [2024-07-13 08:20:55.988305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.436 qpair failed and we were unable to recover it. 00:34:04.436 [2024-07-13 08:20:55.988496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.436 [2024-07-13 08:20:55.988525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.436 qpair failed and we were unable to recover it. 00:34:04.436 [2024-07-13 08:20:55.988687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.436 [2024-07-13 08:20:55.988727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.436 qpair failed and we were unable to recover it. 00:34:04.436 [2024-07-13 08:20:55.988907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.436 [2024-07-13 08:20:55.988934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.436 qpair failed and we were unable to recover it. 
00:34:04.436 [2024-07-13 08:20:55.989087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.436 [2024-07-13 08:20:55.989114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.436 qpair failed and we were unable to recover it. 00:34:04.436 [2024-07-13 08:20:55.989259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.436 [2024-07-13 08:20:55.989287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.436 qpair failed and we were unable to recover it. 00:34:04.436 [2024-07-13 08:20:55.989482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.436 [2024-07-13 08:20:55.989516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.436 qpair failed and we were unable to recover it. 00:34:04.436 [2024-07-13 08:20:55.989656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.436 [2024-07-13 08:20:55.989686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.436 qpair failed and we were unable to recover it. 00:34:04.436 [2024-07-13 08:20:55.989888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.436 [2024-07-13 08:20:55.989934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.436 qpair failed and we were unable to recover it. 00:34:04.436 [2024-07-13 08:20:55.990085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.436 [2024-07-13 08:20:55.990113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.436 qpair failed and we were unable to recover it. 00:34:04.436 [2024-07-13 08:20:55.990233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.436 [2024-07-13 08:20:55.990259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.436 qpair failed and we were unable to recover it. 00:34:04.436 [2024-07-13 08:20:55.990435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.436 [2024-07-13 08:20:55.990461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.436 qpair failed and we were unable to recover it. 00:34:04.436 [2024-07-13 08:20:55.990732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.436 [2024-07-13 08:20:55.990763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.436 qpair failed and we were unable to recover it. 00:34:04.436 [2024-07-13 08:20:55.990955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.436 [2024-07-13 08:20:55.990982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.436 qpair failed and we were unable to recover it. 
00:34:04.436 [2024-07-13 08:20:55.991162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.436 [2024-07-13 08:20:55.991189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.436 qpair failed and we were unable to recover it. 00:34:04.436 [2024-07-13 08:20:55.991315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.436 [2024-07-13 08:20:55.991343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.436 qpair failed and we were unable to recover it. 00:34:04.436 [2024-07-13 08:20:55.991514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.436 [2024-07-13 08:20:55.991543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.436 qpair failed and we were unable to recover it. 00:34:04.436 [2024-07-13 08:20:55.991709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.436 [2024-07-13 08:20:55.991738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.436 qpair failed and we were unable to recover it. 00:34:04.436 [2024-07-13 08:20:55.991923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.436 [2024-07-13 08:20:55.991952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.436 qpair failed and we were unable to recover it. 00:34:04.436 [2024-07-13 08:20:55.992106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.436 [2024-07-13 08:20:55.992133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.436 qpair failed and we were unable to recover it. 00:34:04.436 [2024-07-13 08:20:55.992266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.436 [2024-07-13 08:20:55.992293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.436 qpair failed and we were unable to recover it. 00:34:04.436 [2024-07-13 08:20:55.992453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.436 [2024-07-13 08:20:55.992483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.436 qpair failed and we were unable to recover it. 00:34:04.436 [2024-07-13 08:20:55.992623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.436 [2024-07-13 08:20:55.992667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.436 qpair failed and we were unable to recover it. 00:34:04.436 [2024-07-13 08:20:55.992833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.436 [2024-07-13 08:20:55.992860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.436 qpair failed and we were unable to recover it. 
00:34:04.436 [2024-07-13 08:20:55.993020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.436 [2024-07-13 08:20:55.993046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.436 qpair failed and we were unable to recover it. 00:34:04.436 [2024-07-13 08:20:55.993199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.437 [2024-07-13 08:20:55.993226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.437 qpair failed and we were unable to recover it. 00:34:04.437 [2024-07-13 08:20:55.993376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.437 [2024-07-13 08:20:55.993403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.437 qpair failed and we were unable to recover it. 00:34:04.437 [2024-07-13 08:20:55.993556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.437 [2024-07-13 08:20:55.993585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.437 qpair failed and we were unable to recover it. 00:34:04.437 [2024-07-13 08:20:55.993752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.437 [2024-07-13 08:20:55.993782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.437 qpair failed and we were unable to recover it. 00:34:04.437 [2024-07-13 08:20:55.993953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.437 [2024-07-13 08:20:55.993981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.437 qpair failed and we were unable to recover it. 00:34:04.437 [2024-07-13 08:20:55.994133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.437 [2024-07-13 08:20:55.994159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.437 qpair failed and we were unable to recover it. 00:34:04.437 [2024-07-13 08:20:55.994279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.437 [2024-07-13 08:20:55.994307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.437 qpair failed and we were unable to recover it. 00:34:04.437 [2024-07-13 08:20:55.994488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.437 [2024-07-13 08:20:55.994518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.437 qpair failed and we were unable to recover it. 00:34:04.437 [2024-07-13 08:20:55.994675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.437 [2024-07-13 08:20:55.994702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.437 qpair failed and we were unable to recover it. 
00:34:04.437 [2024-07-13 08:20:55.994829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.437 [2024-07-13 08:20:55.994857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.437 qpair failed and we were unable to recover it. 00:34:04.437 [2024-07-13 08:20:55.995041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.437 [2024-07-13 08:20:55.995068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.437 qpair failed and we were unable to recover it. 00:34:04.437 [2024-07-13 08:20:55.995212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.437 [2024-07-13 08:20:55.995239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.437 qpair failed and we were unable to recover it. 00:34:04.437 [2024-07-13 08:20:55.995394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.437 [2024-07-13 08:20:55.995422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.437 qpair failed and we were unable to recover it. 00:34:04.437 [2024-07-13 08:20:55.995545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.437 [2024-07-13 08:20:55.995572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.437 qpair failed and we were unable to recover it. 00:34:04.437 [2024-07-13 08:20:55.995726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.437 [2024-07-13 08:20:55.995754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.437 qpair failed and we were unable to recover it. 00:34:04.437 [2024-07-13 08:20:55.995900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.437 [2024-07-13 08:20:55.995927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.437 qpair failed and we were unable to recover it. 00:34:04.437 [2024-07-13 08:20:55.996102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.437 [2024-07-13 08:20:55.996132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.437 qpair failed and we were unable to recover it. 00:34:04.437 [2024-07-13 08:20:55.996328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.437 [2024-07-13 08:20:55.996355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.437 qpair failed and we were unable to recover it. 00:34:04.437 [2024-07-13 08:20:55.996510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.437 [2024-07-13 08:20:55.996538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.437 qpair failed and we were unable to recover it. 
00:34:04.437 [2024-07-13 08:20:55.996715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:04.437 [2024-07-13 08:20:55.996742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420
00:34:04.437 qpair failed and we were unable to recover it.
00:34:04.437 [... the same three-line error sequence repeats for every reconnect attempt from 08:20:55.996 through 08:20:56.034 ...]
00:34:04.442 [2024-07-13 08:20:56.034587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:04.442 [2024-07-13 08:20:56.034614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420
00:34:04.442 qpair failed and we were unable to recover it.
00:34:04.442 [2024-07-13 08:20:56.034763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.442 [2024-07-13 08:20:56.034790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.442 qpair failed and we were unable to recover it. 00:34:04.443 [2024-07-13 08:20:56.034946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.443 [2024-07-13 08:20:56.034974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.443 qpair failed and we were unable to recover it. 00:34:04.443 [2024-07-13 08:20:56.035119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.443 [2024-07-13 08:20:56.035146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.443 qpair failed and we were unable to recover it. 00:34:04.443 [2024-07-13 08:20:56.035356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.443 [2024-07-13 08:20:56.035386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.443 qpair failed and we were unable to recover it. 00:34:04.443 [2024-07-13 08:20:56.035586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.443 [2024-07-13 08:20:56.035614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.443 qpair failed and we were unable to recover it. 00:34:04.443 [2024-07-13 08:20:56.035765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.443 [2024-07-13 08:20:56.035793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.443 qpair failed and we were unable to recover it. 00:34:04.443 [2024-07-13 08:20:56.035960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.443 [2024-07-13 08:20:56.035988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.443 qpair failed and we were unable to recover it. 00:34:04.443 [2024-07-13 08:20:56.036113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.443 [2024-07-13 08:20:56.036141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.443 qpair failed and we were unable to recover it. 00:34:04.443 [2024-07-13 08:20:56.036262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.443 [2024-07-13 08:20:56.036288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.443 qpair failed and we were unable to recover it. 00:34:04.443 [2024-07-13 08:20:56.036465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.443 [2024-07-13 08:20:56.036507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.443 qpair failed and we were unable to recover it. 
00:34:04.443 [2024-07-13 08:20:56.036711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.443 [2024-07-13 08:20:56.036739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.443 qpair failed and we were unable to recover it. 00:34:04.443 [2024-07-13 08:20:56.036886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.443 [2024-07-13 08:20:56.036913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.443 qpair failed and we were unable to recover it. 00:34:04.443 [2024-07-13 08:20:56.037090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.443 [2024-07-13 08:20:56.037117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.443 qpair failed and we were unable to recover it. 00:34:04.443 [2024-07-13 08:20:56.037242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.443 [2024-07-13 08:20:56.037269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.443 qpair failed and we were unable to recover it. 00:34:04.443 [2024-07-13 08:20:56.037418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.443 [2024-07-13 08:20:56.037445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.443 qpair failed and we were unable to recover it. 00:34:04.443 [2024-07-13 08:20:56.037609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.443 [2024-07-13 08:20:56.037640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.443 qpair failed and we were unable to recover it. 00:34:04.443 [2024-07-13 08:20:56.037822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.443 [2024-07-13 08:20:56.037849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.443 qpair failed and we were unable to recover it. 00:34:04.443 [2024-07-13 08:20:56.038030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.443 [2024-07-13 08:20:56.038058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.443 qpair failed and we were unable to recover it. 00:34:04.443 [2024-07-13 08:20:56.038211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.443 [2024-07-13 08:20:56.038238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.443 qpair failed and we were unable to recover it. 00:34:04.443 [2024-07-13 08:20:56.038363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.443 [2024-07-13 08:20:56.038395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.443 qpair failed and we were unable to recover it. 
00:34:04.443 [2024-07-13 08:20:56.038519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.443 [2024-07-13 08:20:56.038550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.443 qpair failed and we were unable to recover it. 00:34:04.443 [2024-07-13 08:20:56.038721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.443 [2024-07-13 08:20:56.038765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.443 qpair failed and we were unable to recover it. 00:34:04.443 [2024-07-13 08:20:56.038938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.443 [2024-07-13 08:20:56.038966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.443 qpair failed and we were unable to recover it. 00:34:04.443 [2024-07-13 08:20:56.039116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.443 [2024-07-13 08:20:56.039144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.443 qpair failed and we were unable to recover it. 00:34:04.443 [2024-07-13 08:20:56.039314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.443 [2024-07-13 08:20:56.039342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.443 qpair failed and we were unable to recover it. 00:34:04.443 [2024-07-13 08:20:56.039491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.443 [2024-07-13 08:20:56.039518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.443 qpair failed and we were unable to recover it. 00:34:04.443 [2024-07-13 08:20:56.039640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.443 [2024-07-13 08:20:56.039668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.443 qpair failed and we were unable to recover it. 00:34:04.443 [2024-07-13 08:20:56.039821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.443 [2024-07-13 08:20:56.039864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.443 qpair failed and we were unable to recover it. 00:34:04.443 [2024-07-13 08:20:56.040013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.443 [2024-07-13 08:20:56.040041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.443 qpair failed and we were unable to recover it. 00:34:04.443 [2024-07-13 08:20:56.040220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.443 [2024-07-13 08:20:56.040247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.443 qpair failed and we were unable to recover it. 
00:34:04.443 [2024-07-13 08:20:56.040380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.443 [2024-07-13 08:20:56.040407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.443 qpair failed and we were unable to recover it. 00:34:04.443 [2024-07-13 08:20:56.040592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.443 [2024-07-13 08:20:56.040619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.443 qpair failed and we were unable to recover it. 00:34:04.443 [2024-07-13 08:20:56.040767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.443 [2024-07-13 08:20:56.040794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.443 qpair failed and we were unable to recover it. 00:34:04.443 [2024-07-13 08:20:56.040920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.443 [2024-07-13 08:20:56.040948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.443 qpair failed and we were unable to recover it. 00:34:04.443 [2024-07-13 08:20:56.041135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.443 [2024-07-13 08:20:56.041161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.443 qpair failed and we were unable to recover it. 00:34:04.443 [2024-07-13 08:20:56.041339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.443 [2024-07-13 08:20:56.041367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.443 qpair failed and we were unable to recover it. 00:34:04.443 [2024-07-13 08:20:56.041509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.443 [2024-07-13 08:20:56.041536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.443 qpair failed and we were unable to recover it. 00:34:04.443 [2024-07-13 08:20:56.041698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.443 [2024-07-13 08:20:56.041725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.443 qpair failed and we were unable to recover it. 00:34:04.443 [2024-07-13 08:20:56.041876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.443 [2024-07-13 08:20:56.041903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.443 qpair failed and we were unable to recover it. 00:34:04.443 [2024-07-13 08:20:56.042029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.443 [2024-07-13 08:20:56.042057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.443 qpair failed and we were unable to recover it. 
00:34:04.443 [2024-07-13 08:20:56.042199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.443 [2024-07-13 08:20:56.042226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.443 qpair failed and we were unable to recover it. 00:34:04.443 [2024-07-13 08:20:56.042379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.443 [2024-07-13 08:20:56.042406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.443 qpair failed and we were unable to recover it. 00:34:04.444 [2024-07-13 08:20:56.043176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.444 [2024-07-13 08:20:56.043213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.444 qpair failed and we were unable to recover it. 00:34:04.444 [2024-07-13 08:20:56.043396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.444 [2024-07-13 08:20:56.043423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.444 qpair failed and we were unable to recover it. 00:34:04.444 [2024-07-13 08:20:56.043572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.444 [2024-07-13 08:20:56.043598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.444 qpair failed and we were unable to recover it. 00:34:04.444 [2024-07-13 08:20:56.043767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.444 [2024-07-13 08:20:56.043796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.444 qpair failed and we were unable to recover it. 00:34:04.444 [2024-07-13 08:20:56.043970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.444 [2024-07-13 08:20:56.043997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.444 qpair failed and we were unable to recover it. 00:34:04.444 [2024-07-13 08:20:56.044152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.444 [2024-07-13 08:20:56.044179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.444 qpair failed and we were unable to recover it. 00:34:04.444 [2024-07-13 08:20:56.044364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.444 [2024-07-13 08:20:56.044390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.444 qpair failed and we were unable to recover it. 00:34:04.444 [2024-07-13 08:20:56.044512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.444 [2024-07-13 08:20:56.044537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.444 qpair failed and we were unable to recover it. 
00:34:04.444 [2024-07-13 08:20:56.044718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.444 [2024-07-13 08:20:56.044745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.444 qpair failed and we were unable to recover it. 00:34:04.444 [2024-07-13 08:20:56.044893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.444 [2024-07-13 08:20:56.044920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.444 qpair failed and we were unable to recover it. 00:34:04.444 [2024-07-13 08:20:56.045066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.444 [2024-07-13 08:20:56.045092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.444 qpair failed and we were unable to recover it. 00:34:04.444 [2024-07-13 08:20:56.045248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.444 [2024-07-13 08:20:56.045274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.444 qpair failed and we were unable to recover it. 00:34:04.444 [2024-07-13 08:20:56.045424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.444 [2024-07-13 08:20:56.045450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.444 qpair failed and we were unable to recover it. 00:34:04.444 [2024-07-13 08:20:56.045636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.444 [2024-07-13 08:20:56.045662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.444 qpair failed and we were unable to recover it. 00:34:04.444 [2024-07-13 08:20:56.045790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.444 [2024-07-13 08:20:56.045818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.444 qpair failed and we were unable to recover it. 00:34:04.444 [2024-07-13 08:20:56.045955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.444 [2024-07-13 08:20:56.045982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.444 qpair failed and we were unable to recover it. 00:34:04.444 [2024-07-13 08:20:56.046163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.444 [2024-07-13 08:20:56.046190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.444 qpair failed and we were unable to recover it. 00:34:04.444 [2024-07-13 08:20:56.046347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.444 [2024-07-13 08:20:56.046377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.444 qpair failed and we were unable to recover it. 
00:34:04.444 [2024-07-13 08:20:56.046503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.444 [2024-07-13 08:20:56.046528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.444 qpair failed and we were unable to recover it. 00:34:04.444 [2024-07-13 08:20:56.046652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.444 [2024-07-13 08:20:56.046679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.444 qpair failed and we were unable to recover it. 00:34:04.444 [2024-07-13 08:20:56.046819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.444 [2024-07-13 08:20:56.046848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.444 qpair failed and we were unable to recover it. 00:34:04.444 [2024-07-13 08:20:56.047047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.444 [2024-07-13 08:20:56.047073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.444 qpair failed and we were unable to recover it. 00:34:04.444 [2024-07-13 08:20:56.047225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.444 [2024-07-13 08:20:56.047252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.444 qpair failed and we were unable to recover it. 00:34:04.444 [2024-07-13 08:20:56.047407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.444 [2024-07-13 08:20:56.047432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.444 qpair failed and we were unable to recover it. 00:34:04.444 [2024-07-13 08:20:56.047554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.444 [2024-07-13 08:20:56.047580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.444 qpair failed and we were unable to recover it. 00:34:04.444 [2024-07-13 08:20:56.047708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.444 [2024-07-13 08:20:56.047734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.444 qpair failed and we were unable to recover it. 00:34:04.444 [2024-07-13 08:20:56.047923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.444 [2024-07-13 08:20:56.047949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.444 qpair failed and we were unable to recover it. 00:34:04.444 [2024-07-13 08:20:56.048094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.444 [2024-07-13 08:20:56.048119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.444 qpair failed and we were unable to recover it. 
00:34:04.444 [2024-07-13 08:20:56.048314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.444 [2024-07-13 08:20:56.048344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.444 qpair failed and we were unable to recover it. 00:34:04.444 [2024-07-13 08:20:56.048513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.444 [2024-07-13 08:20:56.048539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.444 qpair failed and we were unable to recover it. 00:34:04.444 [2024-07-13 08:20:56.048691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.444 [2024-07-13 08:20:56.048717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.444 qpair failed and we were unable to recover it. 00:34:04.444 [2024-07-13 08:20:56.048878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.444 [2024-07-13 08:20:56.048915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.444 qpair failed and we were unable to recover it. 00:34:04.444 [2024-07-13 08:20:56.049095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.444 [2024-07-13 08:20:56.049121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.444 qpair failed and we were unable to recover it. 00:34:04.444 [2024-07-13 08:20:56.049275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.444 [2024-07-13 08:20:56.049301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.444 qpair failed and we were unable to recover it. 00:34:04.444 [2024-07-13 08:20:56.049453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.444 [2024-07-13 08:20:56.049481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.444 qpair failed and we were unable to recover it. 00:34:04.444 [2024-07-13 08:20:56.049639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.444 [2024-07-13 08:20:56.049665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.444 qpair failed and we were unable to recover it. 00:34:04.444 [2024-07-13 08:20:56.049826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.444 [2024-07-13 08:20:56.049855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.444 qpair failed and we were unable to recover it. 00:34:04.444 [2024-07-13 08:20:56.050039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.444 [2024-07-13 08:20:56.050066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.444 qpair failed and we were unable to recover it. 
00:34:04.444 [2024-07-13 08:20:56.050222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.444 [2024-07-13 08:20:56.050248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.444 qpair failed and we were unable to recover it. 00:34:04.444 [2024-07-13 08:20:56.050424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.444 [2024-07-13 08:20:56.050453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.444 qpair failed and we were unable to recover it. 00:34:04.444 [2024-07-13 08:20:56.050582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.444 [2024-07-13 08:20:56.050610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.444 qpair failed and we were unable to recover it. 00:34:04.445 [2024-07-13 08:20:56.050774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.445 [2024-07-13 08:20:56.050803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.445 qpair failed and we were unable to recover it. 00:34:04.445 [2024-07-13 08:20:56.050977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.445 [2024-07-13 08:20:56.051004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.445 qpair failed and we were unable to recover it. 00:34:04.445 [2024-07-13 08:20:56.051147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.445 [2024-07-13 08:20:56.051173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.445 qpair failed and we were unable to recover it. 00:34:04.445 [2024-07-13 08:20:56.051370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.445 [2024-07-13 08:20:56.051400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.445 qpair failed and we were unable to recover it. 00:34:04.445 [2024-07-13 08:20:56.051533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.445 [2024-07-13 08:20:56.051561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.445 qpair failed and we were unable to recover it. 00:34:04.445 [2024-07-13 08:20:56.051748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.445 [2024-07-13 08:20:56.051777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.445 qpair failed and we were unable to recover it. 00:34:04.445 [2024-07-13 08:20:56.051958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.445 [2024-07-13 08:20:56.051985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.445 qpair failed and we were unable to recover it. 
00:34:04.445 [2024-07-13 08:20:56.052138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.445 [2024-07-13 08:20:56.052164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.445 qpair failed and we were unable to recover it. 00:34:04.445 [2024-07-13 08:20:56.052316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.445 [2024-07-13 08:20:56.052342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.445 qpair failed and we were unable to recover it. 00:34:04.445 [2024-07-13 08:20:56.052529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.445 [2024-07-13 08:20:56.052555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.445 qpair failed and we were unable to recover it. 00:34:04.445 [2024-07-13 08:20:56.052712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.445 [2024-07-13 08:20:56.052738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.445 qpair failed and we were unable to recover it. 00:34:04.445 [2024-07-13 08:20:56.052922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.445 [2024-07-13 08:20:56.052949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.445 qpair failed and we were unable to recover it. 00:34:04.445 [2024-07-13 08:20:56.053101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.445 [2024-07-13 08:20:56.053138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.445 qpair failed and we were unable to recover it. 00:34:04.445 [2024-07-13 08:20:56.053337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.445 [2024-07-13 08:20:56.053366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.445 qpair failed and we were unable to recover it. 00:34:04.445 [2024-07-13 08:20:56.053538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.445 [2024-07-13 08:20:56.053564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.445 qpair failed and we were unable to recover it. 00:34:04.445 [2024-07-13 08:20:56.053684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.445 [2024-07-13 08:20:56.053711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.445 qpair failed and we were unable to recover it. 00:34:04.445 [2024-07-13 08:20:56.053871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.445 [2024-07-13 08:20:56.053903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.445 qpair failed and we were unable to recover it. 
00:34:04.445 [2024-07-13 08:20:56.054054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.445 [2024-07-13 08:20:56.054080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.445 qpair failed and we were unable to recover it. 00:34:04.445 [2024-07-13 08:20:56.054225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.445 [2024-07-13 08:20:56.054251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.445 qpair failed and we were unable to recover it. 00:34:04.445 [2024-07-13 08:20:56.054414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.445 [2024-07-13 08:20:56.054441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.445 qpair failed and we were unable to recover it. 00:34:04.445 [2024-07-13 08:20:56.054589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.445 [2024-07-13 08:20:56.054614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.445 qpair failed and we were unable to recover it. 00:34:04.445 [2024-07-13 08:20:56.054790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.445 [2024-07-13 08:20:56.054816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.445 qpair failed and we were unable to recover it. 00:34:04.445 [2024-07-13 08:20:56.055019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.445 [2024-07-13 08:20:56.055045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.445 qpair failed and we were unable to recover it. 00:34:04.445 [2024-07-13 08:20:56.055198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.445 [2024-07-13 08:20:56.055224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.445 qpair failed and we were unable to recover it. 00:34:04.445 [2024-07-13 08:20:56.055351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.445 [2024-07-13 08:20:56.055377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.445 qpair failed and we were unable to recover it. 00:34:04.445 [2024-07-13 08:20:56.055531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.445 [2024-07-13 08:20:56.055573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.445 qpair failed and we were unable to recover it. 00:34:04.445 [2024-07-13 08:20:56.055741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.445 [2024-07-13 08:20:56.055766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.445 qpair failed and we were unable to recover it. 
00:34:04.445 [2024-07-13 08:20:56.055919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.445 [2024-07-13 08:20:56.055946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.445 qpair failed and we were unable to recover it. 00:34:04.445 [2024-07-13 08:20:56.056100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.445 [2024-07-13 08:20:56.056126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.445 qpair failed and we were unable to recover it. 00:34:04.445 [2024-07-13 08:20:56.056306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.445 [2024-07-13 08:20:56.056332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.445 qpair failed and we were unable to recover it. 00:34:04.445 [2024-07-13 08:20:56.056513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.445 [2024-07-13 08:20:56.056542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.445 qpair failed and we were unable to recover it. 00:34:04.445 [2024-07-13 08:20:56.056688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.445 [2024-07-13 08:20:56.056717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.445 qpair failed and we were unable to recover it. 00:34:04.445 [2024-07-13 08:20:56.056887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.445 [2024-07-13 08:20:56.056913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.445 qpair failed and we were unable to recover it. 00:34:04.445 [2024-07-13 08:20:56.057089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.445 [2024-07-13 08:20:56.057114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.445 qpair failed and we were unable to recover it. 00:34:04.445 [2024-07-13 08:20:56.057242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.446 [2024-07-13 08:20:56.057269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.446 qpair failed and we were unable to recover it. 00:34:04.446 [2024-07-13 08:20:56.057398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.446 [2024-07-13 08:20:56.057424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.446 qpair failed and we were unable to recover it. 00:34:04.446 [2024-07-13 08:20:56.057591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.446 [2024-07-13 08:20:56.057619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.446 qpair failed and we were unable to recover it. 
00:34:04.446 [2024-07-13 08:20:56.057786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.446 [2024-07-13 08:20:56.057815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.446 qpair failed and we were unable to recover it. 00:34:04.446 [2024-07-13 08:20:56.057969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.446 [2024-07-13 08:20:56.057995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.446 qpair failed and we were unable to recover it. 00:34:04.446 [2024-07-13 08:20:56.058142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.446 [2024-07-13 08:20:56.058177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.446 qpair failed and we were unable to recover it. 00:34:04.446 [2024-07-13 08:20:56.058324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.446 [2024-07-13 08:20:56.058350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.446 qpair failed and we were unable to recover it. 00:34:04.446 [2024-07-13 08:20:56.058480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.446 [2024-07-13 08:20:56.058507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.446 qpair failed and we were unable to recover it. 00:34:04.446 [2024-07-13 08:20:56.058701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.446 [2024-07-13 08:20:56.058730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.446 qpair failed and we were unable to recover it. 00:34:04.446 [2024-07-13 08:20:56.058943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.446 [2024-07-13 08:20:56.058970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.446 qpair failed and we were unable to recover it. 00:34:04.446 [2024-07-13 08:20:56.059116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.446 [2024-07-13 08:20:56.059142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.446 qpair failed and we were unable to recover it. 00:34:04.446 [2024-07-13 08:20:56.059272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.446 [2024-07-13 08:20:56.059299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.446 qpair failed and we were unable to recover it. 00:34:04.446 [2024-07-13 08:20:56.059475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.446 [2024-07-13 08:20:56.059501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.446 qpair failed and we were unable to recover it. 
00:34:04.446 [2024-07-13 08:20:56.059649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:04.446 [2024-07-13 08:20:56.059675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420
00:34:04.446 qpair failed and we were unable to recover it.
00:34:04.446 [2024-07-13 08:20:56.059820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:04.446 [2024-07-13 08:20:56.059846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420
00:34:04.446 qpair failed and we were unable to recover it.
[... the same three-line failure record repeats for every retry from 08:20:56.059 through 08:20:56.099 with only the timestamps advancing: connect() failed, errno = 111 (ECONNREFUSED), followed by the sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it." ...]
00:34:04.451 [2024-07-13 08:20:56.099728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:04.451 [2024-07-13 08:20:56.099756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420
00:34:04.451 qpair failed and we were unable to recover it.
00:34:04.451 [2024-07-13 08:20:56.099936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.451 [2024-07-13 08:20:56.099963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.451 qpair failed and we were unable to recover it. 00:34:04.451 [2024-07-13 08:20:56.100131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.451 [2024-07-13 08:20:56.100159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.451 qpair failed and we were unable to recover it. 00:34:04.451 [2024-07-13 08:20:56.100297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.451 [2024-07-13 08:20:56.100326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.451 qpair failed and we were unable to recover it. 00:34:04.451 [2024-07-13 08:20:56.100491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.451 [2024-07-13 08:20:56.100517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.451 qpair failed and we were unable to recover it. 00:34:04.451 [2024-07-13 08:20:56.100679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.451 [2024-07-13 08:20:56.100707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.451 qpair failed and we were unable to recover it. 00:34:04.451 [2024-07-13 08:20:56.100902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.451 [2024-07-13 08:20:56.100932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.451 qpair failed and we were unable to recover it. 00:34:04.451 [2024-07-13 08:20:56.101104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.451 [2024-07-13 08:20:56.101129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.451 qpair failed and we were unable to recover it. 00:34:04.451 [2024-07-13 08:20:56.101300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.451 [2024-07-13 08:20:56.101328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.451 qpair failed and we were unable to recover it. 00:34:04.451 [2024-07-13 08:20:56.101522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.451 [2024-07-13 08:20:56.101550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.451 qpair failed and we were unable to recover it. 00:34:04.451 [2024-07-13 08:20:56.101748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.451 [2024-07-13 08:20:56.101774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.451 qpair failed and we were unable to recover it. 
00:34:04.451 [2024-07-13 08:20:56.101916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.451 [2024-07-13 08:20:56.101945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.451 qpair failed and we were unable to recover it. 00:34:04.451 [2024-07-13 08:20:56.102110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.451 [2024-07-13 08:20:56.102139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.451 qpair failed and we were unable to recover it. 00:34:04.451 [2024-07-13 08:20:56.102302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.451 [2024-07-13 08:20:56.102327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.451 qpair failed and we were unable to recover it. 00:34:04.451 [2024-07-13 08:20:56.102504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.451 [2024-07-13 08:20:56.102534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.451 qpair failed and we were unable to recover it. 00:34:04.451 [2024-07-13 08:20:56.102727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.451 [2024-07-13 08:20:56.102756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.451 qpair failed and we were unable to recover it. 00:34:04.451 [2024-07-13 08:20:56.102966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.451 [2024-07-13 08:20:56.102993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.451 qpair failed and we were unable to recover it. 00:34:04.451 [2024-07-13 08:20:56.103161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.452 [2024-07-13 08:20:56.103189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.452 qpair failed and we were unable to recover it. 00:34:04.452 [2024-07-13 08:20:56.103324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.452 [2024-07-13 08:20:56.103353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.452 qpair failed and we were unable to recover it. 00:34:04.452 [2024-07-13 08:20:56.103546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.452 [2024-07-13 08:20:56.103572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.452 qpair failed and we were unable to recover it. 00:34:04.452 [2024-07-13 08:20:56.103716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.452 [2024-07-13 08:20:56.103746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.452 qpair failed and we were unable to recover it. 
00:34:04.452 [2024-07-13 08:20:56.103940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.452 [2024-07-13 08:20:56.103967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.452 qpair failed and we were unable to recover it. 00:34:04.452 [2024-07-13 08:20:56.104122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.452 [2024-07-13 08:20:56.104148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.452 qpair failed and we were unable to recover it. 00:34:04.452 [2024-07-13 08:20:56.104345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.452 [2024-07-13 08:20:56.104373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.452 qpair failed and we were unable to recover it. 00:34:04.452 [2024-07-13 08:20:56.104516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.452 [2024-07-13 08:20:56.104545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.452 qpair failed and we were unable to recover it. 00:34:04.452 [2024-07-13 08:20:56.104706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.452 [2024-07-13 08:20:56.104731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.452 qpair failed and we were unable to recover it. 00:34:04.452 [2024-07-13 08:20:56.104895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.452 [2024-07-13 08:20:56.104924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.452 qpair failed and we were unable to recover it. 00:34:04.452 [2024-07-13 08:20:56.105065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.452 [2024-07-13 08:20:56.105098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.452 qpair failed and we were unable to recover it. 00:34:04.452 [2024-07-13 08:20:56.105245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.452 [2024-07-13 08:20:56.105271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.452 qpair failed and we were unable to recover it. 00:34:04.452 [2024-07-13 08:20:56.105424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.452 [2024-07-13 08:20:56.105450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.452 qpair failed and we were unable to recover it. 00:34:04.452 [2024-07-13 08:20:56.105603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.452 [2024-07-13 08:20:56.105629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.452 qpair failed and we were unable to recover it. 
00:34:04.452 [2024-07-13 08:20:56.105756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.452 [2024-07-13 08:20:56.105782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.452 qpair failed and we were unable to recover it. 00:34:04.452 [2024-07-13 08:20:56.105907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.452 [2024-07-13 08:20:56.105933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.452 qpair failed and we were unable to recover it. 00:34:04.452 [2024-07-13 08:20:56.106113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.452 [2024-07-13 08:20:56.106139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.452 qpair failed and we were unable to recover it. 00:34:04.452 [2024-07-13 08:20:56.106295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.452 [2024-07-13 08:20:56.106321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.452 qpair failed and we were unable to recover it. 00:34:04.452 [2024-07-13 08:20:56.106449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.452 [2024-07-13 08:20:56.106475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.452 qpair failed and we were unable to recover it. 00:34:04.452 [2024-07-13 08:20:56.106628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.452 [2024-07-13 08:20:56.106654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.452 qpair failed and we were unable to recover it. 00:34:04.452 [2024-07-13 08:20:56.106842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.452 [2024-07-13 08:20:56.106874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.452 qpair failed and we were unable to recover it. 00:34:04.452 [2024-07-13 08:20:56.107026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.452 [2024-07-13 08:20:56.107052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.452 qpair failed and we were unable to recover it. 00:34:04.452 [2024-07-13 08:20:56.107232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.452 [2024-07-13 08:20:56.107276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.452 qpair failed and we were unable to recover it. 00:34:04.452 [2024-07-13 08:20:56.107422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.452 [2024-07-13 08:20:56.107447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.452 qpair failed and we were unable to recover it. 
00:34:04.452 [2024-07-13 08:20:56.107624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.452 [2024-07-13 08:20:56.107653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.452 qpair failed and we were unable to recover it. 00:34:04.452 [2024-07-13 08:20:56.107788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.452 [2024-07-13 08:20:56.107817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.452 qpair failed and we were unable to recover it. 00:34:04.452 [2024-07-13 08:20:56.107971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.452 [2024-07-13 08:20:56.107998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.452 qpair failed and we were unable to recover it. 00:34:04.452 [2024-07-13 08:20:56.108174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.452 [2024-07-13 08:20:56.108200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.452 qpair failed and we were unable to recover it. 00:34:04.452 [2024-07-13 08:20:56.108351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.452 [2024-07-13 08:20:56.108377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.452 qpair failed and we were unable to recover it. 00:34:04.452 [2024-07-13 08:20:56.108534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.452 [2024-07-13 08:20:56.108560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.452 qpair failed and we were unable to recover it. 00:34:04.452 [2024-07-13 08:20:56.108711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.452 [2024-07-13 08:20:56.108738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.452 qpair failed and we were unable to recover it. 00:34:04.452 [2024-07-13 08:20:56.108914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.452 [2024-07-13 08:20:56.108940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.452 qpair failed and we were unable to recover it. 00:34:04.452 [2024-07-13 08:20:56.109096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.452 [2024-07-13 08:20:56.109123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.452 qpair failed and we were unable to recover it. 00:34:04.452 [2024-07-13 08:20:56.109245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.452 [2024-07-13 08:20:56.109272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.452 qpair failed and we were unable to recover it. 
00:34:04.452 [2024-07-13 08:20:56.109424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.452 [2024-07-13 08:20:56.109450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.452 qpair failed and we were unable to recover it. 00:34:04.452 [2024-07-13 08:20:56.109629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.452 [2024-07-13 08:20:56.109655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.452 qpair failed and we were unable to recover it. 00:34:04.452 [2024-07-13 08:20:56.109805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.452 [2024-07-13 08:20:56.109830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.452 qpair failed and we were unable to recover it. 00:34:04.452 [2024-07-13 08:20:56.109985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.452 [2024-07-13 08:20:56.110011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.452 qpair failed and we were unable to recover it. 00:34:04.452 [2024-07-13 08:20:56.110158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.452 [2024-07-13 08:20:56.110184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.452 qpair failed and we were unable to recover it. 00:34:04.452 [2024-07-13 08:20:56.110360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.452 [2024-07-13 08:20:56.110386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.452 qpair failed and we were unable to recover it. 00:34:04.452 [2024-07-13 08:20:56.110530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.452 [2024-07-13 08:20:56.110556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.452 qpair failed and we were unable to recover it. 00:34:04.452 [2024-07-13 08:20:56.110711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.453 [2024-07-13 08:20:56.110738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.453 qpair failed and we were unable to recover it. 00:34:04.453 [2024-07-13 08:20:56.110907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.453 [2024-07-13 08:20:56.110934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.453 qpair failed and we were unable to recover it. 00:34:04.453 [2024-07-13 08:20:56.111064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.453 [2024-07-13 08:20:56.111091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.453 qpair failed and we were unable to recover it. 
00:34:04.453 [2024-07-13 08:20:56.111274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.453 [2024-07-13 08:20:56.111300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.453 qpair failed and we were unable to recover it. 00:34:04.453 [2024-07-13 08:20:56.111448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.453 [2024-07-13 08:20:56.111473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.453 qpair failed and we were unable to recover it. 00:34:04.453 [2024-07-13 08:20:56.111625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.453 [2024-07-13 08:20:56.111651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.453 qpair failed and we were unable to recover it. 00:34:04.453 [2024-07-13 08:20:56.111829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.453 [2024-07-13 08:20:56.111855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.453 qpair failed and we were unable to recover it. 00:34:04.453 [2024-07-13 08:20:56.111984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.453 [2024-07-13 08:20:56.112011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.453 qpair failed and we were unable to recover it. 00:34:04.453 [2024-07-13 08:20:56.112139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.453 [2024-07-13 08:20:56.112165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.453 qpair failed and we were unable to recover it. 00:34:04.453 [2024-07-13 08:20:56.112304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.453 [2024-07-13 08:20:56.112334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.453 qpair failed and we were unable to recover it. 00:34:04.453 [2024-07-13 08:20:56.112530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.453 [2024-07-13 08:20:56.112559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.453 qpair failed and we were unable to recover it. 00:34:04.453 [2024-07-13 08:20:56.112715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.453 [2024-07-13 08:20:56.112741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.453 qpair failed and we were unable to recover it. 00:34:04.453 [2024-07-13 08:20:56.112873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.453 [2024-07-13 08:20:56.112901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.453 qpair failed and we were unable to recover it. 
00:34:04.453 [2024-07-13 08:20:56.113056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.453 [2024-07-13 08:20:56.113082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.453 qpair failed and we were unable to recover it. 00:34:04.453 [2024-07-13 08:20:56.113211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.453 [2024-07-13 08:20:56.113237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.453 qpair failed and we were unable to recover it. 00:34:04.453 [2024-07-13 08:20:56.113387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.453 [2024-07-13 08:20:56.113413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.453 qpair failed and we were unable to recover it. 00:34:04.453 [2024-07-13 08:20:56.113529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.453 [2024-07-13 08:20:56.113555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.453 qpair failed and we were unable to recover it. 00:34:04.453 [2024-07-13 08:20:56.113732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.453 [2024-07-13 08:20:56.113758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.453 qpair failed and we were unable to recover it. 00:34:04.453 [2024-07-13 08:20:56.113907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.453 [2024-07-13 08:20:56.113933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.453 qpair failed and we were unable to recover it. 00:34:04.453 [2024-07-13 08:20:56.114145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.453 [2024-07-13 08:20:56.114173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.453 qpair failed and we were unable to recover it. 00:34:04.453 [2024-07-13 08:20:56.114344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.453 [2024-07-13 08:20:56.114373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.453 qpair failed and we were unable to recover it. 00:34:04.453 [2024-07-13 08:20:56.114503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.453 [2024-07-13 08:20:56.114529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.453 qpair failed and we were unable to recover it. 00:34:04.453 [2024-07-13 08:20:56.114649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.453 [2024-07-13 08:20:56.114674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.453 qpair failed and we were unable to recover it. 
00:34:04.453 [2024-07-13 08:20:56.114859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.453 [2024-07-13 08:20:56.114891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.453 qpair failed and we were unable to recover it. 00:34:04.453 [2024-07-13 08:20:56.115053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.453 [2024-07-13 08:20:56.115078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.453 qpair failed and we were unable to recover it. 00:34:04.453 [2024-07-13 08:20:56.115253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.453 [2024-07-13 08:20:56.115279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.453 qpair failed and we were unable to recover it. 00:34:04.453 [2024-07-13 08:20:56.115438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.453 [2024-07-13 08:20:56.115467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.453 qpair failed and we were unable to recover it. 00:34:04.453 [2024-07-13 08:20:56.115613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.453 [2024-07-13 08:20:56.115639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.453 qpair failed and we were unable to recover it. 00:34:04.453 [2024-07-13 08:20:56.115801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.453 [2024-07-13 08:20:56.115826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.453 qpair failed and we were unable to recover it. 00:34:04.453 [2024-07-13 08:20:56.115981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.453 [2024-07-13 08:20:56.116010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.453 qpair failed and we were unable to recover it. 00:34:04.453 [2024-07-13 08:20:56.116185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.453 [2024-07-13 08:20:56.116211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.453 qpair failed and we were unable to recover it. 00:34:04.453 [2024-07-13 08:20:56.116335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.453 [2024-07-13 08:20:56.116362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.453 qpair failed and we were unable to recover it. 00:34:04.453 [2024-07-13 08:20:56.116580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.453 [2024-07-13 08:20:56.116608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.453 qpair failed and we were unable to recover it. 
00:34:04.453 [2024-07-13 08:20:56.116782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.453 [2024-07-13 08:20:56.116807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.453 qpair failed and we were unable to recover it. 00:34:04.453 [2024-07-13 08:20:56.116936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.453 [2024-07-13 08:20:56.116963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.453 qpair failed and we were unable to recover it. 00:34:04.453 [2024-07-13 08:20:56.117115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.453 [2024-07-13 08:20:56.117156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.453 qpair failed and we were unable to recover it. 00:34:04.453 [2024-07-13 08:20:56.117311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.453 [2024-07-13 08:20:56.117335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.453 qpair failed and we were unable to recover it. 00:34:04.453 [2024-07-13 08:20:56.117483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.453 [2024-07-13 08:20:56.117525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.453 qpair failed and we were unable to recover it. 00:34:04.453 [2024-07-13 08:20:56.117687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.453 [2024-07-13 08:20:56.117714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.453 qpair failed and we were unable to recover it. 00:34:04.453 [2024-07-13 08:20:56.117893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.453 [2024-07-13 08:20:56.117919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.453 qpair failed and we were unable to recover it. 00:34:04.453 [2024-07-13 08:20:56.118090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.453 [2024-07-13 08:20:56.118119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.453 qpair failed and we were unable to recover it. 00:34:04.454 [2024-07-13 08:20:56.118286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.454 [2024-07-13 08:20:56.118315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.454 qpair failed and we were unable to recover it. 00:34:04.454 [2024-07-13 08:20:56.118462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.454 [2024-07-13 08:20:56.118489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.454 qpair failed and we were unable to recover it. 
00:34:04.454 [2024-07-13 08:20:56.118641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.454 [2024-07-13 08:20:56.118687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.454 qpair failed and we were unable to recover it. 00:34:04.454 [2024-07-13 08:20:56.118851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.454 [2024-07-13 08:20:56.118886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.454 qpair failed and we were unable to recover it. 00:34:04.454 [2024-07-13 08:20:56.119063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.454 [2024-07-13 08:20:56.119089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.454 qpair failed and we were unable to recover it. 00:34:04.736 [2024-07-13 08:20:56.119218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.736 [2024-07-13 08:20:56.119244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.736 qpair failed and we were unable to recover it. 00:34:04.736 [2024-07-13 08:20:56.119399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.736 [2024-07-13 08:20:56.119425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.736 qpair failed and we were unable to recover it. 00:34:04.736 [2024-07-13 08:20:56.119578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.736 [2024-07-13 08:20:56.119611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.736 qpair failed and we were unable to recover it. 00:34:04.736 [2024-07-13 08:20:56.119780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.736 [2024-07-13 08:20:56.119812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.736 qpair failed and we were unable to recover it. 00:34:04.736 [2024-07-13 08:20:56.119957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.736 [2024-07-13 08:20:56.119984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.736 qpair failed and we were unable to recover it. 00:34:04.736 [2024-07-13 08:20:56.120142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.736 [2024-07-13 08:20:56.120169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.736 qpair failed and we were unable to recover it. 00:34:04.736 [2024-07-13 08:20:56.120296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.736 [2024-07-13 08:20:56.120322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.736 qpair failed and we were unable to recover it. 
00:34:04.736 [2024-07-13 08:20:56.120454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.736 [2024-07-13 08:20:56.120481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.736 qpair failed and we were unable to recover it. 00:34:04.736 [2024-07-13 08:20:56.120636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.736 [2024-07-13 08:20:56.120662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.736 qpair failed and we were unable to recover it. 00:34:04.736 [2024-07-13 08:20:56.120856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.736 [2024-07-13 08:20:56.120903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.736 qpair failed and we were unable to recover it. 00:34:04.736 [2024-07-13 08:20:56.121043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.736 [2024-07-13 08:20:56.121073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.736 qpair failed and we were unable to recover it. 00:34:04.736 [2024-07-13 08:20:56.121278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.736 [2024-07-13 08:20:56.121304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.736 qpair failed and we were unable to recover it. 00:34:04.736 [2024-07-13 08:20:56.121438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.736 [2024-07-13 08:20:56.121467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.736 qpair failed and we were unable to recover it. 00:34:04.736 [2024-07-13 08:20:56.121660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.736 [2024-07-13 08:20:56.121689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.736 qpair failed and we were unable to recover it. 00:34:04.736 [2024-07-13 08:20:56.121852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.736 [2024-07-13 08:20:56.121885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.736 qpair failed and we were unable to recover it. 00:34:04.736 [2024-07-13 08:20:56.122066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.736 [2024-07-13 08:20:56.122094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.736 qpair failed and we were unable to recover it. 00:34:04.736 [2024-07-13 08:20:56.122291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.736 [2024-07-13 08:20:56.122320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.736 qpair failed and we were unable to recover it. 
00:34:04.736 [2024-07-13 08:20:56.122506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.736 [2024-07-13 08:20:56.122532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.736 qpair failed and we were unable to recover it. 00:34:04.736 [2024-07-13 08:20:56.122646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.736 [2024-07-13 08:20:56.122672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.736 qpair failed and we were unable to recover it. 00:34:04.736 [2024-07-13 08:20:56.122822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.736 [2024-07-13 08:20:56.122849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.736 qpair failed and we were unable to recover it. 00:34:04.736 [2024-07-13 08:20:56.123036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.736 [2024-07-13 08:20:56.123062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.736 qpair failed and we were unable to recover it. 00:34:04.736 [2024-07-13 08:20:56.123200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.736 [2024-07-13 08:20:56.123229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.736 qpair failed and we were unable to recover it. 00:34:04.736 [2024-07-13 08:20:56.123371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.736 [2024-07-13 08:20:56.123400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.736 qpair failed and we were unable to recover it. 00:34:04.736 [2024-07-13 08:20:56.123579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.736 [2024-07-13 08:20:56.123604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.736 qpair failed and we were unable to recover it. 00:34:04.736 [2024-07-13 08:20:56.123745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.736 [2024-07-13 08:20:56.123771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.736 qpair failed and we were unable to recover it. 00:34:04.736 [2024-07-13 08:20:56.123953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.736 [2024-07-13 08:20:56.123982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.736 qpair failed and we were unable to recover it. 00:34:04.736 [2024-07-13 08:20:56.124135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.736 [2024-07-13 08:20:56.124160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.736 qpair failed and we were unable to recover it. 
00:34:04.736 [2024-07-13 08:20:56.124283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.736 [2024-07-13 08:20:56.124309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:04.736 qpair failed and we were unable to recover it.
00:34:04.739 [2024-07-13 08:20:56.148355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.739 [2024-07-13 08:20:56.148398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.739 qpair failed and we were unable to recover it.
00:34:04.740 [... the same three-line pattern — connect() failed, errno = 111 / sock connection error of tqpair / "qpair failed and we were unable to recover it." — repeats continuously from 08:20:56.124283 through 08:20:56.164715, alternating between tqpair=0x7f8fec000b90 and tqpair=0xacd600, always with addr=10.0.0.2, port=4420 ...]
00:34:04.740 [2024-07-13 08:20:56.164864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.740 [2024-07-13 08:20:56.164898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.740 qpair failed and we were unable to recover it. 00:34:04.740 [2024-07-13 08:20:56.165027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.740 [2024-07-13 08:20:56.165053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.740 qpair failed and we were unable to recover it. 00:34:04.740 [2024-07-13 08:20:56.165170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.740 [2024-07-13 08:20:56.165195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.740 qpair failed and we were unable to recover it. 00:34:04.740 [2024-07-13 08:20:56.165305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.740 [2024-07-13 08:20:56.165331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.740 qpair failed and we were unable to recover it. 00:34:04.740 [2024-07-13 08:20:56.165483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.740 [2024-07-13 08:20:56.165508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.740 qpair failed and we were unable to recover it. 00:34:04.740 [2024-07-13 08:20:56.165658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.740 [2024-07-13 08:20:56.165682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.740 qpair failed and we were unable to recover it. 00:34:04.740 [2024-07-13 08:20:56.165851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.740 [2024-07-13 08:20:56.165887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.740 qpair failed and we were unable to recover it. 00:34:04.740 [2024-07-13 08:20:56.166066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.740 [2024-07-13 08:20:56.166092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.740 qpair failed and we were unable to recover it. 00:34:04.740 [2024-07-13 08:20:56.166272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.740 [2024-07-13 08:20:56.166297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.740 qpair failed and we were unable to recover it. 00:34:04.740 [2024-07-13 08:20:56.166491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.740 [2024-07-13 08:20:56.166519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.740 qpair failed and we were unable to recover it. 
00:34:04.740 [2024-07-13 08:20:56.166665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.740 [2024-07-13 08:20:56.166693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.740 qpair failed and we were unable to recover it. 00:34:04.740 [2024-07-13 08:20:56.166875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.740 [2024-07-13 08:20:56.166907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.740 qpair failed and we were unable to recover it. 00:34:04.740 [2024-07-13 08:20:56.167032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.740 [2024-07-13 08:20:56.167058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.740 qpair failed and we were unable to recover it. 00:34:04.740 [2024-07-13 08:20:56.167246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.740 [2024-07-13 08:20:56.167272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.740 qpair failed and we were unable to recover it. 00:34:04.740 [2024-07-13 08:20:56.167420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.740 [2024-07-13 08:20:56.167447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.740 qpair failed and we were unable to recover it. 00:34:04.740 [2024-07-13 08:20:56.167613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.740 [2024-07-13 08:20:56.167641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.740 qpair failed and we were unable to recover it. 00:34:04.740 [2024-07-13 08:20:56.167834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.740 [2024-07-13 08:20:56.167862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.740 qpair failed and we were unable to recover it. 00:34:04.740 [2024-07-13 08:20:56.168040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.740 [2024-07-13 08:20:56.168065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.740 qpair failed and we were unable to recover it. 00:34:04.740 [2024-07-13 08:20:56.168219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.740 [2024-07-13 08:20:56.168245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.740 qpair failed and we were unable to recover it. 00:34:04.740 [2024-07-13 08:20:56.168401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.740 [2024-07-13 08:20:56.168426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.740 qpair failed and we were unable to recover it. 
00:34:04.740 [2024-07-13 08:20:56.168569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.740 [2024-07-13 08:20:56.168594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.740 qpair failed and we were unable to recover it. 00:34:04.740 [2024-07-13 08:20:56.168713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.741 [2024-07-13 08:20:56.168742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.741 qpair failed and we were unable to recover it. 00:34:04.741 [2024-07-13 08:20:56.168909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.741 [2024-07-13 08:20:56.168938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.741 qpair failed and we were unable to recover it. 00:34:04.741 [2024-07-13 08:20:56.169107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.741 [2024-07-13 08:20:56.169133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.741 qpair failed and we were unable to recover it. 00:34:04.741 [2024-07-13 08:20:56.169321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.741 [2024-07-13 08:20:56.169349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.741 qpair failed and we were unable to recover it. 00:34:04.741 [2024-07-13 08:20:56.169509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.741 [2024-07-13 08:20:56.169537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.741 qpair failed and we were unable to recover it. 00:34:04.741 [2024-07-13 08:20:56.169732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.741 [2024-07-13 08:20:56.169757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.741 qpair failed and we were unable to recover it. 00:34:04.741 [2024-07-13 08:20:56.169929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.741 [2024-07-13 08:20:56.169958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.741 qpair failed and we were unable to recover it. 00:34:04.741 [2024-07-13 08:20:56.170124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.741 [2024-07-13 08:20:56.170152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.741 qpair failed and we were unable to recover it. 00:34:04.741 [2024-07-13 08:20:56.170325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.741 [2024-07-13 08:20:56.170350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.741 qpair failed and we were unable to recover it. 
00:34:04.741 [2024-07-13 08:20:56.170520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.741 [2024-07-13 08:20:56.170548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.741 qpair failed and we were unable to recover it. 00:34:04.741 [2024-07-13 08:20:56.170743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.741 [2024-07-13 08:20:56.170771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.741 qpair failed and we were unable to recover it. 00:34:04.741 [2024-07-13 08:20:56.170923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.741 [2024-07-13 08:20:56.170950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.741 qpair failed and we were unable to recover it. 00:34:04.741 [2024-07-13 08:20:56.171138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.741 [2024-07-13 08:20:56.171163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.741 qpair failed and we were unable to recover it. 00:34:04.741 [2024-07-13 08:20:56.171337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.741 [2024-07-13 08:20:56.171365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.741 qpair failed and we were unable to recover it. 00:34:04.741 [2024-07-13 08:20:56.171541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.741 [2024-07-13 08:20:56.171566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.741 qpair failed and we were unable to recover it. 00:34:04.741 [2024-07-13 08:20:56.171725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.741 [2024-07-13 08:20:56.171753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.741 qpair failed and we were unable to recover it. 00:34:04.741 [2024-07-13 08:20:56.171929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.741 [2024-07-13 08:20:56.171969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.741 qpair failed and we were unable to recover it. 00:34:04.741 [2024-07-13 08:20:56.172121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.741 [2024-07-13 08:20:56.172147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.741 qpair failed and we were unable to recover it. 00:34:04.741 [2024-07-13 08:20:56.172337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.741 [2024-07-13 08:20:56.172365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.741 qpair failed and we were unable to recover it. 
00:34:04.741 [2024-07-13 08:20:56.172534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.741 [2024-07-13 08:20:56.172563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.741 qpair failed and we were unable to recover it. 00:34:04.741 [2024-07-13 08:20:56.172733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.741 [2024-07-13 08:20:56.172759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.741 qpair failed and we were unable to recover it. 00:34:04.741 [2024-07-13 08:20:56.172883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.741 [2024-07-13 08:20:56.172926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.741 qpair failed and we were unable to recover it. 00:34:04.741 [2024-07-13 08:20:56.173101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.741 [2024-07-13 08:20:56.173130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.741 qpair failed and we were unable to recover it. 00:34:04.741 [2024-07-13 08:20:56.173329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.741 [2024-07-13 08:20:56.173355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.741 qpair failed and we were unable to recover it. 00:34:04.741 [2024-07-13 08:20:56.173545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.741 [2024-07-13 08:20:56.173573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.741 qpair failed and we were unable to recover it. 00:34:04.741 [2024-07-13 08:20:56.173748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.741 [2024-07-13 08:20:56.173776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.741 qpair failed and we were unable to recover it. 00:34:04.741 [2024-07-13 08:20:56.173959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.741 [2024-07-13 08:20:56.173985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.741 qpair failed and we were unable to recover it. 00:34:04.741 [2024-07-13 08:20:56.174165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.741 [2024-07-13 08:20:56.174193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.741 qpair failed and we were unable to recover it. 00:34:04.741 [2024-07-13 08:20:56.174367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.741 [2024-07-13 08:20:56.174395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.741 qpair failed and we were unable to recover it. 
00:34:04.741 [2024-07-13 08:20:56.174538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.741 [2024-07-13 08:20:56.174564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.741 qpair failed and we were unable to recover it. 00:34:04.741 [2024-07-13 08:20:56.174715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.741 [2024-07-13 08:20:56.174740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.741 qpair failed and we were unable to recover it. 00:34:04.741 [2024-07-13 08:20:56.174917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.741 [2024-07-13 08:20:56.174947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.741 qpair failed and we were unable to recover it. 00:34:04.741 [2024-07-13 08:20:56.175117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.741 [2024-07-13 08:20:56.175143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.741 qpair failed and we were unable to recover it. 00:34:04.741 [2024-07-13 08:20:56.175292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.741 [2024-07-13 08:20:56.175317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.741 qpair failed and we were unable to recover it. 00:34:04.741 [2024-07-13 08:20:56.175492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.741 [2024-07-13 08:20:56.175520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.741 qpair failed and we were unable to recover it. 00:34:04.741 [2024-07-13 08:20:56.175701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.741 [2024-07-13 08:20:56.175730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.741 qpair failed and we were unable to recover it. 00:34:04.741 [2024-07-13 08:20:56.175892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.741 [2024-07-13 08:20:56.175934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.741 qpair failed and we were unable to recover it. 00:34:04.741 [2024-07-13 08:20:56.176056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.741 [2024-07-13 08:20:56.176081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.741 qpair failed and we were unable to recover it. 00:34:04.741 [2024-07-13 08:20:56.176234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.741 [2024-07-13 08:20:56.176260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.741 qpair failed and we were unable to recover it. 
00:34:04.741 [2024-07-13 08:20:56.176466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.741 [2024-07-13 08:20:56.176495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.741 qpair failed and we were unable to recover it. 00:34:04.741 [2024-07-13 08:20:56.176667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.741 [2024-07-13 08:20:56.176692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.741 qpair failed and we were unable to recover it. 00:34:04.741 [2024-07-13 08:20:56.176847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.741 [2024-07-13 08:20:56.176883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.741 qpair failed and we were unable to recover it. 00:34:04.741 [2024-07-13 08:20:56.177066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.741 [2024-07-13 08:20:56.177092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.741 qpair failed and we were unable to recover it. 00:34:04.741 [2024-07-13 08:20:56.177251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.741 [2024-07-13 08:20:56.177276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.741 qpair failed and we were unable to recover it. 00:34:04.741 [2024-07-13 08:20:56.177448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.741 [2024-07-13 08:20:56.177473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.741 qpair failed and we were unable to recover it. 00:34:04.741 [2024-07-13 08:20:56.177621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.741 [2024-07-13 08:20:56.177646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.741 qpair failed and we were unable to recover it. 00:34:04.741 [2024-07-13 08:20:56.177798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.741 [2024-07-13 08:20:56.177824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.741 qpair failed and we were unable to recover it. 00:34:04.741 [2024-07-13 08:20:56.177973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.741 [2024-07-13 08:20:56.177999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.741 qpair failed and we were unable to recover it. 00:34:04.741 [2024-07-13 08:20:56.178165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.741 [2024-07-13 08:20:56.178193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.741 qpair failed and we were unable to recover it. 
00:34:04.741 [2024-07-13 08:20:56.178345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.741 [2024-07-13 08:20:56.178370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.741 qpair failed and we were unable to recover it. 00:34:04.741 [2024-07-13 08:20:56.178486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.741 [2024-07-13 08:20:56.178511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.741 qpair failed and we were unable to recover it. 00:34:04.741 [2024-07-13 08:20:56.178666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.741 [2024-07-13 08:20:56.178691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.741 qpair failed and we were unable to recover it. 00:34:04.741 [2024-07-13 08:20:56.178876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.741 [2024-07-13 08:20:56.178906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.741 qpair failed and we were unable to recover it. 00:34:04.741 [2024-07-13 08:20:56.179034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.741 [2024-07-13 08:20:56.179059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.741 qpair failed and we were unable to recover it. 00:34:04.741 [2024-07-13 08:20:56.179229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.741 [2024-07-13 08:20:56.179256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.741 qpair failed and we were unable to recover it. 00:34:04.741 [2024-07-13 08:20:56.179424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.741 [2024-07-13 08:20:56.179452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.741 qpair failed and we were unable to recover it. 00:34:04.741 [2024-07-13 08:20:56.179634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.741 [2024-07-13 08:20:56.179659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.741 qpair failed and we were unable to recover it. 00:34:04.741 [2024-07-13 08:20:56.179822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.741 [2024-07-13 08:20:56.179850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.741 qpair failed and we were unable to recover it. 00:34:04.741 [2024-07-13 08:20:56.180017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.741 [2024-07-13 08:20:56.180043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.741 qpair failed and we were unable to recover it. 
00:34:04.741 [2024-07-13 08:20:56.180189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.741 [2024-07-13 08:20:56.180214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.741 qpair failed and we were unable to recover it. 00:34:04.741 [2024-07-13 08:20:56.180402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.741 [2024-07-13 08:20:56.180430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.741 qpair failed and we were unable to recover it. 00:34:04.741 [2024-07-13 08:20:56.180626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.742 [2024-07-13 08:20:56.180651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.742 qpair failed and we were unable to recover it. 00:34:04.742 [2024-07-13 08:20:56.180824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.742 [2024-07-13 08:20:56.180849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.742 qpair failed and we were unable to recover it. 00:34:04.742 [2024-07-13 08:20:56.181027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.742 [2024-07-13 08:20:56.181055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.742 qpair failed and we were unable to recover it. 00:34:04.742 [2024-07-13 08:20:56.181186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.742 [2024-07-13 08:20:56.181214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.742 qpair failed and we were unable to recover it. 00:34:04.742 [2024-07-13 08:20:56.181411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.742 [2024-07-13 08:20:56.181436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.742 qpair failed and we were unable to recover it. 00:34:04.742 [2024-07-13 08:20:56.181586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.742 [2024-07-13 08:20:56.181611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.742 qpair failed and we were unable to recover it. 00:34:04.742 [2024-07-13 08:20:56.181783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.742 [2024-07-13 08:20:56.181811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.742 qpair failed and we were unable to recover it. 00:34:04.742 [2024-07-13 08:20:56.181985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.742 [2024-07-13 08:20:56.182014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.742 qpair failed and we were unable to recover it. 
00:34:04.742 [2024-07-13 08:20:56.182140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.742 [2024-07-13 08:20:56.182166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.742 qpair failed and we were unable to recover it. 00:34:04.742 [2024-07-13 08:20:56.182339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.742 [2024-07-13 08:20:56.182379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.742 qpair failed and we were unable to recover it. 00:34:04.742 [2024-07-13 08:20:56.182545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.742 [2024-07-13 08:20:56.182570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.742 qpair failed and we were unable to recover it. 00:34:04.742 [2024-07-13 08:20:56.182739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.742 [2024-07-13 08:20:56.182767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.742 qpair failed and we were unable to recover it. 00:34:04.742 [2024-07-13 08:20:56.182925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.742 [2024-07-13 08:20:56.182955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.742 qpair failed and we were unable to recover it. 00:34:04.742 [2024-07-13 08:20:56.183106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.742 [2024-07-13 08:20:56.183131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.742 qpair failed and we were unable to recover it. 00:34:04.742 [2024-07-13 08:20:56.183274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.742 [2024-07-13 08:20:56.183319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.742 qpair failed and we were unable to recover it. 00:34:04.742 [2024-07-13 08:20:56.183481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.742 [2024-07-13 08:20:56.183509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.742 qpair failed and we were unable to recover it. 00:34:04.742 [2024-07-13 08:20:56.183684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.742 [2024-07-13 08:20:56.183709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.742 qpair failed and we were unable to recover it. 00:34:04.742 [2024-07-13 08:20:56.183908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.742 [2024-07-13 08:20:56.183936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.742 qpair failed and we were unable to recover it. 
00:34:04.742 [2024-07-13 08:20:56.184093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.742 [2024-07-13 08:20:56.184121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.742 qpair failed and we were unable to recover it. 00:34:04.742 [2024-07-13 08:20:56.184256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.742 [2024-07-13 08:20:56.184281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.742 qpair failed and we were unable to recover it. 00:34:04.742 [2024-07-13 08:20:56.184434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.742 [2024-07-13 08:20:56.184459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.742 qpair failed and we were unable to recover it. 00:34:04.742 [2024-07-13 08:20:56.184616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.742 [2024-07-13 08:20:56.184641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.742 qpair failed and we were unable to recover it. 00:34:04.742 [2024-07-13 08:20:56.184804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.742 [2024-07-13 08:20:56.184829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.742 qpair failed and we were unable to recover it. 00:34:04.742 [2024-07-13 08:20:56.184981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.742 [2024-07-13 08:20:56.185006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.742 qpair failed and we were unable to recover it. 00:34:04.742 [2024-07-13 08:20:56.185185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.742 [2024-07-13 08:20:56.185226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.742 qpair failed and we were unable to recover it. 00:34:04.742 [2024-07-13 08:20:56.185401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.742 [2024-07-13 08:20:56.185426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.742 qpair failed and we were unable to recover it. 00:34:04.742 [2024-07-13 08:20:56.185577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.742 [2024-07-13 08:20:56.185602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.742 qpair failed and we were unable to recover it. 00:34:04.742 [2024-07-13 08:20:56.185754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.742 [2024-07-13 08:20:56.185797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.742 qpair failed and we were unable to recover it. 
00:34:04.742 [2024-07-13 08:20:56.185939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.742 [2024-07-13 08:20:56.185965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.742 qpair failed and we were unable to recover it. 00:34:04.742 [2024-07-13 08:20:56.186115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.742 [2024-07-13 08:20:56.186160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.742 qpair failed and we were unable to recover it. 00:34:04.742 [2024-07-13 08:20:56.186353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.742 [2024-07-13 08:20:56.186381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.742 qpair failed and we were unable to recover it. 00:34:04.742 [2024-07-13 08:20:56.186550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.742 [2024-07-13 08:20:56.186576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.742 qpair failed and we were unable to recover it. 00:34:04.742 [2024-07-13 08:20:56.186730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.742 [2024-07-13 08:20:56.186774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.742 qpair failed and we were unable to recover it. 00:34:04.742 [2024-07-13 08:20:56.186954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.742 [2024-07-13 08:20:56.186981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.742 qpair failed and we were unable to recover it. 00:34:04.742 [2024-07-13 08:20:56.187162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.742 [2024-07-13 08:20:56.187187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.742 qpair failed and we were unable to recover it. 00:34:04.742 [2024-07-13 08:20:56.187360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.742 [2024-07-13 08:20:56.187388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.742 qpair failed and we were unable to recover it. 00:34:04.742 [2024-07-13 08:20:56.187590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.742 [2024-07-13 08:20:56.187620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.742 qpair failed and we were unable to recover it. 00:34:04.742 [2024-07-13 08:20:56.187778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.742 [2024-07-13 08:20:56.187803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.742 qpair failed and we were unable to recover it. 
00:34:04.742 [2024-07-13 08:20:56.188004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.742 [2024-07-13 08:20:56.188033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.742 qpair failed and we were unable to recover it. 00:34:04.742 [2024-07-13 08:20:56.188230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.742 [2024-07-13 08:20:56.188256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.742 qpair failed and we were unable to recover it. 00:34:04.742 [2024-07-13 08:20:56.188371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.742 [2024-07-13 08:20:56.188398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.742 qpair failed and we were unable to recover it. 00:34:04.742 [2024-07-13 08:20:56.188549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.742 [2024-07-13 08:20:56.188596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.742 qpair failed and we were unable to recover it. 00:34:04.742 [2024-07-13 08:20:56.188815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.742 [2024-07-13 08:20:56.188841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.742 qpair failed and we were unable to recover it. 00:34:04.742 [2024-07-13 08:20:56.189022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.742 [2024-07-13 08:20:56.189049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.742 qpair failed and we were unable to recover it. 00:34:04.742 [2024-07-13 08:20:56.189211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.742 [2024-07-13 08:20:56.189239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.742 qpair failed and we were unable to recover it. 00:34:04.742 [2024-07-13 08:20:56.189438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.742 [2024-07-13 08:20:56.189475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.742 qpair failed and we were unable to recover it. 00:34:04.742 [2024-07-13 08:20:56.189615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.742 [2024-07-13 08:20:56.189641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.742 qpair failed and we were unable to recover it. 00:34:04.742 [2024-07-13 08:20:56.189826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.742 [2024-07-13 08:20:56.189854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.742 qpair failed and we were unable to recover it. 
00:34:04.742 [2024-07-13 08:20:56.190072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.742 [2024-07-13 08:20:56.190101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.742 qpair failed and we were unable to recover it. 00:34:04.742 [2024-07-13 08:20:56.190220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.742 [2024-07-13 08:20:56.190251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.742 qpair failed and we were unable to recover it. 00:34:04.742 [2024-07-13 08:20:56.190405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.742 [2024-07-13 08:20:56.190446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.742 qpair failed and we were unable to recover it. 00:34:04.742 [2024-07-13 08:20:56.190583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.742 [2024-07-13 08:20:56.190611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.742 qpair failed and we were unable to recover it. 00:34:04.742 [2024-07-13 08:20:56.190753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.742 [2024-07-13 08:20:56.190777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.742 qpair failed and we were unable to recover it. 00:34:04.742 [2024-07-13 08:20:56.190929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.743 [2024-07-13 08:20:56.190957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.743 qpair failed and we were unable to recover it. 00:34:04.743 [2024-07-13 08:20:56.191109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.743 [2024-07-13 08:20:56.191135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.743 qpair failed and we were unable to recover it. 00:34:04.743 [2024-07-13 08:20:56.191287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.743 [2024-07-13 08:20:56.191312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.743 qpair failed and we were unable to recover it. 00:34:04.743 [2024-07-13 08:20:56.191463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.743 [2024-07-13 08:20:56.191488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.743 qpair failed and we were unable to recover it. 00:34:04.743 [2024-07-13 08:20:56.191638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.743 [2024-07-13 08:20:56.191663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.743 qpair failed and we were unable to recover it. 
00:34:04.743 [2024-07-13 08:20:56.191824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.743 [2024-07-13 08:20:56.191849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.743 qpair failed and we were unable to recover it. 00:34:04.743 [2024-07-13 08:20:56.192030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.743 [2024-07-13 08:20:56.192072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.743 qpair failed and we were unable to recover it. 00:34:04.743 [2024-07-13 08:20:56.192214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.743 [2024-07-13 08:20:56.192242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.743 qpair failed and we were unable to recover it. 00:34:04.743 [2024-07-13 08:20:56.192432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.743 [2024-07-13 08:20:56.192457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.743 qpair failed and we were unable to recover it. 00:34:04.743 [2024-07-13 08:20:56.192590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.743 [2024-07-13 08:20:56.192614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.743 qpair failed and we were unable to recover it. 00:34:04.743 [2024-07-13 08:20:56.192745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.743 [2024-07-13 08:20:56.192770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.743 qpair failed and we were unable to recover it. 00:34:04.743 [2024-07-13 08:20:56.192922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.743 [2024-07-13 08:20:56.192947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.743 qpair failed and we were unable to recover it. 00:34:04.743 [2024-07-13 08:20:56.193122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.743 [2024-07-13 08:20:56.193147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.743 qpair failed and we were unable to recover it. 00:34:04.743 [2024-07-13 08:20:56.193299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.743 [2024-07-13 08:20:56.193324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.743 qpair failed and we were unable to recover it. 00:34:04.743 [2024-07-13 08:20:56.193507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.743 [2024-07-13 08:20:56.193532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.743 qpair failed and we were unable to recover it. 
00:34:04.743 [2024-07-13 08:20:56.193698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.743 [2024-07-13 08:20:56.193726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.743 qpair failed and we were unable to recover it. 00:34:04.743 [2024-07-13 08:20:56.193927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.743 [2024-07-13 08:20:56.193953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.743 qpair failed and we were unable to recover it. 00:34:04.743 [2024-07-13 08:20:56.194127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.743 [2024-07-13 08:20:56.194152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.743 qpair failed and we were unable to recover it. 00:34:04.743 [2024-07-13 08:20:56.194267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.743 [2024-07-13 08:20:56.194292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.743 qpair failed and we were unable to recover it. 00:34:04.743 [2024-07-13 08:20:56.194414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.743 [2024-07-13 08:20:56.194439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.743 qpair failed and we were unable to recover it. 00:34:04.743 [2024-07-13 08:20:56.194599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.743 [2024-07-13 08:20:56.194624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.743 qpair failed and we were unable to recover it. 00:34:04.743 [2024-07-13 08:20:56.194798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.743 [2024-07-13 08:20:56.194825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.743 qpair failed and we were unable to recover it. 00:34:04.743 [2024-07-13 08:20:56.194965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.743 [2024-07-13 08:20:56.194999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.743 qpair failed and we were unable to recover it. 00:34:04.743 [2024-07-13 08:20:56.195171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.743 [2024-07-13 08:20:56.195197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.743 qpair failed and we were unable to recover it. 00:34:04.743 [2024-07-13 08:20:56.195323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.743 [2024-07-13 08:20:56.195364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.743 qpair failed and we were unable to recover it. 
00:34:04.743 [2024-07-13 08:20:56.195565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.743 [2024-07-13 08:20:56.195593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.743 qpair failed and we were unable to recover it. 00:34:04.743 [2024-07-13 08:20:56.195759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.743 [2024-07-13 08:20:56.195784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.743 qpair failed and we were unable to recover it. 00:34:04.743 [2024-07-13 08:20:56.195930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.743 [2024-07-13 08:20:56.195956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.743 qpair failed and we were unable to recover it. 00:34:04.743 [2024-07-13 08:20:56.196162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.743 [2024-07-13 08:20:56.196190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.743 qpair failed and we were unable to recover it. 00:34:04.743 [2024-07-13 08:20:56.196362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.743 [2024-07-13 08:20:56.196389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.743 qpair failed and we were unable to recover it. 00:34:04.743 [2024-07-13 08:20:56.196512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.743 [2024-07-13 08:20:56.196554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.743 qpair failed and we were unable to recover it. 00:34:04.743 [2024-07-13 08:20:56.196746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.743 [2024-07-13 08:20:56.196774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.743 qpair failed and we were unable to recover it. 00:34:04.743 [2024-07-13 08:20:56.196971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.743 [2024-07-13 08:20:56.196997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.743 qpair failed and we were unable to recover it. 00:34:04.743 [2024-07-13 08:20:56.197157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.743 [2024-07-13 08:20:56.197185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.743 qpair failed and we were unable to recover it. 00:34:04.743 [2024-07-13 08:20:56.197372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.743 [2024-07-13 08:20:56.197400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.743 qpair failed and we were unable to recover it. 
00:34:04.743 [2024-07-13 08:20:56.197569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.743 [2024-07-13 08:20:56.197594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.743 qpair failed and we were unable to recover it. 00:34:04.743 [2024-07-13 08:20:56.197788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.743 [2024-07-13 08:20:56.197816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.743 qpair failed and we were unable to recover it. 00:34:04.743 [2024-07-13 08:20:56.197984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.743 [2024-07-13 08:20:56.198013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.743 qpair failed and we were unable to recover it. 00:34:04.743 [2024-07-13 08:20:56.198163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.743 [2024-07-13 08:20:56.198190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.743 qpair failed and we were unable to recover it. 00:34:04.743 [2024-07-13 08:20:56.198340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.743 [2024-07-13 08:20:56.198382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.743 qpair failed and we were unable to recover it. 00:34:04.743 [2024-07-13 08:20:56.198569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.743 [2024-07-13 08:20:56.198594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.743 qpair failed and we were unable to recover it. 00:34:04.743 [2024-07-13 08:20:56.198747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.743 [2024-07-13 08:20:56.198772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.743 qpair failed and we were unable to recover it. 00:34:04.743 [2024-07-13 08:20:56.198962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.743 [2024-07-13 08:20:56.198992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.743 qpair failed and we were unable to recover it. 00:34:04.743 [2024-07-13 08:20:56.199157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.743 [2024-07-13 08:20:56.199186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.743 qpair failed and we were unable to recover it. 00:34:04.743 [2024-07-13 08:20:56.199360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.743 [2024-07-13 08:20:56.199385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.743 qpair failed and we were unable to recover it. 
00:34:04.743 [2024-07-13 08:20:56.199511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.743 [2024-07-13 08:20:56.199558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.743 qpair failed and we were unable to recover it. 00:34:04.743 [2024-07-13 08:20:56.199770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.743 [2024-07-13 08:20:56.199795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.743 qpair failed and we were unable to recover it. 00:34:04.743 [2024-07-13 08:20:56.199916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.743 [2024-07-13 08:20:56.199942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.743 qpair failed and we were unable to recover it. 00:34:04.743 [2024-07-13 08:20:56.200121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.743 [2024-07-13 08:20:56.200146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.743 qpair failed and we were unable to recover it. 00:34:04.743 [2024-07-13 08:20:56.200326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.743 [2024-07-13 08:20:56.200351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.743 qpair failed and we were unable to recover it. 00:34:04.743 [2024-07-13 08:20:56.200502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.743 [2024-07-13 08:20:56.200527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.743 qpair failed and we were unable to recover it. 00:34:04.743 [2024-07-13 08:20:56.200651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.743 [2024-07-13 08:20:56.200675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.743 qpair failed and we were unable to recover it. 00:34:04.743 [2024-07-13 08:20:56.200856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.743 [2024-07-13 08:20:56.200892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.743 qpair failed and we were unable to recover it. 00:34:04.743 [2024-07-13 08:20:56.201048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.743 [2024-07-13 08:20:56.201073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.743 qpair failed and we were unable to recover it. 00:34:04.743 [2024-07-13 08:20:56.201214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.743 [2024-07-13 08:20:56.201239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.743 qpair failed and we were unable to recover it. 
00:34:04.743 [2024-07-13 08:20:56.201381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.743 [2024-07-13 08:20:56.201409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.743 qpair failed and we were unable to recover it. 00:34:04.743 [2024-07-13 08:20:56.201559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.743 [2024-07-13 08:20:56.201585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.743 qpair failed and we were unable to recover it. 00:34:04.743 [2024-07-13 08:20:56.201778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.743 [2024-07-13 08:20:56.201806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.743 qpair failed and we were unable to recover it. 00:34:04.743 [2024-07-13 08:20:56.201943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.743 [2024-07-13 08:20:56.201971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.743 qpair failed and we were unable to recover it. 00:34:04.743 [2024-07-13 08:20:56.202142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.743 [2024-07-13 08:20:56.202167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.743 qpair failed and we were unable to recover it. 00:34:04.743 [2024-07-13 08:20:56.202321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.743 [2024-07-13 08:20:56.202349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.743 qpair failed and we were unable to recover it. 00:34:04.744 [2024-07-13 08:20:56.202490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.744 [2024-07-13 08:20:56.202517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.744 qpair failed and we were unable to recover it. 00:34:04.744 [2024-07-13 08:20:56.202683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.744 [2024-07-13 08:20:56.202707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.744 qpair failed and we were unable to recover it. 00:34:04.744 [2024-07-13 08:20:56.202900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.744 [2024-07-13 08:20:56.202936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.744 qpair failed and we were unable to recover it. 00:34:04.744 [2024-07-13 08:20:56.203077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.744 [2024-07-13 08:20:56.203106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.744 qpair failed and we were unable to recover it. 
00:34:04.744 [2024-07-13 08:20:56.203249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.744 [2024-07-13 08:20:56.203274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.744 qpair failed and we were unable to recover it. 00:34:04.744 [2024-07-13 08:20:56.203403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.744 [2024-07-13 08:20:56.203428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.744 qpair failed and we were unable to recover it. 00:34:04.744 [2024-07-13 08:20:56.203579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.744 [2024-07-13 08:20:56.203604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.744 qpair failed and we were unable to recover it. 00:34:04.744 [2024-07-13 08:20:56.203784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.744 [2024-07-13 08:20:56.203809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.744 qpair failed and we were unable to recover it. 00:34:04.744 [2024-07-13 08:20:56.203952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.744 [2024-07-13 08:20:56.203981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.744 qpair failed and we were unable to recover it. 00:34:04.744 [2024-07-13 08:20:56.204162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.744 [2024-07-13 08:20:56.204188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.744 qpair failed and we were unable to recover it. 00:34:04.744 [2024-07-13 08:20:56.204334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.744 [2024-07-13 08:20:56.204359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.744 qpair failed and we were unable to recover it. 00:34:04.744 [2024-07-13 08:20:56.204529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.744 [2024-07-13 08:20:56.204557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.744 qpair failed and we were unable to recover it. 00:34:04.744 [2024-07-13 08:20:56.204720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.744 [2024-07-13 08:20:56.204748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.744 qpair failed and we were unable to recover it. 00:34:04.744 [2024-07-13 08:20:56.204896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.744 [2024-07-13 08:20:56.204921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.744 qpair failed and we were unable to recover it. 
00:34:04.744 [2024-07-13 08:20:56.205051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.744 [2024-07-13 08:20:56.205075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.744 qpair failed and we were unable to recover it. 00:34:04.744 [2024-07-13 08:20:56.205230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.744 [2024-07-13 08:20:56.205271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.744 qpair failed and we were unable to recover it. 00:34:04.744 [2024-07-13 08:20:56.205417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.744 [2024-07-13 08:20:56.205442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.744 qpair failed and we were unable to recover it. 00:34:04.744 [2024-07-13 08:20:56.205561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.744 [2024-07-13 08:20:56.205588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.744 qpair failed and we were unable to recover it. 00:34:04.744 [2024-07-13 08:20:56.205764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.744 [2024-07-13 08:20:56.205793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.744 qpair failed and we were unable to recover it. 00:34:04.744 [2024-07-13 08:20:56.205976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.744 [2024-07-13 08:20:56.206002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.744 qpair failed and we were unable to recover it. 00:34:04.744 [2024-07-13 08:20:56.206129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.744 [2024-07-13 08:20:56.206155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.744 qpair failed and we were unable to recover it. 00:34:04.744 [2024-07-13 08:20:56.206363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.744 [2024-07-13 08:20:56.206391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.744 qpair failed and we were unable to recover it. 00:34:04.744 [2024-07-13 08:20:56.206592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.744 [2024-07-13 08:20:56.206617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.744 qpair failed and we were unable to recover it. 00:34:04.744 [2024-07-13 08:20:56.206755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.744 [2024-07-13 08:20:56.206782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.744 qpair failed and we were unable to recover it. 
00:34:04.744 [2024-07-13 08:20:56.206918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.744 [2024-07-13 08:20:56.206948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.744 qpair failed and we were unable to recover it. 00:34:04.744 [2024-07-13 08:20:56.207123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.744 [2024-07-13 08:20:56.207149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.744 qpair failed and we were unable to recover it. 00:34:04.744 [2024-07-13 08:20:56.207275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.744 [2024-07-13 08:20:56.207300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.744 qpair failed and we were unable to recover it. 00:34:04.744 [2024-07-13 08:20:56.207453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.744 [2024-07-13 08:20:56.207478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.744 qpair failed and we were unable to recover it. 00:34:04.744 [2024-07-13 08:20:56.207641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.744 [2024-07-13 08:20:56.207665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.744 qpair failed and we were unable to recover it. 00:34:04.744 [2024-07-13 08:20:56.207813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.744 [2024-07-13 08:20:56.207859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.744 qpair failed and we were unable to recover it. 00:34:04.744 [2024-07-13 08:20:56.208040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.744 [2024-07-13 08:20:56.208068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.744 qpair failed and we were unable to recover it. 00:34:04.744 [2024-07-13 08:20:56.208209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.744 [2024-07-13 08:20:56.208234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.744 qpair failed and we were unable to recover it. 00:34:04.744 [2024-07-13 08:20:56.208361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.744 [2024-07-13 08:20:56.208386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.744 qpair failed and we were unable to recover it. 00:34:04.744 [2024-07-13 08:20:56.208511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.744 [2024-07-13 08:20:56.208536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.744 qpair failed and we were unable to recover it. 
00:34:04.744 [2024-07-13 08:20:56.208710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.744 [2024-07-13 08:20:56.208736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.744 qpair failed and we were unable to recover it. 00:34:04.744 [2024-07-13 08:20:56.208856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.744 [2024-07-13 08:20:56.208889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.744 qpair failed and we were unable to recover it. 00:34:04.744 [2024-07-13 08:20:56.209080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.744 [2024-07-13 08:20:56.209108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.744 qpair failed and we were unable to recover it. 00:34:04.744 [2024-07-13 08:20:56.209306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.744 [2024-07-13 08:20:56.209331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.744 qpair failed and we were unable to recover it. 00:34:04.744 [2024-07-13 08:20:56.209460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.744 [2024-07-13 08:20:56.209485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.744 qpair failed and we were unable to recover it. 00:34:04.744 [2024-07-13 08:20:56.209628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.744 [2024-07-13 08:20:56.209653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.744 qpair failed and we were unable to recover it. 00:34:04.744 [2024-07-13 08:20:56.209827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.744 [2024-07-13 08:20:56.209853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.744 qpair failed and we were unable to recover it. 00:34:04.744 [2024-07-13 08:20:56.210051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.744 [2024-07-13 08:20:56.210079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.744 qpair failed and we were unable to recover it. 00:34:04.744 [2024-07-13 08:20:56.210211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.744 [2024-07-13 08:20:56.210239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.744 qpair failed and we were unable to recover it. 00:34:04.744 [2024-07-13 08:20:56.210407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.744 [2024-07-13 08:20:56.210432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.744 qpair failed and we were unable to recover it. 
00:34:04.744 [2024-07-13 08:20:56.210593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.744 [2024-07-13 08:20:56.210621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.744 qpair failed and we were unable to recover it. 00:34:04.744 [2024-07-13 08:20:56.210803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.744 [2024-07-13 08:20:56.210829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.744 qpair failed and we were unable to recover it. 00:34:04.744 [2024-07-13 08:20:56.211018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.744 [2024-07-13 08:20:56.211046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.744 qpair failed and we were unable to recover it. 00:34:04.744 [2024-07-13 08:20:56.211219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.744 [2024-07-13 08:20:56.211247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.744 qpair failed and we were unable to recover it. 00:34:04.744 [2024-07-13 08:20:56.211383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.744 [2024-07-13 08:20:56.211411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.744 qpair failed and we were unable to recover it. 00:34:04.744 [2024-07-13 08:20:56.211595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.744 [2024-07-13 08:20:56.211620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.744 qpair failed and we were unable to recover it. 00:34:04.744 [2024-07-13 08:20:56.211760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.744 [2024-07-13 08:20:56.211788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.744 qpair failed and we were unable to recover it. 00:34:04.744 [2024-07-13 08:20:56.211951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.744 [2024-07-13 08:20:56.211980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.744 qpair failed and we were unable to recover it. 00:34:04.744 [2024-07-13 08:20:56.212149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.744 [2024-07-13 08:20:56.212175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.744 qpair failed and we were unable to recover it. 00:34:04.744 [2024-07-13 08:20:56.212297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.744 [2024-07-13 08:20:56.212336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.744 qpair failed and we were unable to recover it. 
00:34:04.744 [2024-07-13 08:20:56.212499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.744 [2024-07-13 08:20:56.212527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.744 qpair failed and we were unable to recover it. 00:34:04.744 [2024-07-13 08:20:56.212724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.744 [2024-07-13 08:20:56.212750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.744 qpair failed and we were unable to recover it. 00:34:04.744 [2024-07-13 08:20:56.212919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.744 [2024-07-13 08:20:56.212947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.744 qpair failed and we were unable to recover it. 00:34:04.744 [2024-07-13 08:20:56.213114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.744 [2024-07-13 08:20:56.213143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.744 qpair failed and we were unable to recover it. 00:34:04.744 [2024-07-13 08:20:56.213341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.744 [2024-07-13 08:20:56.213366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.744 qpair failed and we were unable to recover it. 00:34:04.744 [2024-07-13 08:20:56.213518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.744 [2024-07-13 08:20:56.213543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.744 qpair failed and we were unable to recover it. 00:34:04.744 [2024-07-13 08:20:56.213726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.744 [2024-07-13 08:20:56.213751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.744 qpair failed and we were unable to recover it. 00:34:04.744 [2024-07-13 08:20:56.213874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.744 [2024-07-13 08:20:56.213899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.744 qpair failed and we were unable to recover it. 00:34:04.744 [2024-07-13 08:20:56.214069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.745 [2024-07-13 08:20:56.214112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.745 qpair failed and we were unable to recover it. 00:34:04.745 [2024-07-13 08:20:56.214279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.745 [2024-07-13 08:20:56.214307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.745 qpair failed and we were unable to recover it. 
00:34:04.745 [2024-07-13 08:20:56.214486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.745 [2024-07-13 08:20:56.214511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.745 qpair failed and we were unable to recover it. 00:34:04.745 [2024-07-13 08:20:56.214662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.745 [2024-07-13 08:20:56.214704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.745 qpair failed and we were unable to recover it. 00:34:04.745 [2024-07-13 08:20:56.214898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.745 [2024-07-13 08:20:56.214928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.745 qpair failed and we were unable to recover it. 00:34:04.745 [2024-07-13 08:20:56.215098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.745 [2024-07-13 08:20:56.215123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.745 qpair failed and we were unable to recover it. 00:34:04.745 [2024-07-13 08:20:56.215284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.745 [2024-07-13 08:20:56.215312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.745 qpair failed and we were unable to recover it. 00:34:04.745 [2024-07-13 08:20:56.215530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.745 [2024-07-13 08:20:56.215583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.745 qpair failed and we were unable to recover it. 00:34:04.745 [2024-07-13 08:20:56.215771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.745 [2024-07-13 08:20:56.215803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.745 qpair failed and we were unable to recover it. 00:34:04.745 [2024-07-13 08:20:56.215971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.745 [2024-07-13 08:20:56.215997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.745 qpair failed and we were unable to recover it. 00:34:04.745 [2024-07-13 08:20:56.216242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.745 [2024-07-13 08:20:56.216291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.745 qpair failed and we were unable to recover it. 00:34:04.745 [2024-07-13 08:20:56.216490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.745 [2024-07-13 08:20:56.216515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.745 qpair failed and we were unable to recover it. 
00:34:04.745 [2024-07-13 08:20:56.216660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.745 [2024-07-13 08:20:56.216701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.745 qpair failed and we were unable to recover it. 00:34:04.745 [2024-07-13 08:20:56.216870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.745 [2024-07-13 08:20:56.216899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.745 qpair failed and we were unable to recover it. 00:34:04.745 [2024-07-13 08:20:56.217059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.745 [2024-07-13 08:20:56.217084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.745 qpair failed and we were unable to recover it. 00:34:04.745 [2024-07-13 08:20:56.217249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.745 [2024-07-13 08:20:56.217277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.745 qpair failed and we were unable to recover it. 00:34:04.745 [2024-07-13 08:20:56.217410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.745 [2024-07-13 08:20:56.217439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.745 qpair failed and we were unable to recover it. 00:34:04.745 [2024-07-13 08:20:56.217600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.745 [2024-07-13 08:20:56.217625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.745 qpair failed and we were unable to recover it. 00:34:04.745 [2024-07-13 08:20:56.217777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.745 [2024-07-13 08:20:56.217802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.745 qpair failed and we were unable to recover it. 00:34:04.745 [2024-07-13 08:20:56.217930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.745 [2024-07-13 08:20:56.217955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.745 qpair failed and we were unable to recover it. 00:34:04.745 [2024-07-13 08:20:56.218110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.745 [2024-07-13 08:20:56.218135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.745 qpair failed and we were unable to recover it. 00:34:04.745 [2024-07-13 08:20:56.218259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.745 [2024-07-13 08:20:56.218284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.745 qpair failed and we were unable to recover it. 
00:34:04.745 [2024-07-13 08:20:56.218443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.745 [2024-07-13 08:20:56.218468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.745 qpair failed and we were unable to recover it. 00:34:04.745 [2024-07-13 08:20:56.218590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.745 [2024-07-13 08:20:56.218615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.745 qpair failed and we were unable to recover it. 00:34:04.745 [2024-07-13 08:20:56.218728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.745 [2024-07-13 08:20:56.218752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.745 qpair failed and we were unable to recover it. 00:34:04.745 [2024-07-13 08:20:56.218920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.745 [2024-07-13 08:20:56.218959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:04.745 qpair failed and we were unable to recover it. 00:34:04.745 [2024-07-13 08:20:56.219090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.745 [2024-07-13 08:20:56.219118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:04.745 qpair failed and we were unable to recover it. 00:34:04.745 [2024-07-13 08:20:56.219268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.745 [2024-07-13 08:20:56.219296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:04.745 qpair failed and we were unable to recover it. 00:34:04.745 [2024-07-13 08:20:56.219472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.745 [2024-07-13 08:20:56.219498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:04.745 qpair failed and we were unable to recover it. 00:34:04.745 [2024-07-13 08:20:56.219660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.745 [2024-07-13 08:20:56.219687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:04.745 qpair failed and we were unable to recover it. 00:34:04.745 [2024-07-13 08:20:56.219840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.745 [2024-07-13 08:20:56.219872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:04.745 qpair failed and we were unable to recover it. 00:34:04.745 [2024-07-13 08:20:56.220051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.745 [2024-07-13 08:20:56.220078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.745 qpair failed and we were unable to recover it. 
00:34:04.745 [2024-07-13 08:20:56.220203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.745 [2024-07-13 08:20:56.220229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.745 qpair failed and we were unable to recover it. 00:34:04.745 [2024-07-13 08:20:56.220376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.745 [2024-07-13 08:20:56.220401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.745 qpair failed and we were unable to recover it. 00:34:04.745 [2024-07-13 08:20:56.220531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.745 [2024-07-13 08:20:56.220556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.745 qpair failed and we were unable to recover it. 00:34:04.745 [2024-07-13 08:20:56.220713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.745 [2024-07-13 08:20:56.220742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.745 qpair failed and we were unable to recover it. 00:34:04.745 [2024-07-13 08:20:56.220884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.745 [2024-07-13 08:20:56.220910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.745 qpair failed and we were unable to recover it. 00:34:04.745 [2024-07-13 08:20:56.221040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.745 [2024-07-13 08:20:56.221065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.745 qpair failed and we were unable to recover it. 00:34:04.745 [2024-07-13 08:20:56.221243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.745 [2024-07-13 08:20:56.221267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.745 qpair failed and we were unable to recover it. 00:34:04.745 [2024-07-13 08:20:56.221414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.745 [2024-07-13 08:20:56.221438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.745 qpair failed and we were unable to recover it. 00:34:04.745 [2024-07-13 08:20:56.221548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.745 [2024-07-13 08:20:56.221571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.745 qpair failed and we were unable to recover it. 00:34:04.745 [2024-07-13 08:20:56.221687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.745 [2024-07-13 08:20:56.221711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.745 qpair failed and we were unable to recover it. 
00:34:04.745 [2024-07-13 08:20:56.221859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.745 [2024-07-13 08:20:56.221889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.745 qpair failed and we were unable to recover it.
[... the same three-part failure (posix.c:1038:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: sock connection error; "qpair failed and we were unable to recover it.") repeats continuously from 08:20:56.221859 through 08:20:56.262320. Every attempt targets addr=10.0.0.2, port=4420; the failing qpair is tqpair=0xacd600 throughout, except for a short run of attempts on tqpair=0x7f8fec000b90 between 08:20:56.250094 and 08:20:56.253998, after which retries return to tqpair=0xacd600 ...]
00:34:04.749 [2024-07-13 08:20:56.262511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.749 [2024-07-13 08:20:56.262536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.749 qpair failed and we were unable to recover it. 00:34:04.749 [2024-07-13 08:20:56.262658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.749 [2024-07-13 08:20:56.262683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.749 qpair failed and we were unable to recover it. 00:34:04.749 [2024-07-13 08:20:56.262828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.749 [2024-07-13 08:20:56.262853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.749 qpair failed and we were unable to recover it. 00:34:04.749 [2024-07-13 08:20:56.263013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.749 [2024-07-13 08:20:56.263040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.749 qpair failed and we were unable to recover it. 00:34:04.749 [2024-07-13 08:20:56.263196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.749 [2024-07-13 08:20:56.263221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.749 qpair failed and we were unable to recover it. 00:34:04.749 [2024-07-13 08:20:56.263397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.749 [2024-07-13 08:20:56.263422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.749 qpair failed and we were unable to recover it. 00:34:04.749 [2024-07-13 08:20:56.263562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.749 [2024-07-13 08:20:56.263586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.749 qpair failed and we were unable to recover it. 00:34:04.749 [2024-07-13 08:20:56.263708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.749 [2024-07-13 08:20:56.263733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.749 qpair failed and we were unable to recover it. 00:34:04.749 [2024-07-13 08:20:56.263895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.749 [2024-07-13 08:20:56.263921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.749 qpair failed and we were unable to recover it. 00:34:04.749 [2024-07-13 08:20:56.264074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.749 [2024-07-13 08:20:56.264099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.749 qpair failed and we were unable to recover it. 
00:34:04.749 [2024-07-13 08:20:56.264226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.749 [2024-07-13 08:20:56.264252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.749 qpair failed and we were unable to recover it. 00:34:04.749 [2024-07-13 08:20:56.264402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.749 [2024-07-13 08:20:56.264426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.749 qpair failed and we were unable to recover it. 00:34:04.749 [2024-07-13 08:20:56.264573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.749 [2024-07-13 08:20:56.264598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.749 qpair failed and we were unable to recover it. 00:34:04.749 [2024-07-13 08:20:56.264753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.749 [2024-07-13 08:20:56.264778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.749 qpair failed and we were unable to recover it. 00:34:04.749 [2024-07-13 08:20:56.264931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.749 [2024-07-13 08:20:56.264960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.749 qpair failed and we were unable to recover it. 00:34:04.749 [2024-07-13 08:20:56.265084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.749 [2024-07-13 08:20:56.265109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.749 qpair failed and we were unable to recover it. 00:34:04.749 [2024-07-13 08:20:56.265259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.749 [2024-07-13 08:20:56.265284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.749 qpair failed and we were unable to recover it. 00:34:04.749 [2024-07-13 08:20:56.265467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.749 [2024-07-13 08:20:56.265491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.749 qpair failed and we were unable to recover it. 00:34:04.749 [2024-07-13 08:20:56.265639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.749 [2024-07-13 08:20:56.265664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.749 qpair failed and we were unable to recover it. 00:34:04.749 [2024-07-13 08:20:56.265814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.749 [2024-07-13 08:20:56.265840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.749 qpair failed and we were unable to recover it. 
00:34:04.749 [2024-07-13 08:20:56.265971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.749 [2024-07-13 08:20:56.265996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.749 qpair failed and we were unable to recover it. 00:34:04.749 [2024-07-13 08:20:56.266179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.749 [2024-07-13 08:20:56.266207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.749 qpair failed and we were unable to recover it. 00:34:04.749 [2024-07-13 08:20:56.266387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.749 [2024-07-13 08:20:56.266415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.749 qpair failed and we were unable to recover it. 00:34:04.749 [2024-07-13 08:20:56.266603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.749 [2024-07-13 08:20:56.266632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.749 qpair failed and we were unable to recover it. 00:34:04.749 [2024-07-13 08:20:56.266817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.749 [2024-07-13 08:20:56.266845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.749 qpair failed and we were unable to recover it. 00:34:04.749 [2024-07-13 08:20:56.267014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.749 [2024-07-13 08:20:56.267040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.749 qpair failed and we were unable to recover it. 00:34:04.749 [2024-07-13 08:20:56.267205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.749 [2024-07-13 08:20:56.267230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.749 qpair failed and we were unable to recover it. 00:34:04.749 [2024-07-13 08:20:56.267353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.749 [2024-07-13 08:20:56.267378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.749 qpair failed and we were unable to recover it. 00:34:04.749 [2024-07-13 08:20:56.267529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.749 [2024-07-13 08:20:56.267554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.749 qpair failed and we were unable to recover it. 00:34:04.749 [2024-07-13 08:20:56.267703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.749 [2024-07-13 08:20:56.267728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.749 qpair failed and we were unable to recover it. 
00:34:04.749 [2024-07-13 08:20:56.267849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.749 [2024-07-13 08:20:56.267883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.749 qpair failed and we were unable to recover it. 00:34:04.749 [2024-07-13 08:20:56.268011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.749 [2024-07-13 08:20:56.268036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.749 qpair failed and we were unable to recover it. 00:34:04.749 [2024-07-13 08:20:56.268213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.749 [2024-07-13 08:20:56.268238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.750 qpair failed and we were unable to recover it. 00:34:04.750 [2024-07-13 08:20:56.268362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.750 [2024-07-13 08:20:56.268387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.750 qpair failed and we were unable to recover it. 00:34:04.750 [2024-07-13 08:20:56.268538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.750 [2024-07-13 08:20:56.268563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.750 qpair failed and we were unable to recover it. 00:34:04.750 [2024-07-13 08:20:56.268683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.750 [2024-07-13 08:20:56.268708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.750 qpair failed and we were unable to recover it. 00:34:04.750 [2024-07-13 08:20:56.268863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.750 [2024-07-13 08:20:56.268896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.750 qpair failed and we were unable to recover it. 00:34:04.750 [2024-07-13 08:20:56.269083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.750 [2024-07-13 08:20:56.269109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.750 qpair failed and we were unable to recover it. 00:34:04.750 [2024-07-13 08:20:56.269263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.750 [2024-07-13 08:20:56.269288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.750 qpair failed and we were unable to recover it. 00:34:04.750 [2024-07-13 08:20:56.269429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.750 [2024-07-13 08:20:56.269456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.750 qpair failed and we were unable to recover it. 
00:34:04.750 [2024-07-13 08:20:56.269598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.750 [2024-07-13 08:20:56.269623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.750 qpair failed and we were unable to recover it. 00:34:04.750 [2024-07-13 08:20:56.269814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.750 [2024-07-13 08:20:56.269842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.750 qpair failed and we were unable to recover it. 00:34:04.750 [2024-07-13 08:20:56.270039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.750 [2024-07-13 08:20:56.270067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.750 qpair failed and we were unable to recover it. 00:34:04.750 [2024-07-13 08:20:56.270272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.750 [2024-07-13 08:20:56.270323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.750 qpair failed and we were unable to recover it. 00:34:04.750 [2024-07-13 08:20:56.270481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.750 [2024-07-13 08:20:56.270509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.750 qpair failed and we were unable to recover it. 00:34:04.750 [2024-07-13 08:20:56.270694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.750 [2024-07-13 08:20:56.270722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.750 qpair failed and we were unable to recover it. 00:34:04.750 [2024-07-13 08:20:56.270890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.750 [2024-07-13 08:20:56.270923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.750 qpair failed and we were unable to recover it. 00:34:04.750 [2024-07-13 08:20:56.271095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.750 [2024-07-13 08:20:56.271122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.750 qpair failed and we were unable to recover it. 00:34:04.750 [2024-07-13 08:20:56.271284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.750 [2024-07-13 08:20:56.271309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.750 qpair failed and we were unable to recover it. 00:34:04.750 [2024-07-13 08:20:56.271486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.750 [2024-07-13 08:20:56.271511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.750 qpair failed and we were unable to recover it. 
00:34:04.750 [2024-07-13 08:20:56.271654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.750 [2024-07-13 08:20:56.271679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.750 qpair failed and we were unable to recover it. 00:34:04.750 [2024-07-13 08:20:56.271807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.750 [2024-07-13 08:20:56.271831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.750 qpair failed and we were unable to recover it. 00:34:04.750 [2024-07-13 08:20:56.272003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.750 [2024-07-13 08:20:56.272030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.750 qpair failed and we were unable to recover it. 00:34:04.750 [2024-07-13 08:20:56.272179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.750 [2024-07-13 08:20:56.272204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.750 qpair failed and we were unable to recover it. 00:34:04.750 [2024-07-13 08:20:56.272360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.750 [2024-07-13 08:20:56.272385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.750 qpair failed and we were unable to recover it. 00:34:04.750 [2024-07-13 08:20:56.272545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.750 [2024-07-13 08:20:56.272571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.750 qpair failed and we were unable to recover it. 00:34:04.750 [2024-07-13 08:20:56.272727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.750 [2024-07-13 08:20:56.272753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.750 qpair failed and we were unable to recover it. 00:34:04.750 [2024-07-13 08:20:56.272876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.750 [2024-07-13 08:20:56.272903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.750 qpair failed and we were unable to recover it. 00:34:04.750 [2024-07-13 08:20:56.273065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.750 [2024-07-13 08:20:56.273090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.750 qpair failed and we were unable to recover it. 00:34:04.750 [2024-07-13 08:20:56.273247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.750 [2024-07-13 08:20:56.273272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.750 qpair failed and we were unable to recover it. 
00:34:04.750 [2024-07-13 08:20:56.273448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.750 [2024-07-13 08:20:56.273473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.750 qpair failed and we were unable to recover it. 00:34:04.750 [2024-07-13 08:20:56.273623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.750 [2024-07-13 08:20:56.273648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.750 qpair failed and we were unable to recover it. 00:34:04.750 [2024-07-13 08:20:56.273796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.750 [2024-07-13 08:20:56.273821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.750 qpair failed and we were unable to recover it. 00:34:04.750 [2024-07-13 08:20:56.274011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.750 [2024-07-13 08:20:56.274039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.750 qpair failed and we were unable to recover it. 00:34:04.750 [2024-07-13 08:20:56.274216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.750 [2024-07-13 08:20:56.274244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.750 qpair failed and we were unable to recover it. 00:34:04.750 [2024-07-13 08:20:56.274422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.750 [2024-07-13 08:20:56.274450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.750 qpair failed and we were unable to recover it. 00:34:04.750 [2024-07-13 08:20:56.274643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.750 [2024-07-13 08:20:56.274671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.750 qpair failed and we were unable to recover it. 00:34:04.750 [2024-07-13 08:20:56.274846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.750 [2024-07-13 08:20:56.274883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.750 qpair failed and we were unable to recover it. 00:34:04.750 [2024-07-13 08:20:56.275060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.750 [2024-07-13 08:20:56.275089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.750 qpair failed and we were unable to recover it. 00:34:04.750 [2024-07-13 08:20:56.275320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.750 [2024-07-13 08:20:56.275348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.750 qpair failed and we were unable to recover it. 
00:34:04.750 [2024-07-13 08:20:56.275524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.750 [2024-07-13 08:20:56.275581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.750 qpair failed and we were unable to recover it. 00:34:04.750 [2024-07-13 08:20:56.275774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.750 [2024-07-13 08:20:56.275802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.750 qpair failed and we were unable to recover it. 00:34:04.750 [2024-07-13 08:20:56.275956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.750 [2024-07-13 08:20:56.275985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.750 qpair failed and we were unable to recover it. 00:34:04.750 [2024-07-13 08:20:56.276184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.750 [2024-07-13 08:20:56.276210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.750 qpair failed and we were unable to recover it. 00:34:04.750 [2024-07-13 08:20:56.276328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.750 [2024-07-13 08:20:56.276353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.750 qpair failed and we were unable to recover it. 00:34:04.750 [2024-07-13 08:20:56.276525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.750 [2024-07-13 08:20:56.276549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.750 qpair failed and we were unable to recover it. 00:34:04.750 [2024-07-13 08:20:56.276702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.750 [2024-07-13 08:20:56.276727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.750 qpair failed and we were unable to recover it. 00:34:04.750 [2024-07-13 08:20:56.276854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.750 [2024-07-13 08:20:56.276887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.750 qpair failed and we were unable to recover it. 00:34:04.750 [2024-07-13 08:20:56.277063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.750 [2024-07-13 08:20:56.277088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.750 qpair failed and we were unable to recover it. 00:34:04.750 [2024-07-13 08:20:56.277248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.750 [2024-07-13 08:20:56.277273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.750 qpair failed and we were unable to recover it. 
00:34:04.750 [2024-07-13 08:20:56.277386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.750 [2024-07-13 08:20:56.277412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.750 qpair failed and we were unable to recover it. 00:34:04.750 [2024-07-13 08:20:56.277590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.750 [2024-07-13 08:20:56.277615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.750 qpair failed and we were unable to recover it. 00:34:04.750 [2024-07-13 08:20:56.277791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.750 [2024-07-13 08:20:56.277820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.750 qpair failed and we were unable to recover it. 00:34:04.750 [2024-07-13 08:20:56.277997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.750 [2024-07-13 08:20:56.278022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.750 qpair failed and we were unable to recover it. 00:34:04.750 [2024-07-13 08:20:56.278171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.750 [2024-07-13 08:20:56.278196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.750 qpair failed and we were unable to recover it. 00:34:04.750 [2024-07-13 08:20:56.278368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.750 [2024-07-13 08:20:56.278393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.750 qpair failed and we were unable to recover it. 00:34:04.750 [2024-07-13 08:20:56.278569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.750 [2024-07-13 08:20:56.278593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.750 qpair failed and we were unable to recover it. 00:34:04.750 [2024-07-13 08:20:56.278754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.750 [2024-07-13 08:20:56.278781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.750 qpair failed and we were unable to recover it. 00:34:04.750 [2024-07-13 08:20:56.278915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.750 [2024-07-13 08:20:56.278942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.750 qpair failed and we were unable to recover it. 00:34:04.750 [2024-07-13 08:20:56.279129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.750 [2024-07-13 08:20:56.279153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.750 qpair failed and we were unable to recover it. 
00:34:04.750 [2024-07-13 08:20:56.279284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.750 [2024-07-13 08:20:56.279309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.750 qpair failed and we were unable to recover it. 00:34:04.750 [2024-07-13 08:20:56.279463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.750 [2024-07-13 08:20:56.279488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.750 qpair failed and we were unable to recover it. 00:34:04.750 [2024-07-13 08:20:56.279641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.751 [2024-07-13 08:20:56.279666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.751 qpair failed and we were unable to recover it. 00:34:04.751 [2024-07-13 08:20:56.279810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.751 [2024-07-13 08:20:56.279835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.751 qpair failed and we were unable to recover it. 00:34:04.751 [2024-07-13 08:20:56.280028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.751 [2024-07-13 08:20:56.280054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.751 qpair failed and we were unable to recover it. 00:34:04.751 [2024-07-13 08:20:56.280219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.751 [2024-07-13 08:20:56.280244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.751 qpair failed and we were unable to recover it. 00:34:04.751 [2024-07-13 08:20:56.280399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.751 [2024-07-13 08:20:56.280423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.751 qpair failed and we were unable to recover it. 00:34:04.751 [2024-07-13 08:20:56.280575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.751 [2024-07-13 08:20:56.280600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.751 qpair failed and we were unable to recover it. 00:34:04.751 [2024-07-13 08:20:56.280748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.751 [2024-07-13 08:20:56.280773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.751 qpair failed and we were unable to recover it. 00:34:04.751 [2024-07-13 08:20:56.280921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.751 [2024-07-13 08:20:56.280947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.751 qpair failed and we were unable to recover it. 
00:34:04.751 [2024-07-13 08:20:56.281075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.751 [2024-07-13 08:20:56.281100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.751 qpair failed and we were unable to recover it. 00:34:04.751 [2024-07-13 08:20:56.281257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.751 [2024-07-13 08:20:56.281282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.751 qpair failed and we were unable to recover it. 00:34:04.751 [2024-07-13 08:20:56.281436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.751 [2024-07-13 08:20:56.281460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.751 qpair failed and we were unable to recover it. 00:34:04.751 [2024-07-13 08:20:56.281616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.751 [2024-07-13 08:20:56.281641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.751 qpair failed and we were unable to recover it. 00:34:04.751 [2024-07-13 08:20:56.281786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.751 [2024-07-13 08:20:56.281811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.751 qpair failed and we were unable to recover it. 00:34:04.751 [2024-07-13 08:20:56.281966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.751 [2024-07-13 08:20:56.281991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.751 qpair failed and we were unable to recover it. 00:34:04.751 [2024-07-13 08:20:56.282117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.751 [2024-07-13 08:20:56.282142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.751 qpair failed and we were unable to recover it. 00:34:04.751 [2024-07-13 08:20:56.282285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.751 [2024-07-13 08:20:56.282309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.751 qpair failed and we were unable to recover it. 00:34:04.751 [2024-07-13 08:20:56.282459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.751 [2024-07-13 08:20:56.282484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.751 qpair failed and we were unable to recover it. 00:34:04.751 [2024-07-13 08:20:56.282636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.751 [2024-07-13 08:20:56.282661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.751 qpair failed and we were unable to recover it. 
00:34:04.751 [2024-07-13 08:20:56.282842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.751 [2024-07-13 08:20:56.282875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.751 qpair failed and we were unable to recover it. 00:34:04.751 [2024-07-13 08:20:56.283007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.751 [2024-07-13 08:20:56.283033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.751 qpair failed and we were unable to recover it. 00:34:04.751 [2024-07-13 08:20:56.283216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.751 [2024-07-13 08:20:56.283241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.751 qpair failed and we were unable to recover it. 00:34:04.751 [2024-07-13 08:20:56.283394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.751 [2024-07-13 08:20:56.283419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.751 qpair failed and we were unable to recover it. 00:34:04.751 [2024-07-13 08:20:56.283570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.751 [2024-07-13 08:20:56.283595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.751 qpair failed and we were unable to recover it. 00:34:04.751 [2024-07-13 08:20:56.283773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.751 [2024-07-13 08:20:56.283799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.751 qpair failed and we were unable to recover it. 00:34:04.751 [2024-07-13 08:20:56.283980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.751 [2024-07-13 08:20:56.284006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.751 qpair failed and we were unable to recover it. 00:34:04.751 [2024-07-13 08:20:56.284178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.751 [2024-07-13 08:20:56.284207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.751 qpair failed and we were unable to recover it. 00:34:04.751 [2024-07-13 08:20:56.284444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.751 [2024-07-13 08:20:56.284496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.751 qpair failed and we were unable to recover it. 00:34:04.751 [2024-07-13 08:20:56.284685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.751 [2024-07-13 08:20:56.284713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.751 qpair failed and we were unable to recover it. 
00:34:04.751 [2024-07-13 08:20:56.284855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.751 [2024-07-13 08:20:56.284890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.751 qpair failed and we were unable to recover it. 00:34:04.751 [2024-07-13 08:20:56.285048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.751 [2024-07-13 08:20:56.285072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.751 qpair failed and we were unable to recover it. 00:34:04.751 [2024-07-13 08:20:56.285225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.751 [2024-07-13 08:20:56.285250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.751 qpair failed and we were unable to recover it. 00:34:04.751 [2024-07-13 08:20:56.285393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.751 [2024-07-13 08:20:56.285422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.751 qpair failed and we were unable to recover it. 00:34:04.751 [2024-07-13 08:20:56.285547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.751 [2024-07-13 08:20:56.285572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.751 qpair failed and we were unable to recover it. 00:34:04.751 [2024-07-13 08:20:56.285739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.751 [2024-07-13 08:20:56.285766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.751 qpair failed and we were unable to recover it. 00:34:04.751 [2024-07-13 08:20:56.285909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.751 [2024-07-13 08:20:56.285954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.751 qpair failed and we were unable to recover it. 00:34:04.751 [2024-07-13 08:20:56.286116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.751 [2024-07-13 08:20:56.286141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.751 qpair failed and we were unable to recover it. 00:34:04.751 [2024-07-13 08:20:56.286298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.751 [2024-07-13 08:20:56.286323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.751 qpair failed and we were unable to recover it. 00:34:04.751 [2024-07-13 08:20:56.286437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.751 [2024-07-13 08:20:56.286462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.751 qpair failed and we were unable to recover it. 
00:34:04.751 [2024-07-13 08:20:56.286639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:04.751 [2024-07-13 08:20:56.286664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420
00:34:04.751 qpair failed and we were unable to recover it.
00:34:04.755 [... the same three-line error (posix_sock_create connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it) repeats continuously from 08:20:56.286 through 08:20:56.324 ...]
00:34:04.755 [2024-07-13 08:20:56.324831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.755 [2024-07-13 08:20:56.324859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.755 qpair failed and we were unable to recover it. 00:34:04.755 [2024-07-13 08:20:56.325056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.755 [2024-07-13 08:20:56.325082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.755 qpair failed and we were unable to recover it. 00:34:04.755 [2024-07-13 08:20:56.325229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.755 [2024-07-13 08:20:56.325254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.755 qpair failed and we were unable to recover it. 00:34:04.755 [2024-07-13 08:20:56.325403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.755 [2024-07-13 08:20:56.325429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.755 qpair failed and we were unable to recover it. 00:34:04.755 [2024-07-13 08:20:56.325581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.755 [2024-07-13 08:20:56.325606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.755 qpair failed and we were unable to recover it. 00:34:04.755 [2024-07-13 08:20:56.325759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.755 [2024-07-13 08:20:56.325784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.755 qpair failed and we were unable to recover it. 00:34:04.755 [2024-07-13 08:20:56.325931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.755 [2024-07-13 08:20:56.325957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.755 qpair failed and we were unable to recover it. 00:34:04.755 [2024-07-13 08:20:56.326102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.755 [2024-07-13 08:20:56.326127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.755 qpair failed and we were unable to recover it. 00:34:04.755 [2024-07-13 08:20:56.326275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.755 [2024-07-13 08:20:56.326300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.755 qpair failed and we were unable to recover it. 00:34:04.755 [2024-07-13 08:20:56.326479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.755 [2024-07-13 08:20:56.326505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.755 qpair failed and we were unable to recover it. 
00:34:04.755 [2024-07-13 08:20:56.326633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.755 [2024-07-13 08:20:56.326658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.755 qpair failed and we were unable to recover it. 00:34:04.755 [2024-07-13 08:20:56.326810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.755 [2024-07-13 08:20:56.326835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.755 qpair failed and we were unable to recover it. 00:34:04.755 [2024-07-13 08:20:56.326991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.755 [2024-07-13 08:20:56.327017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.755 qpair failed and we were unable to recover it. 00:34:04.755 [2024-07-13 08:20:56.327170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.755 [2024-07-13 08:20:56.327195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.755 qpair failed and we were unable to recover it. 00:34:04.755 [2024-07-13 08:20:56.327339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.755 [2024-07-13 08:20:56.327364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.755 qpair failed and we were unable to recover it. 00:34:04.755 [2024-07-13 08:20:56.327533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.755 [2024-07-13 08:20:56.327559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.755 qpair failed and we were unable to recover it. 00:34:04.755 [2024-07-13 08:20:56.327730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.755 [2024-07-13 08:20:56.327758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.755 qpair failed and we were unable to recover it. 00:34:04.755 [2024-07-13 08:20:56.327984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.755 [2024-07-13 08:20:56.328014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.755 qpair failed and we were unable to recover it. 00:34:04.755 [2024-07-13 08:20:56.328214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.755 [2024-07-13 08:20:56.328239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.755 qpair failed and we were unable to recover it. 00:34:04.755 [2024-07-13 08:20:56.328382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.755 [2024-07-13 08:20:56.328407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.755 qpair failed and we were unable to recover it. 
00:34:04.755 [2024-07-13 08:20:56.328518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.755 [2024-07-13 08:20:56.328543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.755 qpair failed and we were unable to recover it. 00:34:04.755 [2024-07-13 08:20:56.328685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.755 [2024-07-13 08:20:56.328713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.755 qpair failed and we were unable to recover it. 00:34:04.755 [2024-07-13 08:20:56.328879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.755 [2024-07-13 08:20:56.328911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.755 qpair failed and we were unable to recover it. 00:34:04.755 [2024-07-13 08:20:56.329066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.755 [2024-07-13 08:20:56.329092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.755 qpair failed and we were unable to recover it. 00:34:04.755 [2024-07-13 08:20:56.329244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.755 [2024-07-13 08:20:56.329269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.755 qpair failed and we were unable to recover it. 00:34:04.755 [2024-07-13 08:20:56.329409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.755 [2024-07-13 08:20:56.329434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.755 qpair failed and we were unable to recover it. 00:34:04.755 [2024-07-13 08:20:56.329560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.755 [2024-07-13 08:20:56.329585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.755 qpair failed and we were unable to recover it. 00:34:04.755 [2024-07-13 08:20:56.329767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.755 [2024-07-13 08:20:56.329795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.755 qpair failed and we were unable to recover it. 00:34:04.755 [2024-07-13 08:20:56.329982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.755 [2024-07-13 08:20:56.330011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.755 qpair failed and we were unable to recover it. 00:34:04.755 [2024-07-13 08:20:56.330225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.755 [2024-07-13 08:20:56.330253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.755 qpair failed and we were unable to recover it. 
00:34:04.755 [2024-07-13 08:20:56.330512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.755 [2024-07-13 08:20:56.330566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.755 qpair failed and we were unable to recover it. 00:34:04.755 [2024-07-13 08:20:56.330765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.755 [2024-07-13 08:20:56.330790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.755 qpair failed and we were unable to recover it. 00:34:04.755 [2024-07-13 08:20:56.330940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.755 [2024-07-13 08:20:56.330966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.755 qpair failed and we were unable to recover it. 00:34:04.755 [2024-07-13 08:20:56.331150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.755 [2024-07-13 08:20:56.331176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.755 qpair failed and we were unable to recover it. 00:34:04.755 [2024-07-13 08:20:56.331295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.756 [2024-07-13 08:20:56.331321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.756 qpair failed and we were unable to recover it. 00:34:04.756 [2024-07-13 08:20:56.331444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.756 [2024-07-13 08:20:56.331469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.756 qpair failed and we were unable to recover it. 00:34:04.756 [2024-07-13 08:20:56.331600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.756 [2024-07-13 08:20:56.331626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.756 qpair failed and we were unable to recover it. 00:34:04.756 [2024-07-13 08:20:56.331778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.756 [2024-07-13 08:20:56.331803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.756 qpair failed and we were unable to recover it. 00:34:04.756 [2024-07-13 08:20:56.331981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.756 [2024-07-13 08:20:56.332007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.756 qpair failed and we were unable to recover it. 00:34:04.756 [2024-07-13 08:20:56.332126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.756 [2024-07-13 08:20:56.332151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.756 qpair failed and we were unable to recover it. 
00:34:04.756 [2024-07-13 08:20:56.332324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.756 [2024-07-13 08:20:56.332349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.756 qpair failed and we were unable to recover it. 00:34:04.756 [2024-07-13 08:20:56.332477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.756 [2024-07-13 08:20:56.332502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.756 qpair failed and we were unable to recover it. 00:34:04.756 [2024-07-13 08:20:56.332707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.756 [2024-07-13 08:20:56.332735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.756 qpair failed and we were unable to recover it. 00:34:04.756 [2024-07-13 08:20:56.332898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.756 [2024-07-13 08:20:56.332925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.756 qpair failed and we were unable to recover it. 00:34:04.756 [2024-07-13 08:20:56.333078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.756 [2024-07-13 08:20:56.333103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.756 qpair failed and we were unable to recover it. 00:34:04.756 [2024-07-13 08:20:56.333256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.756 [2024-07-13 08:20:56.333281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.756 qpair failed and we were unable to recover it. 00:34:04.756 [2024-07-13 08:20:56.333425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.756 [2024-07-13 08:20:56.333450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.756 qpair failed and we were unable to recover it. 00:34:04.756 [2024-07-13 08:20:56.333607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.756 [2024-07-13 08:20:56.333632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.756 qpair failed and we were unable to recover it. 00:34:04.756 [2024-07-13 08:20:56.333778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.756 [2024-07-13 08:20:56.333803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.756 qpair failed and we were unable to recover it. 00:34:04.756 [2024-07-13 08:20:56.333949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.756 [2024-07-13 08:20:56.333975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.756 qpair failed and we were unable to recover it. 
00:34:04.756 [2024-07-13 08:20:56.334091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.756 [2024-07-13 08:20:56.334117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.756 qpair failed and we were unable to recover it. 00:34:04.756 [2024-07-13 08:20:56.334270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.756 [2024-07-13 08:20:56.334296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.756 qpair failed and we were unable to recover it. 00:34:04.756 [2024-07-13 08:20:56.334441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.756 [2024-07-13 08:20:56.334466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.756 qpair failed and we were unable to recover it. 00:34:04.756 [2024-07-13 08:20:56.334590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.756 [2024-07-13 08:20:56.334616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.756 qpair failed and we were unable to recover it. 00:34:04.756 [2024-07-13 08:20:56.334766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.756 [2024-07-13 08:20:56.334792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.756 qpair failed and we were unable to recover it. 00:34:04.756 [2024-07-13 08:20:56.334907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.756 [2024-07-13 08:20:56.334937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.756 qpair failed and we were unable to recover it. 00:34:04.756 [2024-07-13 08:20:56.335087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.756 [2024-07-13 08:20:56.335112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.756 qpair failed and we were unable to recover it. 00:34:04.756 [2024-07-13 08:20:56.335257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.756 [2024-07-13 08:20:56.335282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.756 qpair failed and we were unable to recover it. 00:34:04.756 [2024-07-13 08:20:56.335430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.756 [2024-07-13 08:20:56.335455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.756 qpair failed and we were unable to recover it. 00:34:04.756 [2024-07-13 08:20:56.335598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.756 [2024-07-13 08:20:56.335623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.756 qpair failed and we were unable to recover it. 
00:34:04.756 [2024-07-13 08:20:56.335774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.756 [2024-07-13 08:20:56.335817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.756 qpair failed and we were unable to recover it. 00:34:04.756 [2024-07-13 08:20:56.335990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.756 [2024-07-13 08:20:56.336016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.756 qpair failed and we were unable to recover it. 00:34:04.756 [2024-07-13 08:20:56.336170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.756 [2024-07-13 08:20:56.336195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.756 qpair failed and we were unable to recover it. 00:34:04.756 [2024-07-13 08:20:56.336316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.756 [2024-07-13 08:20:56.336341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.756 qpair failed and we were unable to recover it. 00:34:04.756 [2024-07-13 08:20:56.336515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.756 [2024-07-13 08:20:56.336540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.756 qpair failed and we were unable to recover it. 00:34:04.756 [2024-07-13 08:20:56.336713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.756 [2024-07-13 08:20:56.336738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.756 qpair failed and we were unable to recover it. 00:34:04.756 [2024-07-13 08:20:56.336895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.756 [2024-07-13 08:20:56.336922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.756 qpair failed and we were unable to recover it. 00:34:04.756 [2024-07-13 08:20:56.337099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.756 [2024-07-13 08:20:56.337125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.756 qpair failed and we were unable to recover it. 00:34:04.756 [2024-07-13 08:20:56.337277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.756 [2024-07-13 08:20:56.337302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.756 qpair failed and we were unable to recover it. 00:34:04.756 [2024-07-13 08:20:56.337458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.756 [2024-07-13 08:20:56.337483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.756 qpair failed and we were unable to recover it. 
00:34:04.756 [2024-07-13 08:20:56.337659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.756 [2024-07-13 08:20:56.337685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.756 qpair failed and we were unable to recover it. 00:34:04.756 [2024-07-13 08:20:56.337813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.756 [2024-07-13 08:20:56.337838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.756 qpair failed and we were unable to recover it. 00:34:04.756 [2024-07-13 08:20:56.338009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.756 [2024-07-13 08:20:56.338035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.756 qpair failed and we were unable to recover it. 00:34:04.756 [2024-07-13 08:20:56.338185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.756 [2024-07-13 08:20:56.338210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.756 qpair failed and we were unable to recover it. 00:34:04.756 [2024-07-13 08:20:56.338334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.756 [2024-07-13 08:20:56.338359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.756 qpair failed and we were unable to recover it. 00:34:04.756 [2024-07-13 08:20:56.338510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.756 [2024-07-13 08:20:56.338536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.756 qpair failed and we were unable to recover it. 00:34:04.756 [2024-07-13 08:20:56.338711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.756 [2024-07-13 08:20:56.338736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.756 qpair failed and we were unable to recover it. 00:34:04.756 [2024-07-13 08:20:56.338882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.756 [2024-07-13 08:20:56.338908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.756 qpair failed and we were unable to recover it. 00:34:04.756 [2024-07-13 08:20:56.339068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.756 [2024-07-13 08:20:56.339093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.756 qpair failed and we were unable to recover it. 00:34:04.756 [2024-07-13 08:20:56.339236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.756 [2024-07-13 08:20:56.339261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.756 qpair failed and we were unable to recover it. 
00:34:04.756 [2024-07-13 08:20:56.339412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.756 [2024-07-13 08:20:56.339437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.756 qpair failed and we were unable to recover it. 00:34:04.756 [2024-07-13 08:20:56.339617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.756 [2024-07-13 08:20:56.339642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.756 qpair failed and we were unable to recover it. 00:34:04.756 [2024-07-13 08:20:56.339764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.756 [2024-07-13 08:20:56.339789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.756 qpair failed and we were unable to recover it. 00:34:04.756 [2024-07-13 08:20:56.339942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.756 [2024-07-13 08:20:56.339967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.756 qpair failed and we were unable to recover it. 00:34:04.756 [2024-07-13 08:20:56.340140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.756 [2024-07-13 08:20:56.340168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.756 qpair failed and we were unable to recover it. 00:34:04.756 [2024-07-13 08:20:56.340356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.756 [2024-07-13 08:20:56.340384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.756 qpair failed and we were unable to recover it. 00:34:04.756 [2024-07-13 08:20:56.340592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.756 [2024-07-13 08:20:56.340620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.756 qpair failed and we were unable to recover it. 00:34:04.756 [2024-07-13 08:20:56.340836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.756 [2024-07-13 08:20:56.340875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.756 qpair failed and we were unable to recover it. 00:34:04.756 [2024-07-13 08:20:56.341037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.756 [2024-07-13 08:20:56.341062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.756 qpair failed and we were unable to recover it. 00:34:04.756 [2024-07-13 08:20:56.341229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.756 [2024-07-13 08:20:56.341256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.756 qpair failed and we were unable to recover it. 
00:34:04.756 [2024-07-13 08:20:56.341559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.756 [2024-07-13 08:20:56.341609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.756 qpair failed and we were unable to recover it. 00:34:04.756 [2024-07-13 08:20:56.341770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.756 [2024-07-13 08:20:56.341800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.756 qpair failed and we were unable to recover it. 00:34:04.756 [2024-07-13 08:20:56.342004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.756 [2024-07-13 08:20:56.342030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.756 qpair failed and we were unable to recover it. 00:34:04.757 [2024-07-13 08:20:56.342181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.757 [2024-07-13 08:20:56.342206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.757 qpair failed and we were unable to recover it. 00:34:04.757 [2024-07-13 08:20:56.342326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.757 [2024-07-13 08:20:56.342351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.757 qpair failed and we were unable to recover it. 00:34:04.757 [2024-07-13 08:20:56.342465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.757 [2024-07-13 08:20:56.342491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.757 qpair failed and we were unable to recover it. 00:34:04.757 [2024-07-13 08:20:56.342647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.757 [2024-07-13 08:20:56.342673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.757 qpair failed and we were unable to recover it. 00:34:04.757 [2024-07-13 08:20:56.342825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.757 [2024-07-13 08:20:56.342850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.757 qpair failed and we were unable to recover it. 00:34:04.757 [2024-07-13 08:20:56.342978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.757 [2024-07-13 08:20:56.343003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.757 qpair failed and we were unable to recover it. 00:34:04.757 [2024-07-13 08:20:56.343130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.757 [2024-07-13 08:20:56.343155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.757 qpair failed and we were unable to recover it. 
00:34:04.757 [2024-07-13 08:20:56.343272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.757 [2024-07-13 08:20:56.343297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.757 qpair failed and we were unable to recover it. 00:34:04.757 [2024-07-13 08:20:56.343448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.757 [2024-07-13 08:20:56.343473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.757 qpair failed and we were unable to recover it. 00:34:04.757 [2024-07-13 08:20:56.343644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.757 [2024-07-13 08:20:56.343672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.757 qpair failed and we were unable to recover it. 00:34:04.757 [2024-07-13 08:20:56.343888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.757 [2024-07-13 08:20:56.343931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.757 qpair failed and we were unable to recover it. 00:34:04.757 [2024-07-13 08:20:56.344057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.757 [2024-07-13 08:20:56.344082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.757 qpair failed and we were unable to recover it. 00:34:04.757 [2024-07-13 08:20:56.344207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.757 [2024-07-13 08:20:56.344232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.757 qpair failed and we were unable to recover it. 00:34:04.757 [2024-07-13 08:20:56.344346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.757 [2024-07-13 08:20:56.344370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.757 qpair failed and we were unable to recover it. 00:34:04.757 [2024-07-13 08:20:56.344516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.757 [2024-07-13 08:20:56.344541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.757 qpair failed and we were unable to recover it. 00:34:04.757 [2024-07-13 08:20:56.344665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.757 [2024-07-13 08:20:56.344691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.757 qpair failed and we were unable to recover it. 00:34:04.757 [2024-07-13 08:20:56.344845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.757 [2024-07-13 08:20:56.344879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.757 qpair failed and we were unable to recover it. 
00:34:04.757 [2024-07-13 08:20:56.345046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.757 [2024-07-13 08:20:56.345072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.757 qpair failed and we were unable to recover it. 00:34:04.757 [2024-07-13 08:20:56.345227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.757 [2024-07-13 08:20:56.345253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.757 qpair failed and we were unable to recover it. 00:34:04.757 [2024-07-13 08:20:56.345382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.757 [2024-07-13 08:20:56.345407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.757 qpair failed and we were unable to recover it. 00:34:04.757 [2024-07-13 08:20:56.345535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.757 [2024-07-13 08:20:56.345560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.757 qpair failed and we were unable to recover it. 00:34:04.757 [2024-07-13 08:20:56.345672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.757 [2024-07-13 08:20:56.345698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.757 qpair failed and we were unable to recover it. 00:34:04.757 [2024-07-13 08:20:56.345851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.757 [2024-07-13 08:20:56.345890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.757 qpair failed and we were unable to recover it. 00:34:04.757 [2024-07-13 08:20:56.346043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.757 [2024-07-13 08:20:56.346068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.757 qpair failed and we were unable to recover it. 00:34:04.757 [2024-07-13 08:20:56.346220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.757 [2024-07-13 08:20:56.346245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.757 qpair failed and we were unable to recover it. 00:34:04.757 [2024-07-13 08:20:56.346423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.757 [2024-07-13 08:20:56.346448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.757 qpair failed and we were unable to recover it. 00:34:04.757 [2024-07-13 08:20:56.346589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.757 [2024-07-13 08:20:56.346614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.757 qpair failed and we were unable to recover it. 
00:34:04.757 [2024-07-13 08:20:56.346771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.757 [2024-07-13 08:20:56.346797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.757 qpair failed and we were unable to recover it. 00:34:04.757 [2024-07-13 08:20:56.346950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.757 [2024-07-13 08:20:56.346976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.757 qpair failed and we were unable to recover it. 00:34:04.757 [2024-07-13 08:20:56.347147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.757 [2024-07-13 08:20:56.347172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.757 qpair failed and we were unable to recover it. 00:34:04.757 [2024-07-13 08:20:56.347324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.757 [2024-07-13 08:20:56.347372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.757 qpair failed and we were unable to recover it. 00:34:04.757 [2024-07-13 08:20:56.347516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.757 [2024-07-13 08:20:56.347541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.757 qpair failed and we were unable to recover it. 00:34:04.757 [2024-07-13 08:20:56.347707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.757 [2024-07-13 08:20:56.347735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.757 qpair failed and we were unable to recover it. 00:34:04.757 [2024-07-13 08:20:56.347940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.757 [2024-07-13 08:20:56.347966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.757 qpair failed and we were unable to recover it. 00:34:04.757 [2024-07-13 08:20:56.348096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.757 [2024-07-13 08:20:56.348121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.757 qpair failed and we were unable to recover it. 00:34:04.757 [2024-07-13 08:20:56.348244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.757 [2024-07-13 08:20:56.348269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.757 qpair failed and we were unable to recover it. 00:34:04.757 [2024-07-13 08:20:56.348395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.757 [2024-07-13 08:20:56.348420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.757 qpair failed and we were unable to recover it. 
00:34:04.757 [2024-07-13 08:20:56.348529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.757 [2024-07-13 08:20:56.348554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.757 qpair failed and we were unable to recover it. 00:34:04.757 [2024-07-13 08:20:56.348708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.757 [2024-07-13 08:20:56.348734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.757 qpair failed and we were unable to recover it. 00:34:04.757 [2024-07-13 08:20:56.348895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.757 [2024-07-13 08:20:56.348928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.757 qpair failed and we were unable to recover it. 00:34:04.757 [2024-07-13 08:20:56.349107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.757 [2024-07-13 08:20:56.349133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.757 qpair failed and we were unable to recover it. 00:34:04.757 [2024-07-13 08:20:56.349282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.757 [2024-07-13 08:20:56.349307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.757 qpair failed and we were unable to recover it. 00:34:04.757 [2024-07-13 08:20:56.349432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.757 [2024-07-13 08:20:56.349458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.757 qpair failed and we were unable to recover it. 00:34:04.757 [2024-07-13 08:20:56.349606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.757 [2024-07-13 08:20:56.349631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.757 qpair failed and we were unable to recover it. 00:34:04.757 [2024-07-13 08:20:56.349756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.757 [2024-07-13 08:20:56.349781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.757 qpair failed and we were unable to recover it. 00:34:04.757 [2024-07-13 08:20:56.349929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.757 [2024-07-13 08:20:56.349955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.757 qpair failed and we were unable to recover it. 00:34:04.757 [2024-07-13 08:20:56.350078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.757 [2024-07-13 08:20:56.350103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.757 qpair failed and we were unable to recover it. 
00:34:04.757 [2024-07-13 08:20:56.350278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.757 [2024-07-13 08:20:56.350303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.757 qpair failed and we were unable to recover it. 00:34:04.757 [2024-07-13 08:20:56.350447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.757 [2024-07-13 08:20:56.350472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.757 qpair failed and we were unable to recover it. 00:34:04.757 [2024-07-13 08:20:56.350649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.757 [2024-07-13 08:20:56.350674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.757 qpair failed and we were unable to recover it. 00:34:04.757 [2024-07-13 08:20:56.350822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.757 [2024-07-13 08:20:56.350847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.757 qpair failed and we were unable to recover it. 00:34:04.757 [2024-07-13 08:20:56.351012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.757 [2024-07-13 08:20:56.351038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.757 qpair failed and we were unable to recover it. 00:34:04.758 [2024-07-13 08:20:56.351157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.758 [2024-07-13 08:20:56.351182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.758 qpair failed and we were unable to recover it. 00:34:04.758 [2024-07-13 08:20:56.351300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.758 [2024-07-13 08:20:56.351325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.758 qpair failed and we were unable to recover it. 00:34:04.758 [2024-07-13 08:20:56.351468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.758 [2024-07-13 08:20:56.351494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.758 qpair failed and we were unable to recover it. 00:34:04.758 [2024-07-13 08:20:56.351683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.758 [2024-07-13 08:20:56.351711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.758 qpair failed and we were unable to recover it. 00:34:04.758 [2024-07-13 08:20:56.351884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.758 [2024-07-13 08:20:56.351910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.758 qpair failed and we were unable to recover it. 
00:34:04.758 [2024-07-13 08:20:56.352046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.758 [2024-07-13 08:20:56.352072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.758 qpair failed and we were unable to recover it. 00:34:04.758 [2024-07-13 08:20:56.352229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.758 [2024-07-13 08:20:56.352254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.758 qpair failed and we were unable to recover it. 00:34:04.758 [2024-07-13 08:20:56.352414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.758 [2024-07-13 08:20:56.352439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.758 qpair failed and we were unable to recover it. 00:34:04.758 [2024-07-13 08:20:56.352558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.758 [2024-07-13 08:20:56.352583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.758 qpair failed and we were unable to recover it. 00:34:04.758 [2024-07-13 08:20:56.352725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.758 [2024-07-13 08:20:56.352750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.758 qpair failed and we were unable to recover it. 00:34:04.758 [2024-07-13 08:20:56.352892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.758 [2024-07-13 08:20:56.352919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.758 qpair failed and we were unable to recover it. 00:34:04.758 [2024-07-13 08:20:56.353096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.758 [2024-07-13 08:20:56.353122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.758 qpair failed and we were unable to recover it. 00:34:04.758 [2024-07-13 08:20:56.353275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.758 [2024-07-13 08:20:56.353300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.758 qpair failed and we were unable to recover it. 00:34:04.758 [2024-07-13 08:20:56.353450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.758 [2024-07-13 08:20:56.353475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.758 qpair failed and we were unable to recover it. 00:34:04.758 [2024-07-13 08:20:56.353590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.758 [2024-07-13 08:20:56.353615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.758 qpair failed and we were unable to recover it. 
00:34:04.758 [2024-07-13 08:20:56.353764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.758 [2024-07-13 08:20:56.353789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.758 qpair failed and we were unable to recover it. 00:34:04.758 [2024-07-13 08:20:56.353908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.758 [2024-07-13 08:20:56.353933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.758 qpair failed and we were unable to recover it. 00:34:04.758 [2024-07-13 08:20:56.354085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.758 [2024-07-13 08:20:56.354111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.758 qpair failed and we were unable to recover it. 00:34:04.758 [2024-07-13 08:20:56.354265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.758 [2024-07-13 08:20:56.354291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.758 qpair failed and we were unable to recover it. 00:34:04.758 [2024-07-13 08:20:56.354416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.758 [2024-07-13 08:20:56.354445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.758 qpair failed and we were unable to recover it. 00:34:04.758 [2024-07-13 08:20:56.354622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.758 [2024-07-13 08:20:56.354648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.758 qpair failed and we were unable to recover it. 00:34:04.758 [2024-07-13 08:20:56.354774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.758 [2024-07-13 08:20:56.354799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.758 qpair failed and we were unable to recover it. 00:34:04.758 [2024-07-13 08:20:56.354974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.758 [2024-07-13 08:20:56.355000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.758 qpair failed and we were unable to recover it. 00:34:04.758 [2024-07-13 08:20:56.355155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.758 [2024-07-13 08:20:56.355181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.758 qpair failed and we were unable to recover it. 00:34:04.758 [2024-07-13 08:20:56.355307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.758 [2024-07-13 08:20:56.355333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.758 qpair failed and we were unable to recover it. 
00:34:04.758 [2024-07-13 08:20:56.355483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.758 [2024-07-13 08:20:56.355508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.758 qpair failed and we were unable to recover it. 00:34:04.758 [2024-07-13 08:20:56.355675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.758 [2024-07-13 08:20:56.355703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.758 qpair failed and we were unable to recover it. 00:34:04.758 [2024-07-13 08:20:56.355875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.758 [2024-07-13 08:20:56.355901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.758 qpair failed and we were unable to recover it. 00:34:04.758 [2024-07-13 08:20:56.356049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.758 [2024-07-13 08:20:56.356074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.758 qpair failed and we were unable to recover it. 00:34:04.758 [2024-07-13 08:20:56.356226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.758 [2024-07-13 08:20:56.356252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.758 qpair failed and we were unable to recover it. 00:34:04.758 [2024-07-13 08:20:56.356374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.758 [2024-07-13 08:20:56.356399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.758 qpair failed and we were unable to recover it. 00:34:04.758 [2024-07-13 08:20:56.356552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.758 [2024-07-13 08:20:56.356578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.758 qpair failed and we were unable to recover it. 00:34:04.758 [2024-07-13 08:20:56.356768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.758 [2024-07-13 08:20:56.356793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.758 qpair failed and we were unable to recover it. 00:34:04.758 [2024-07-13 08:20:56.356925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.758 [2024-07-13 08:20:56.356952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.758 qpair failed and we were unable to recover it. 00:34:04.758 [2024-07-13 08:20:56.357104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.758 [2024-07-13 08:20:56.357129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.758 qpair failed and we were unable to recover it. 
00:34:04.758 [2024-07-13 08:20:56.357286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.758 [2024-07-13 08:20:56.357312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.758 qpair failed and we were unable to recover it. 00:34:04.758 [2024-07-13 08:20:56.357458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.758 [2024-07-13 08:20:56.357483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.758 qpair failed and we were unable to recover it. 00:34:04.758 [2024-07-13 08:20:56.357654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.758 [2024-07-13 08:20:56.357680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.758 qpair failed and we were unable to recover it. 00:34:04.758 [2024-07-13 08:20:56.357833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.758 [2024-07-13 08:20:56.357860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.758 qpair failed and we were unable to recover it. 00:34:04.758 [2024-07-13 08:20:56.358033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.758 [2024-07-13 08:20:56.358059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.758 qpair failed and we were unable to recover it. 00:34:04.758 [2024-07-13 08:20:56.358233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.758 [2024-07-13 08:20:56.358258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.758 qpair failed and we were unable to recover it. 00:34:04.758 [2024-07-13 08:20:56.358436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.758 [2024-07-13 08:20:56.358461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.758 qpair failed and we were unable to recover it. 00:34:04.758 [2024-07-13 08:20:56.358573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.758 [2024-07-13 08:20:56.358599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.758 qpair failed and we were unable to recover it. 00:34:04.758 [2024-07-13 08:20:56.358773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.758 [2024-07-13 08:20:56.358801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.758 qpair failed and we were unable to recover it. 00:34:04.758 [2024-07-13 08:20:56.358980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.758 [2024-07-13 08:20:56.359006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.758 qpair failed and we were unable to recover it. 
00:34:04.758 [2024-07-13 08:20:56.359152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.758 [2024-07-13 08:20:56.359177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.758 qpair failed and we were unable to recover it. 00:34:04.758 [2024-07-13 08:20:56.359298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.758 [2024-07-13 08:20:56.359327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.758 qpair failed and we were unable to recover it. 00:34:04.758 [2024-07-13 08:20:56.359477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.758 [2024-07-13 08:20:56.359502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.758 qpair failed and we were unable to recover it. 00:34:04.758 [2024-07-13 08:20:56.359647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.758 [2024-07-13 08:20:56.359672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.758 qpair failed and we were unable to recover it. 00:34:04.758 [2024-07-13 08:20:56.359791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.758 [2024-07-13 08:20:56.359816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.758 qpair failed and we were unable to recover it. 00:34:04.758 [2024-07-13 08:20:56.359977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.758 [2024-07-13 08:20:56.360004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.758 qpair failed and we were unable to recover it. 00:34:04.758 [2024-07-13 08:20:56.360124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.758 [2024-07-13 08:20:56.360149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.758 qpair failed and we were unable to recover it. 00:34:04.758 [2024-07-13 08:20:56.360266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.758 [2024-07-13 08:20:56.360291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.758 qpair failed and we were unable to recover it. 00:34:04.758 [2024-07-13 08:20:56.360469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.758 [2024-07-13 08:20:56.360495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.758 qpair failed and we were unable to recover it. 00:34:04.758 [2024-07-13 08:20:56.360642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.758 [2024-07-13 08:20:56.360667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.758 qpair failed and we were unable to recover it. 
00:34:04.758 [2024-07-13 08:20:56.360815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.758 [2024-07-13 08:20:56.360840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.758 qpair failed and we were unable to recover it. 00:34:04.758 [2024-07-13 08:20:56.361034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.758 [2024-07-13 08:20:56.361061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.758 qpair failed and we were unable to recover it. 00:34:04.758 [2024-07-13 08:20:56.361174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.758 [2024-07-13 08:20:56.361199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.758 qpair failed and we were unable to recover it. 00:34:04.758 [2024-07-13 08:20:56.361313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.758 [2024-07-13 08:20:56.361338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.758 qpair failed and we were unable to recover it. 00:34:04.759 [2024-07-13 08:20:56.361497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.759 [2024-07-13 08:20:56.361522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.759 qpair failed and we were unable to recover it. 00:34:04.759 [2024-07-13 08:20:56.361670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.759 [2024-07-13 08:20:56.361696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.759 qpair failed and we were unable to recover it. 00:34:04.759 [2024-07-13 08:20:56.361849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.759 [2024-07-13 08:20:56.361890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.759 qpair failed and we were unable to recover it. 00:34:04.759 [2024-07-13 08:20:56.362070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.759 [2024-07-13 08:20:56.362096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.759 qpair failed and we were unable to recover it. 00:34:04.759 [2024-07-13 08:20:56.362242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.759 [2024-07-13 08:20:56.362267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.759 qpair failed and we were unable to recover it. 00:34:04.759 [2024-07-13 08:20:56.362442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.759 [2024-07-13 08:20:56.362467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.759 qpair failed and we were unable to recover it. 
00:34:04.759 [2024-07-13 08:20:56.362623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.759 [2024-07-13 08:20:56.362649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.759 qpair failed and we were unable to recover it. 00:34:04.759 [2024-07-13 08:20:56.362820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.759 [2024-07-13 08:20:56.362845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.759 qpair failed and we were unable to recover it. 00:34:04.759 [2024-07-13 08:20:56.362968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.759 [2024-07-13 08:20:56.362993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.759 qpair failed and we were unable to recover it. 00:34:04.759 [2024-07-13 08:20:56.363143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.759 [2024-07-13 08:20:56.363168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.759 qpair failed and we were unable to recover it. 00:34:04.759 [2024-07-13 08:20:56.363339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.759 [2024-07-13 08:20:56.363365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.759 qpair failed and we were unable to recover it. 00:34:04.759 [2024-07-13 08:20:56.363512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.759 [2024-07-13 08:20:56.363537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.759 qpair failed and we were unable to recover it. 00:34:04.759 [2024-07-13 08:20:56.363687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.759 [2024-07-13 08:20:56.363712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.759 qpair failed and we were unable to recover it. 00:34:04.759 [2024-07-13 08:20:56.363858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.759 [2024-07-13 08:20:56.363891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.759 qpair failed and we were unable to recover it. 00:34:04.759 [2024-07-13 08:20:56.364066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.759 [2024-07-13 08:20:56.364091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.759 qpair failed and we were unable to recover it. 00:34:04.759 [2024-07-13 08:20:56.364243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.759 [2024-07-13 08:20:56.364269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.759 qpair failed and we were unable to recover it. 
00:34:04.759 [2024-07-13 08:20:56.364446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.759 [2024-07-13 08:20:56.364471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.759 qpair failed and we were unable to recover it. 00:34:04.759 [2024-07-13 08:20:56.364624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.759 [2024-07-13 08:20:56.364651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.759 qpair failed and we were unable to recover it. 00:34:04.759 [2024-07-13 08:20:56.364766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.759 [2024-07-13 08:20:56.364792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.759 qpair failed and we were unable to recover it. 00:34:04.759 [2024-07-13 08:20:56.364938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.759 [2024-07-13 08:20:56.364965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.759 qpair failed and we were unable to recover it. 00:34:04.759 [2024-07-13 08:20:56.365141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.759 [2024-07-13 08:20:56.365166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.759 qpair failed and we were unable to recover it. 00:34:04.759 [2024-07-13 08:20:56.365318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.759 [2024-07-13 08:20:56.365344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.759 qpair failed and we were unable to recover it. 00:34:04.759 [2024-07-13 08:20:56.365491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.759 [2024-07-13 08:20:56.365515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.759 qpair failed and we were unable to recover it. 00:34:04.759 [2024-07-13 08:20:56.365641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.759 [2024-07-13 08:20:56.365666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.759 qpair failed and we were unable to recover it. 00:34:04.759 [2024-07-13 08:20:56.365845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.759 [2024-07-13 08:20:56.365877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.759 qpair failed and we were unable to recover it. 00:34:04.759 [2024-07-13 08:20:56.365989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.759 [2024-07-13 08:20:56.366013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.759 qpair failed and we were unable to recover it. 
00:34:04.759 [2024-07-13 08:20:56.366140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.759 [2024-07-13 08:20:56.366166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.759 qpair failed and we were unable to recover it. 00:34:04.759 [2024-07-13 08:20:56.366287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.759 [2024-07-13 08:20:56.366312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.759 qpair failed and we were unable to recover it. 00:34:04.759 [2024-07-13 08:20:56.366428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.759 [2024-07-13 08:20:56.366456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.759 qpair failed and we were unable to recover it. 00:34:04.759 [2024-07-13 08:20:56.366628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.759 [2024-07-13 08:20:56.366652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.759 qpair failed and we were unable to recover it. 00:34:04.759 [2024-07-13 08:20:56.366803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.759 [2024-07-13 08:20:56.366828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.759 qpair failed and we were unable to recover it. 00:34:04.759 [2024-07-13 08:20:56.366984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.759 [2024-07-13 08:20:56.367010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.759 qpair failed and we were unable to recover it. 00:34:04.759 [2024-07-13 08:20:56.367155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.759 [2024-07-13 08:20:56.367180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.759 qpair failed and we were unable to recover it. 00:34:04.759 [2024-07-13 08:20:56.367299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.759 [2024-07-13 08:20:56.367324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.759 qpair failed and we were unable to recover it. 00:34:04.759 [2024-07-13 08:20:56.367499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.759 [2024-07-13 08:20:56.367524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.759 qpair failed and we were unable to recover it. 00:34:04.759 [2024-07-13 08:20:56.367669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.759 [2024-07-13 08:20:56.367694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.759 qpair failed and we were unable to recover it. 
00:34:04.759 [2024-07-13 08:20:56.367847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.759 [2024-07-13 08:20:56.367879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.759 qpair failed and we were unable to recover it. 00:34:04.759 [2024-07-13 08:20:56.368030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.759 [2024-07-13 08:20:56.368055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.759 qpair failed and we were unable to recover it. 00:34:04.759 [2024-07-13 08:20:56.368197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.759 [2024-07-13 08:20:56.368222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.759 qpair failed and we were unable to recover it. 00:34:04.759 [2024-07-13 08:20:56.368398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.759 [2024-07-13 08:20:56.368423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.759 qpair failed and we were unable to recover it. 00:34:04.759 [2024-07-13 08:20:56.368565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.759 [2024-07-13 08:20:56.368590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.759 qpair failed and we were unable to recover it. 00:34:04.759 [2024-07-13 08:20:56.368734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.759 [2024-07-13 08:20:56.368760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.759 qpair failed and we were unable to recover it. 00:34:04.759 [2024-07-13 08:20:56.368913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.759 [2024-07-13 08:20:56.368940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.759 qpair failed and we were unable to recover it. 00:34:04.759 [2024-07-13 08:20:56.369084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.759 [2024-07-13 08:20:56.369110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.759 qpair failed and we were unable to recover it. 00:34:04.759 [2024-07-13 08:20:56.369281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.759 [2024-07-13 08:20:56.369309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.759 qpair failed and we were unable to recover it. 00:34:04.759 [2024-07-13 08:20:56.369538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.759 [2024-07-13 08:20:56.369594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.759 qpair failed and we were unable to recover it. 
00:34:04.759 [2024-07-13 08:20:56.369821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.759 [2024-07-13 08:20:56.369848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.759 qpair failed and we were unable to recover it. 00:34:04.759 [2024-07-13 08:20:56.370037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.759 [2024-07-13 08:20:56.370063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.759 qpair failed and we were unable to recover it. 00:34:04.759 [2024-07-13 08:20:56.370217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.759 [2024-07-13 08:20:56.370242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.759 qpair failed and we were unable to recover it. 00:34:04.759 [2024-07-13 08:20:56.370388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.759 [2024-07-13 08:20:56.370413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.759 qpair failed and we were unable to recover it. 00:34:04.759 [2024-07-13 08:20:56.370562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.759 [2024-07-13 08:20:56.370587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.759 qpair failed and we were unable to recover it. 00:34:04.759 [2024-07-13 08:20:56.370739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.759 [2024-07-13 08:20:56.370764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.759 qpair failed and we were unable to recover it. 00:34:04.759 [2024-07-13 08:20:56.370919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.759 [2024-07-13 08:20:56.370944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.759 qpair failed and we were unable to recover it. 00:34:04.759 [2024-07-13 08:20:56.371061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.759 [2024-07-13 08:20:56.371086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.759 qpair failed and we were unable to recover it. 00:34:04.759 [2024-07-13 08:20:56.371241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.759 [2024-07-13 08:20:56.371266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.759 qpair failed and we were unable to recover it. 00:34:04.759 [2024-07-13 08:20:56.371412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.759 [2024-07-13 08:20:56.371441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.759 qpair failed and we were unable to recover it. 
00:34:04.759 [2024-07-13 08:20:56.371619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.759 [2024-07-13 08:20:56.371645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.759 qpair failed and we were unable to recover it. 00:34:04.759 [2024-07-13 08:20:56.371790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.759 [2024-07-13 08:20:56.371815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.759 qpair failed and we were unable to recover it. 00:34:04.759 [2024-07-13 08:20:56.371964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.759 [2024-07-13 08:20:56.371990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.759 qpair failed and we were unable to recover it. 00:34:04.759 [2024-07-13 08:20:56.372113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.759 [2024-07-13 08:20:56.372138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.759 qpair failed and we were unable to recover it. 00:34:04.759 [2024-07-13 08:20:56.372313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.759 [2024-07-13 08:20:56.372338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.759 qpair failed and we were unable to recover it. 00:34:04.759 [2024-07-13 08:20:56.372514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.759 [2024-07-13 08:20:56.372539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.760 qpair failed and we were unable to recover it. 00:34:04.760 [2024-07-13 08:20:56.372685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.760 [2024-07-13 08:20:56.372709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.760 qpair failed and we were unable to recover it. 00:34:04.760 [2024-07-13 08:20:56.372885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.760 [2024-07-13 08:20:56.372913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.760 qpair failed and we were unable to recover it. 00:34:04.760 [2024-07-13 08:20:56.373061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.760 [2024-07-13 08:20:56.373086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.760 qpair failed and we were unable to recover it. 00:34:04.760 [2024-07-13 08:20:56.373235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.760 [2024-07-13 08:20:56.373262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.760 qpair failed and we were unable to recover it. 
00:34:04.760 [2024-07-13 08:20:56.373413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.760 [2024-07-13 08:20:56.373439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.760 qpair failed and we were unable to recover it. 00:34:04.760 [2024-07-13 08:20:56.373594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.760 [2024-07-13 08:20:56.373619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.760 qpair failed and we were unable to recover it. 00:34:04.760 [2024-07-13 08:20:56.373757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.760 [2024-07-13 08:20:56.373787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.760 qpair failed and we were unable to recover it. 00:34:04.760 [2024-07-13 08:20:56.373992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.760 [2024-07-13 08:20:56.374018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.760 qpair failed and we were unable to recover it. 00:34:04.760 [2024-07-13 08:20:56.374188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.760 [2024-07-13 08:20:56.374216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.760 qpair failed and we were unable to recover it. 00:34:04.760 [2024-07-13 08:20:56.374427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.760 [2024-07-13 08:20:56.374454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.760 qpair failed and we were unable to recover it. 00:34:04.760 [2024-07-13 08:20:56.374645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.760 [2024-07-13 08:20:56.374673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.760 qpair failed and we were unable to recover it. 00:34:04.760 [2024-07-13 08:20:56.374808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.760 [2024-07-13 08:20:56.374832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.760 qpair failed and we were unable to recover it. 00:34:04.760 [2024-07-13 08:20:56.375017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.760 [2024-07-13 08:20:56.375043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.760 qpair failed and we were unable to recover it. 00:34:04.760 [2024-07-13 08:20:56.375195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.760 [2024-07-13 08:20:56.375220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.760 qpair failed and we were unable to recover it. 
00:34:04.760 [2024-07-13 08:20:56.375369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.760 [2024-07-13 08:20:56.375394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.760 qpair failed and we were unable to recover it. 00:34:04.760 [2024-07-13 08:20:56.375509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.760 [2024-07-13 08:20:56.375534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.760 qpair failed and we were unable to recover it. 00:34:04.760 [2024-07-13 08:20:56.375662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.760 [2024-07-13 08:20:56.375702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.760 qpair failed and we were unable to recover it. 00:34:04.760 [2024-07-13 08:20:56.375862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.760 [2024-07-13 08:20:56.375894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.760 qpair failed and we were unable to recover it. 00:34:04.760 [2024-07-13 08:20:56.376044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.760 [2024-07-13 08:20:56.376069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.760 qpair failed and we were unable to recover it. 00:34:04.760 [2024-07-13 08:20:56.376223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.760 [2024-07-13 08:20:56.376248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.760 qpair failed and we were unable to recover it. 00:34:04.760 [2024-07-13 08:20:56.376412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.760 [2024-07-13 08:20:56.376436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.760 qpair failed and we were unable to recover it. 00:34:04.760 [2024-07-13 08:20:56.376616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.760 [2024-07-13 08:20:56.376641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.760 qpair failed and we were unable to recover it. 00:34:04.760 [2024-07-13 08:20:56.376797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.760 [2024-07-13 08:20:56.376822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.760 qpair failed and we were unable to recover it. 00:34:04.760 [2024-07-13 08:20:56.376978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.760 [2024-07-13 08:20:56.377005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.760 qpair failed and we were unable to recover it. 
00:34:04.760 [2024-07-13 08:20:56.377149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.760 [2024-07-13 08:20:56.377175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.760 qpair failed and we were unable to recover it. 00:34:04.760 [2024-07-13 08:20:56.377320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.760 [2024-07-13 08:20:56.377345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.760 qpair failed and we were unable to recover it. 00:34:04.760 [2024-07-13 08:20:56.377496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.760 [2024-07-13 08:20:56.377520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.760 qpair failed and we were unable to recover it. 00:34:04.760 [2024-07-13 08:20:56.377668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.760 [2024-07-13 08:20:56.377693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.760 qpair failed and we were unable to recover it. 00:34:04.760 [2024-07-13 08:20:56.377815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.760 [2024-07-13 08:20:56.377841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.760 qpair failed and we were unable to recover it. 00:34:04.760 [2024-07-13 08:20:56.378019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.760 [2024-07-13 08:20:56.378046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.760 qpair failed and we were unable to recover it. 00:34:04.760 [2024-07-13 08:20:56.378221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.760 [2024-07-13 08:20:56.378246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.760 qpair failed and we were unable to recover it. 00:34:04.760 [2024-07-13 08:20:56.378398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.760 [2024-07-13 08:20:56.378423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.760 qpair failed and we were unable to recover it. 00:34:04.760 [2024-07-13 08:20:56.378567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.760 [2024-07-13 08:20:56.378592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.760 qpair failed and we were unable to recover it. 00:34:04.760 [2024-07-13 08:20:56.378756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.760 [2024-07-13 08:20:56.378784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.760 qpair failed and we were unable to recover it. 
00:34:04.760 [2024-07-13 08:20:56.378960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.760 [2024-07-13 08:20:56.378990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.760 qpair failed and we were unable to recover it. 00:34:04.760 [2024-07-13 08:20:56.379144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.760 [2024-07-13 08:20:56.379171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.760 qpair failed and we were unable to recover it. 00:34:04.760 [2024-07-13 08:20:56.379319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.760 [2024-07-13 08:20:56.379344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.760 qpair failed and we were unable to recover it. 00:34:04.760 [2024-07-13 08:20:56.379497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.760 [2024-07-13 08:20:56.379522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.760 qpair failed and we were unable to recover it. 00:34:04.760 [2024-07-13 08:20:56.379673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.760 [2024-07-13 08:20:56.379698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.760 qpair failed and we were unable to recover it. 00:34:04.760 [2024-07-13 08:20:56.379821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.760 [2024-07-13 08:20:56.379846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.760 qpair failed and we were unable to recover it. 00:34:04.760 [2024-07-13 08:20:56.380027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.760 [2024-07-13 08:20:56.380053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.760 qpair failed and we were unable to recover it. 00:34:04.760 [2024-07-13 08:20:56.380201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.760 [2024-07-13 08:20:56.380226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.760 qpair failed and we were unable to recover it. 00:34:04.760 [2024-07-13 08:20:56.380375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.760 [2024-07-13 08:20:56.380400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.760 qpair failed and we were unable to recover it. 00:34:04.760 [2024-07-13 08:20:56.380525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.760 [2024-07-13 08:20:56.380550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.760 qpair failed and we were unable to recover it. 
00:34:04.764 [2024-07-13 08:20:56.414318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.764 [2024-07-13 08:20:56.414343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.764 qpair failed and we were unable to recover it. 00:34:04.764 [2024-07-13 08:20:56.414490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.764 [2024-07-13 08:20:56.414516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.764 qpair failed and we were unable to recover it. 00:34:04.764 [2024-07-13 08:20:56.414690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.764 [2024-07-13 08:20:56.414718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.764 qpair failed and we were unable to recover it. 00:34:04.764 [2024-07-13 08:20:56.414853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.764 [2024-07-13 08:20:56.414884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.764 qpair failed and we were unable to recover it. 00:34:04.764 [2024-07-13 08:20:56.415062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.764 [2024-07-13 08:20:56.415091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.764 qpair failed and we were unable to recover it. 00:34:04.764 [2024-07-13 08:20:56.415236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.764 [2024-07-13 08:20:56.415261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.764 qpair failed and we were unable to recover it. 00:34:04.764 [2024-07-13 08:20:56.415419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.764 [2024-07-13 08:20:56.415443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.764 qpair failed and we were unable to recover it. 00:34:04.764 [2024-07-13 08:20:56.415623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.764 [2024-07-13 08:20:56.415648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.764 qpair failed and we were unable to recover it. 00:34:04.764 [2024-07-13 08:20:56.415792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.764 [2024-07-13 08:20:56.415817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.764 qpair failed and we were unable to recover it. 00:34:04.764 [2024-07-13 08:20:56.415990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.764 [2024-07-13 08:20:56.416017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.764 qpair failed and we were unable to recover it. 
00:34:04.764 [2024-07-13 08:20:56.416172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.764 [2024-07-13 08:20:56.416198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.764 qpair failed and we were unable to recover it. 00:34:04.764 [2024-07-13 08:20:56.416370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.764 [2024-07-13 08:20:56.416394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.764 qpair failed and we were unable to recover it. 00:34:04.764 [2024-07-13 08:20:56.416542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.764 [2024-07-13 08:20:56.416567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.764 qpair failed and we were unable to recover it. 00:34:04.764 [2024-07-13 08:20:56.416747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.764 [2024-07-13 08:20:56.416772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.764 qpair failed and we were unable to recover it. 00:34:04.764 [2024-07-13 08:20:56.416920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.764 [2024-07-13 08:20:56.416946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.764 qpair failed and we were unable to recover it. 00:34:04.764 [2024-07-13 08:20:56.417096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.764 [2024-07-13 08:20:56.417121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.764 qpair failed and we were unable to recover it. 00:34:04.764 [2024-07-13 08:20:56.417303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.764 [2024-07-13 08:20:56.417327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.764 qpair failed and we were unable to recover it. 00:34:04.764 [2024-07-13 08:20:56.417447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.764 [2024-07-13 08:20:56.417472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.764 qpair failed and we were unable to recover it. 00:34:04.764 [2024-07-13 08:20:56.417623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.764 [2024-07-13 08:20:56.417648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.764 qpair failed and we were unable to recover it. 00:34:04.764 [2024-07-13 08:20:56.417814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.764 [2024-07-13 08:20:56.417842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.764 qpair failed and we were unable to recover it. 
00:34:04.764 [2024-07-13 08:20:56.418022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.764 [2024-07-13 08:20:56.418047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.764 qpair failed and we were unable to recover it. 00:34:04.764 [2024-07-13 08:20:56.418202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.764 [2024-07-13 08:20:56.418227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.764 qpair failed and we were unable to recover it. 00:34:04.764 [2024-07-13 08:20:56.418406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.764 [2024-07-13 08:20:56.418431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.764 qpair failed and we were unable to recover it. 00:34:04.764 [2024-07-13 08:20:56.418577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.764 [2024-07-13 08:20:56.418602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.764 qpair failed and we were unable to recover it. 00:34:04.764 [2024-07-13 08:20:56.418728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.764 [2024-07-13 08:20:56.418753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.764 qpair failed and we were unable to recover it. 00:34:04.764 [2024-07-13 08:20:56.418926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.764 [2024-07-13 08:20:56.418952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.764 qpair failed and we were unable to recover it. 00:34:04.764 [2024-07-13 08:20:56.419129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.764 [2024-07-13 08:20:56.419154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.764 qpair failed and we were unable to recover it. 00:34:04.764 [2024-07-13 08:20:56.419301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.764 [2024-07-13 08:20:56.419326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.764 qpair failed and we were unable to recover it. 00:34:04.764 [2024-07-13 08:20:56.419478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.764 [2024-07-13 08:20:56.419504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.764 qpair failed and we were unable to recover it. 00:34:04.764 [2024-07-13 08:20:56.419646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.764 [2024-07-13 08:20:56.419671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.764 qpair failed and we were unable to recover it. 
00:34:04.764 [2024-07-13 08:20:56.419795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.764 [2024-07-13 08:20:56.419820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.764 qpair failed and we were unable to recover it. 00:34:04.764 [2024-07-13 08:20:56.419983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.764 [2024-07-13 08:20:56.420011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.764 qpair failed and we were unable to recover it. 00:34:04.764 [2024-07-13 08:20:56.420163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.764 [2024-07-13 08:20:56.420189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.764 qpair failed and we were unable to recover it. 00:34:04.764 [2024-07-13 08:20:56.420339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.764 [2024-07-13 08:20:56.420365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.764 qpair failed and we were unable to recover it. 00:34:04.764 [2024-07-13 08:20:56.420535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.764 [2024-07-13 08:20:56.420564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.764 qpair failed and we were unable to recover it. 00:34:04.764 [2024-07-13 08:20:56.420725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.764 [2024-07-13 08:20:56.420750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.764 qpair failed and we were unable to recover it. 00:34:04.764 [2024-07-13 08:20:56.420928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.764 [2024-07-13 08:20:56.420954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.764 qpair failed and we were unable to recover it. 00:34:04.764 [2024-07-13 08:20:56.421102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.764 [2024-07-13 08:20:56.421127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.764 qpair failed and we were unable to recover it. 00:34:04.764 [2024-07-13 08:20:56.421277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.764 [2024-07-13 08:20:56.421302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.764 qpair failed and we were unable to recover it. 00:34:04.764 [2024-07-13 08:20:56.421417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.764 [2024-07-13 08:20:56.421442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.764 qpair failed and we were unable to recover it. 
00:34:04.764 [2024-07-13 08:20:56.421593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.764 [2024-07-13 08:20:56.421618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.764 qpair failed and we were unable to recover it. 00:34:04.764 [2024-07-13 08:20:56.421792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.764 [2024-07-13 08:20:56.421816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.764 qpair failed and we were unable to recover it. 00:34:04.764 [2024-07-13 08:20:56.421946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.764 [2024-07-13 08:20:56.421973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.764 qpair failed and we were unable to recover it. 00:34:04.764 [2024-07-13 08:20:56.422133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.764 [2024-07-13 08:20:56.422158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.764 qpair failed and we were unable to recover it. 00:34:04.764 [2024-07-13 08:20:56.422312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.764 [2024-07-13 08:20:56.422337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.764 qpair failed and we were unable to recover it. 00:34:04.764 [2024-07-13 08:20:56.422490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.764 [2024-07-13 08:20:56.422520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.764 qpair failed and we were unable to recover it. 00:34:04.764 [2024-07-13 08:20:56.422637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.764 [2024-07-13 08:20:56.422662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.764 qpair failed and we were unable to recover it. 00:34:04.764 [2024-07-13 08:20:56.422781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.764 [2024-07-13 08:20:56.422806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.764 qpair failed and we were unable to recover it. 00:34:04.764 [2024-07-13 08:20:56.422985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.764 [2024-07-13 08:20:56.423011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.764 qpair failed and we were unable to recover it. 00:34:04.764 [2024-07-13 08:20:56.423157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.764 [2024-07-13 08:20:56.423182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.764 qpair failed and we were unable to recover it. 
00:34:04.764 [2024-07-13 08:20:56.423331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.764 [2024-07-13 08:20:56.423356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.764 qpair failed and we were unable to recover it. 00:34:04.764 [2024-07-13 08:20:56.423533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.764 [2024-07-13 08:20:56.423557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.764 qpair failed and we were unable to recover it. 00:34:04.764 [2024-07-13 08:20:56.423708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.764 [2024-07-13 08:20:56.423733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.764 qpair failed and we were unable to recover it. 00:34:04.764 [2024-07-13 08:20:56.423862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.764 [2024-07-13 08:20:56.423901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.764 qpair failed and we were unable to recover it. 00:34:04.764 [2024-07-13 08:20:56.424023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.765 [2024-07-13 08:20:56.424050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.765 qpair failed and we were unable to recover it. 00:34:04.765 [2024-07-13 08:20:56.424199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.765 [2024-07-13 08:20:56.424224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.765 qpair failed and we were unable to recover it. 00:34:04.765 [2024-07-13 08:20:56.424374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.765 [2024-07-13 08:20:56.424399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.765 qpair failed and we were unable to recover it. 00:34:04.765 [2024-07-13 08:20:56.424517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.765 [2024-07-13 08:20:56.424543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.765 qpair failed and we were unable to recover it. 00:34:04.765 [2024-07-13 08:20:56.424691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.765 [2024-07-13 08:20:56.424716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.765 qpair failed and we were unable to recover it. 00:34:04.765 [2024-07-13 08:20:56.424901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.765 [2024-07-13 08:20:56.424926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.765 qpair failed and we were unable to recover it. 
00:34:04.765 [2024-07-13 08:20:56.425077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.765 [2024-07-13 08:20:56.425103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.765 qpair failed and we were unable to recover it. 00:34:04.765 [2024-07-13 08:20:56.425261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.765 [2024-07-13 08:20:56.425286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.765 qpair failed and we were unable to recover it. 00:34:04.765 [2024-07-13 08:20:56.425439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.765 [2024-07-13 08:20:56.425464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.765 qpair failed and we were unable to recover it. 00:34:04.765 [2024-07-13 08:20:56.425641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.765 [2024-07-13 08:20:56.425666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.765 qpair failed and we were unable to recover it. 00:34:04.765 [2024-07-13 08:20:56.425819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.765 [2024-07-13 08:20:56.425844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.765 qpair failed and we were unable to recover it. 00:34:04.765 [2024-07-13 08:20:56.425970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.765 [2024-07-13 08:20:56.425999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.765 qpair failed and we were unable to recover it. 00:34:04.765 [2024-07-13 08:20:56.426119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.765 [2024-07-13 08:20:56.426144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.765 qpair failed and we were unable to recover it. 00:34:04.765 [2024-07-13 08:20:56.426282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.765 [2024-07-13 08:20:56.426307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.765 qpair failed and we were unable to recover it. 00:34:04.765 [2024-07-13 08:20:56.426454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.765 [2024-07-13 08:20:56.426479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.765 qpair failed and we were unable to recover it. 00:34:04.765 [2024-07-13 08:20:56.426638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.765 [2024-07-13 08:20:56.426663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.765 qpair failed and we were unable to recover it. 
00:34:04.765 [2024-07-13 08:20:56.426810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.765 [2024-07-13 08:20:56.426835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.765 qpair failed and we were unable to recover it. 00:34:04.765 [2024-07-13 08:20:56.426983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.765 [2024-07-13 08:20:56.427008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.765 qpair failed and we were unable to recover it. 00:34:04.765 [2024-07-13 08:20:56.427183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.765 [2024-07-13 08:20:56.427213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.765 qpair failed and we were unable to recover it. 00:34:04.765 [2024-07-13 08:20:56.427393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.765 [2024-07-13 08:20:56.427417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.765 qpair failed and we were unable to recover it. 00:34:04.765 [2024-07-13 08:20:56.427565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.765 [2024-07-13 08:20:56.427590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.765 qpair failed and we were unable to recover it. 00:34:04.765 [2024-07-13 08:20:56.427714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.765 [2024-07-13 08:20:56.427739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.765 qpair failed and we were unable to recover it. 00:34:04.765 [2024-07-13 08:20:56.427854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.765 [2024-07-13 08:20:56.427895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.765 qpair failed and we were unable to recover it. 00:34:04.765 [2024-07-13 08:20:56.428074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.765 [2024-07-13 08:20:56.428099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.765 qpair failed and we were unable to recover it. 00:34:04.765 [2024-07-13 08:20:56.428220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.765 [2024-07-13 08:20:56.428246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.765 qpair failed and we were unable to recover it. 00:34:04.765 [2024-07-13 08:20:56.428363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.765 [2024-07-13 08:20:56.428388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.765 qpair failed and we were unable to recover it. 
00:34:04.765 [2024-07-13 08:20:56.428538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.765 [2024-07-13 08:20:56.428563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.765 qpair failed and we were unable to recover it. 00:34:04.765 [2024-07-13 08:20:56.428734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.765 [2024-07-13 08:20:56.428762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.765 qpair failed and we were unable to recover it. 00:34:04.765 [2024-07-13 08:20:56.428957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.765 [2024-07-13 08:20:56.428984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.765 qpair failed and we were unable to recover it. 00:34:04.765 [2024-07-13 08:20:56.429105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.765 [2024-07-13 08:20:56.429130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.765 qpair failed and we were unable to recover it. 00:34:04.765 [2024-07-13 08:20:56.429251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.765 [2024-07-13 08:20:56.429277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.765 qpair failed and we were unable to recover it. 00:34:04.765 [2024-07-13 08:20:56.429458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.765 [2024-07-13 08:20:56.429484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.765 qpair failed and we were unable to recover it. 00:34:04.765 [2024-07-13 08:20:56.429635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.765 [2024-07-13 08:20:56.429660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.765 qpair failed and we were unable to recover it. 00:34:04.765 [2024-07-13 08:20:56.429786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.765 [2024-07-13 08:20:56.429811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.765 qpair failed and we were unable to recover it. 00:34:04.765 [2024-07-13 08:20:56.429986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.765 [2024-07-13 08:20:56.430012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.765 qpair failed and we were unable to recover it. 00:34:04.765 [2024-07-13 08:20:56.430151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.765 [2024-07-13 08:20:56.430178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.765 qpair failed and we were unable to recover it. 
00:34:04.765 [2024-07-13 08:20:56.430345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.765 [2024-07-13 08:20:56.430373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.765 qpair failed and we were unable to recover it. 00:34:04.765 [2024-07-13 08:20:56.430563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.765 [2024-07-13 08:20:56.430587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.765 qpair failed and we were unable to recover it. 00:34:04.765 [2024-07-13 08:20:56.430705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.765 [2024-07-13 08:20:56.430730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.765 qpair failed and we were unable to recover it. 00:34:04.765 [2024-07-13 08:20:56.430881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.765 [2024-07-13 08:20:56.430907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.765 qpair failed and we were unable to recover it. 00:34:04.765 [2024-07-13 08:20:56.431049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.765 [2024-07-13 08:20:56.431074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.765 qpair failed and we were unable to recover it. 00:34:04.765 [2024-07-13 08:20:56.431216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.765 [2024-07-13 08:20:56.431240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.765 qpair failed and we were unable to recover it. 00:34:04.765 [2024-07-13 08:20:56.431390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.765 [2024-07-13 08:20:56.431415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.765 qpair failed and we were unable to recover it. 00:34:04.765 [2024-07-13 08:20:56.431588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.765 [2024-07-13 08:20:56.431613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.765 qpair failed and we were unable to recover it. 00:34:04.765 [2024-07-13 08:20:56.431734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.765 [2024-07-13 08:20:56.431759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.765 qpair failed and we were unable to recover it. 00:34:04.765 [2024-07-13 08:20:56.431910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.765 [2024-07-13 08:20:56.431937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.765 qpair failed and we were unable to recover it. 
00:34:04.765 [2024-07-13 08:20:56.432097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.765 [2024-07-13 08:20:56.432123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.765 qpair failed and we were unable to recover it. 00:34:04.765 [2024-07-13 08:20:56.432276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.765 [2024-07-13 08:20:56.432302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.765 qpair failed and we were unable to recover it. 00:34:04.765 [2024-07-13 08:20:56.432455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.765 [2024-07-13 08:20:56.432481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.765 qpair failed and we were unable to recover it. 00:34:04.765 [2024-07-13 08:20:56.432596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.765 [2024-07-13 08:20:56.432621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.765 qpair failed and we were unable to recover it. 00:34:04.765 [2024-07-13 08:20:56.432819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.765 [2024-07-13 08:20:56.432847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.765 qpair failed and we were unable to recover it. 00:34:04.765 [2024-07-13 08:20:56.433002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.765 [2024-07-13 08:20:56.433028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.765 qpair failed and we were unable to recover it. 00:34:04.765 [2024-07-13 08:20:56.433155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.765 [2024-07-13 08:20:56.433181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.765 qpair failed and we were unable to recover it. 00:34:04.765 [2024-07-13 08:20:56.433329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.765 [2024-07-13 08:20:56.433355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.765 qpair failed and we were unable to recover it. 00:34:04.765 [2024-07-13 08:20:56.433506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.765 [2024-07-13 08:20:56.433531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.765 qpair failed and we were unable to recover it. 00:34:04.765 [2024-07-13 08:20:56.433654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.765 [2024-07-13 08:20:56.433679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.765 qpair failed and we were unable to recover it. 
00:34:04.765 [2024-07-13 08:20:56.433853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.765 [2024-07-13 08:20:56.433887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.765 qpair failed and we were unable to recover it. 00:34:04.765 [2024-07-13 08:20:56.434019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.765 [2024-07-13 08:20:56.434044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.765 qpair failed and we were unable to recover it. 00:34:04.765 [2024-07-13 08:20:56.434221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.765 [2024-07-13 08:20:56.434246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.765 qpair failed and we were unable to recover it. 00:34:04.765 [2024-07-13 08:20:56.434367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.765 [2024-07-13 08:20:56.434396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.765 qpair failed and we were unable to recover it. 00:34:04.766 [2024-07-13 08:20:56.434547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.766 [2024-07-13 08:20:56.434572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.766 qpair failed and we were unable to recover it. 00:34:04.766 [2024-07-13 08:20:56.434751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.766 [2024-07-13 08:20:56.434776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.766 qpair failed and we were unable to recover it. 00:34:04.766 [2024-07-13 08:20:56.434918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.766 [2024-07-13 08:20:56.434944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.766 qpair failed and we were unable to recover it. 00:34:04.766 [2024-07-13 08:20:56.435060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.766 [2024-07-13 08:20:56.435085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.766 qpair failed and we were unable to recover it. 00:34:04.766 [2024-07-13 08:20:56.435233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.766 [2024-07-13 08:20:56.435259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.766 qpair failed and we were unable to recover it. 00:34:04.766 [2024-07-13 08:20:56.435379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.766 [2024-07-13 08:20:56.435404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.766 qpair failed and we were unable to recover it. 
00:34:04.766 [2024-07-13 08:20:56.435549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.766 [2024-07-13 08:20:56.435574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.766 qpair failed and we were unable to recover it. 00:34:04.766 [2024-07-13 08:20:56.435696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.766 [2024-07-13 08:20:56.435721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.766 qpair failed and we were unable to recover it. 00:34:04.766 [2024-07-13 08:20:56.435906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.766 [2024-07-13 08:20:56.435932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.766 qpair failed and we were unable to recover it. 00:34:04.766 [2024-07-13 08:20:56.436089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.766 [2024-07-13 08:20:56.436115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.766 qpair failed and we were unable to recover it. 00:34:04.767 [2024-07-13 08:20:56.436263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.767 [2024-07-13 08:20:56.436288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.767 qpair failed and we were unable to recover it. 00:34:04.767 [2024-07-13 08:20:56.436434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.767 [2024-07-13 08:20:56.436459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.767 qpair failed and we were unable to recover it. 00:34:04.767 [2024-07-13 08:20:56.436603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.767 [2024-07-13 08:20:56.436629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.767 qpair failed and we were unable to recover it. 00:34:04.767 [2024-07-13 08:20:56.436780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.767 [2024-07-13 08:20:56.436805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.767 qpair failed and we were unable to recover it. 00:34:04.767 [2024-07-13 08:20:56.436957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.767 [2024-07-13 08:20:56.436984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.767 qpair failed and we were unable to recover it. 00:34:04.767 [2024-07-13 08:20:56.437109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.767 [2024-07-13 08:20:56.437134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.767 qpair failed and we were unable to recover it. 
00:34:04.767 [2024-07-13 08:20:56.437307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.767 [2024-07-13 08:20:56.437332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.767 qpair failed and we were unable to recover it. 00:34:04.767 [2024-07-13 08:20:56.437475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.767 [2024-07-13 08:20:56.437499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.767 qpair failed and we were unable to recover it. 00:34:04.767 [2024-07-13 08:20:56.437654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.767 [2024-07-13 08:20:56.437680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.767 qpair failed and we were unable to recover it. 00:34:04.767 [2024-07-13 08:20:56.437851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.767 [2024-07-13 08:20:56.437902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.767 qpair failed and we were unable to recover it. 00:34:04.767 [2024-07-13 08:20:56.438044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.767 [2024-07-13 08:20:56.438069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.767 qpair failed and we were unable to recover it. 00:34:04.767 [2024-07-13 08:20:56.438201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.767 [2024-07-13 08:20:56.438226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.767 qpair failed and we were unable to recover it. 00:34:04.767 [2024-07-13 08:20:56.438369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.767 [2024-07-13 08:20:56.438394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.767 qpair failed and we were unable to recover it. 00:34:04.767 [2024-07-13 08:20:56.438567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.767 [2024-07-13 08:20:56.438592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.767 qpair failed and we were unable to recover it. 00:34:04.767 [2024-07-13 08:20:56.438722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.767 [2024-07-13 08:20:56.438748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.767 qpair failed and we were unable to recover it. 00:34:04.767 [2024-07-13 08:20:56.438926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:04.767 [2024-07-13 08:20:56.438952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:04.767 qpair failed and we were unable to recover it. 
00:34:04.767 [2024-07-13 08:20:56.439098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:04.767 [2024-07-13 08:20:56.439126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420
00:34:04.767 qpair failed and we were unable to recover it.
[... the same three-line sequence (connect() failed, errno = 111 in posix_sock_create; sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 in nvme_tcp_qpair_connect_sock; "qpair failed and we were unable to recover it.") repeats continuously from 08:20:56.439 through 08:20:56.478; duplicate entries elided ...]
00:34:05.054 [2024-07-13 08:20:56.478177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:05.054 [2024-07-13 08:20:56.478205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420
00:34:05.054 qpair failed and we were unable to recover it.
00:34:05.054 [2024-07-13 08:20:56.478348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.054 [2024-07-13 08:20:56.478373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.054 qpair failed and we were unable to recover it. 00:34:05.054 [2024-07-13 08:20:56.478570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.054 [2024-07-13 08:20:56.478597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.054 qpair failed and we were unable to recover it. 00:34:05.054 [2024-07-13 08:20:56.478779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.054 [2024-07-13 08:20:56.478807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.054 qpair failed and we were unable to recover it. 00:34:05.054 [2024-07-13 08:20:56.478970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.054 [2024-07-13 08:20:56.478996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.054 qpair failed and we were unable to recover it. 00:34:05.054 [2024-07-13 08:20:56.479195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.054 [2024-07-13 08:20:56.479223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.054 qpair failed and we were unable to recover it. 00:34:05.054 [2024-07-13 08:20:56.479388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.054 [2024-07-13 08:20:56.479415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.054 qpair failed and we were unable to recover it. 00:34:05.054 [2024-07-13 08:20:56.479565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.055 [2024-07-13 08:20:56.479589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.055 qpair failed and we were unable to recover it. 00:34:05.055 [2024-07-13 08:20:56.479705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.055 [2024-07-13 08:20:56.479730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.055 qpair failed and we were unable to recover it. 00:34:05.055 [2024-07-13 08:20:56.479876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.055 [2024-07-13 08:20:56.479915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.055 qpair failed and we were unable to recover it. 00:34:05.055 [2024-07-13 08:20:56.480088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.055 [2024-07-13 08:20:56.480113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.055 qpair failed and we were unable to recover it. 
00:34:05.055 [2024-07-13 08:20:56.480240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.055 [2024-07-13 08:20:56.480283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.055 qpair failed and we were unable to recover it. 00:34:05.055 [2024-07-13 08:20:56.480415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.055 [2024-07-13 08:20:56.480443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.055 qpair failed and we were unable to recover it. 00:34:05.055 [2024-07-13 08:20:56.480617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.055 [2024-07-13 08:20:56.480642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.055 qpair failed and we were unable to recover it. 00:34:05.055 [2024-07-13 08:20:56.480793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.055 [2024-07-13 08:20:56.480818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.055 qpair failed and we were unable to recover it. 00:34:05.055 [2024-07-13 08:20:56.480967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.055 [2024-07-13 08:20:56.480992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.055 qpair failed and we were unable to recover it. 00:34:05.055 [2024-07-13 08:20:56.481146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.055 [2024-07-13 08:20:56.481171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.055 qpair failed and we were unable to recover it. 00:34:05.055 [2024-07-13 08:20:56.481314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.055 [2024-07-13 08:20:56.481356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.055 qpair failed and we were unable to recover it. 00:34:05.055 [2024-07-13 08:20:56.481544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.055 [2024-07-13 08:20:56.481572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.055 qpair failed and we were unable to recover it. 00:34:05.055 [2024-07-13 08:20:56.481740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.055 [2024-07-13 08:20:56.481765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.055 qpair failed and we were unable to recover it. 00:34:05.055 [2024-07-13 08:20:56.481942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.055 [2024-07-13 08:20:56.481971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.055 qpair failed and we were unable to recover it. 
00:34:05.055 [2024-07-13 08:20:56.482100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.055 [2024-07-13 08:20:56.482129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.055 qpair failed and we were unable to recover it. 00:34:05.055 [2024-07-13 08:20:56.482323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.055 [2024-07-13 08:20:56.482348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.055 qpair failed and we were unable to recover it. 00:34:05.055 [2024-07-13 08:20:56.482542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.055 [2024-07-13 08:20:56.482570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.055 qpair failed and we were unable to recover it. 00:34:05.055 [2024-07-13 08:20:56.482736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.055 [2024-07-13 08:20:56.482764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.055 qpair failed and we were unable to recover it. 00:34:05.055 [2024-07-13 08:20:56.482937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.055 [2024-07-13 08:20:56.482962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.055 qpair failed and we were unable to recover it. 00:34:05.055 [2024-07-13 08:20:56.483109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.055 [2024-07-13 08:20:56.483134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.055 qpair failed and we were unable to recover it. 00:34:05.055 [2024-07-13 08:20:56.483293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.055 [2024-07-13 08:20:56.483318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.055 qpair failed and we were unable to recover it. 00:34:05.055 [2024-07-13 08:20:56.483442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.055 [2024-07-13 08:20:56.483468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.055 qpair failed and we were unable to recover it. 00:34:05.055 [2024-07-13 08:20:56.483611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.055 [2024-07-13 08:20:56.483636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.055 qpair failed and we were unable to recover it. 00:34:05.055 [2024-07-13 08:20:56.483776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.055 [2024-07-13 08:20:56.483804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.055 qpair failed and we were unable to recover it. 
00:34:05.055 [2024-07-13 08:20:56.483952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.055 [2024-07-13 08:20:56.483987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.055 qpair failed and we were unable to recover it. 00:34:05.055 [2024-07-13 08:20:56.484133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.055 [2024-07-13 08:20:56.484158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.055 qpair failed and we were unable to recover it. 00:34:05.055 [2024-07-13 08:20:56.484333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.055 [2024-07-13 08:20:56.484361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.055 qpair failed and we were unable to recover it. 00:34:05.055 [2024-07-13 08:20:56.484529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.055 [2024-07-13 08:20:56.484554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.055 qpair failed and we were unable to recover it. 00:34:05.055 [2024-07-13 08:20:56.484700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.055 [2024-07-13 08:20:56.484742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.055 qpair failed and we were unable to recover it. 00:34:05.055 [2024-07-13 08:20:56.484908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.055 [2024-07-13 08:20:56.484940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.055 qpair failed and we were unable to recover it. 00:34:05.055 [2024-07-13 08:20:56.485083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.055 [2024-07-13 08:20:56.485108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.055 qpair failed and we were unable to recover it. 00:34:05.055 [2024-07-13 08:20:56.485255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.055 [2024-07-13 08:20:56.485296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.055 qpair failed and we were unable to recover it. 00:34:05.055 [2024-07-13 08:20:56.485418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.055 [2024-07-13 08:20:56.485446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.055 qpair failed and we were unable to recover it. 00:34:05.055 [2024-07-13 08:20:56.485619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.055 [2024-07-13 08:20:56.485644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.055 qpair failed and we were unable to recover it. 
00:34:05.055 [2024-07-13 08:20:56.485759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.055 [2024-07-13 08:20:56.485801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.055 qpair failed and we were unable to recover it. 00:34:05.055 [2024-07-13 08:20:56.485989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.055 [2024-07-13 08:20:56.486018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.056 qpair failed and we were unable to recover it. 00:34:05.056 [2024-07-13 08:20:56.486182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.056 [2024-07-13 08:20:56.486207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.056 qpair failed and we were unable to recover it. 00:34:05.056 [2024-07-13 08:20:56.486336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.056 [2024-07-13 08:20:56.486362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.056 qpair failed and we were unable to recover it. 00:34:05.056 [2024-07-13 08:20:56.486511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.056 [2024-07-13 08:20:56.486536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.056 qpair failed and we were unable to recover it. 00:34:05.056 [2024-07-13 08:20:56.486689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.056 [2024-07-13 08:20:56.486714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.056 qpair failed and we were unable to recover it. 00:34:05.056 [2024-07-13 08:20:56.486885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.056 [2024-07-13 08:20:56.486913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.056 qpair failed and we were unable to recover it. 00:34:05.056 [2024-07-13 08:20:56.487102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.056 [2024-07-13 08:20:56.487130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.056 qpair failed and we were unable to recover it. 00:34:05.056 [2024-07-13 08:20:56.487299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.056 [2024-07-13 08:20:56.487324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.056 qpair failed and we were unable to recover it. 00:34:05.056 [2024-07-13 08:20:56.487495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.056 [2024-07-13 08:20:56.487523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.056 qpair failed and we were unable to recover it. 
00:34:05.056 [2024-07-13 08:20:56.487680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.056 [2024-07-13 08:20:56.487708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.056 qpair failed and we were unable to recover it. 00:34:05.056 [2024-07-13 08:20:56.487879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.056 [2024-07-13 08:20:56.487908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.056 qpair failed and we were unable to recover it. 00:34:05.056 [2024-07-13 08:20:56.488077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.056 [2024-07-13 08:20:56.488105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.056 qpair failed and we were unable to recover it. 00:34:05.056 [2024-07-13 08:20:56.488251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.056 [2024-07-13 08:20:56.488275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.056 qpair failed and we were unable to recover it. 00:34:05.056 [2024-07-13 08:20:56.488450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.056 [2024-07-13 08:20:56.488475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.056 qpair failed and we were unable to recover it. 00:34:05.056 [2024-07-13 08:20:56.488675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.056 [2024-07-13 08:20:56.488702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.056 qpair failed and we were unable to recover it. 00:34:05.056 [2024-07-13 08:20:56.488859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.056 [2024-07-13 08:20:56.488906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.056 qpair failed and we were unable to recover it. 00:34:05.056 [2024-07-13 08:20:56.489107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.056 [2024-07-13 08:20:56.489132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.056 qpair failed and we were unable to recover it. 00:34:05.056 [2024-07-13 08:20:56.489304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.056 [2024-07-13 08:20:56.489332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.056 qpair failed and we were unable to recover it. 00:34:05.056 [2024-07-13 08:20:56.489532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.056 [2024-07-13 08:20:56.489559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.056 qpair failed and we were unable to recover it. 
00:34:05.056 [2024-07-13 08:20:56.489728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.056 [2024-07-13 08:20:56.489753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.056 qpair failed and we were unable to recover it. 00:34:05.056 [2024-07-13 08:20:56.489878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.056 [2024-07-13 08:20:56.489922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.056 qpair failed and we were unable to recover it. 00:34:05.056 [2024-07-13 08:20:56.490089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.056 [2024-07-13 08:20:56.490114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.056 qpair failed and we were unable to recover it. 00:34:05.056 [2024-07-13 08:20:56.490295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.056 [2024-07-13 08:20:56.490320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.056 qpair failed and we were unable to recover it. 00:34:05.056 [2024-07-13 08:20:56.490486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.056 [2024-07-13 08:20:56.490513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.056 qpair failed and we were unable to recover it. 00:34:05.056 [2024-07-13 08:20:56.490668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.056 [2024-07-13 08:20:56.490696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.056 qpair failed and we were unable to recover it. 00:34:05.056 [2024-07-13 08:20:56.490875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.056 [2024-07-13 08:20:56.490901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.056 qpair failed and we were unable to recover it. 00:34:05.056 [2024-07-13 08:20:56.491049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.056 [2024-07-13 08:20:56.491073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.056 qpair failed and we were unable to recover it. 00:34:05.056 [2024-07-13 08:20:56.491240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.056 [2024-07-13 08:20:56.491268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.056 qpair failed and we were unable to recover it. 00:34:05.056 [2024-07-13 08:20:56.491440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.056 [2024-07-13 08:20:56.491465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.056 qpair failed and we were unable to recover it. 
00:34:05.056 [2024-07-13 08:20:56.491580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.056 [2024-07-13 08:20:56.491622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.056 qpair failed and we were unable to recover it. 00:34:05.056 [2024-07-13 08:20:56.491784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.056 [2024-07-13 08:20:56.491812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.056 qpair failed and we were unable to recover it. 00:34:05.056 [2024-07-13 08:20:56.491980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.056 [2024-07-13 08:20:56.492007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.056 qpair failed and we were unable to recover it. 00:34:05.056 [2024-07-13 08:20:56.492133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.056 [2024-07-13 08:20:56.492175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.056 qpair failed and we were unable to recover it. 00:34:05.056 [2024-07-13 08:20:56.492312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.056 [2024-07-13 08:20:56.492340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.056 qpair failed and we were unable to recover it. 00:34:05.056 [2024-07-13 08:20:56.492538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.056 [2024-07-13 08:20:56.492563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.056 qpair failed and we were unable to recover it. 00:34:05.056 [2024-07-13 08:20:56.492734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.056 [2024-07-13 08:20:56.492766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.056 qpair failed and we were unable to recover it. 00:34:05.057 [2024-07-13 08:20:56.492973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.057 [2024-07-13 08:20:56.492999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.057 qpair failed and we were unable to recover it. 00:34:05.057 [2024-07-13 08:20:56.493147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.057 [2024-07-13 08:20:56.493172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.057 qpair failed and we were unable to recover it. 00:34:05.057 [2024-07-13 08:20:56.493334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.057 [2024-07-13 08:20:56.493362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.057 qpair failed and we were unable to recover it. 
00:34:05.057 [2024-07-13 08:20:56.493487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.057 [2024-07-13 08:20:56.493515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.057 qpair failed and we were unable to recover it. 00:34:05.057 [2024-07-13 08:20:56.493678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.057 [2024-07-13 08:20:56.493703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.057 qpair failed and we were unable to recover it. 00:34:05.057 [2024-07-13 08:20:56.493872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.057 [2024-07-13 08:20:56.493901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.057 qpair failed and we were unable to recover it. 00:34:05.057 [2024-07-13 08:20:56.494068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.057 [2024-07-13 08:20:56.494098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.057 qpair failed and we were unable to recover it. 00:34:05.057 [2024-07-13 08:20:56.494268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.057 [2024-07-13 08:20:56.494293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.057 qpair failed and we were unable to recover it. 00:34:05.057 [2024-07-13 08:20:56.494410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.057 [2024-07-13 08:20:56.494452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.057 qpair failed and we were unable to recover it. 00:34:05.057 [2024-07-13 08:20:56.494580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.057 [2024-07-13 08:20:56.494608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.057 qpair failed and we were unable to recover it. 00:34:05.057 [2024-07-13 08:20:56.494783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.057 [2024-07-13 08:20:56.494809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.057 qpair failed and we were unable to recover it. 00:34:05.057 [2024-07-13 08:20:56.494935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.057 [2024-07-13 08:20:56.494979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.057 qpair failed and we were unable to recover it. 00:34:05.057 [2024-07-13 08:20:56.495135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.057 [2024-07-13 08:20:56.495163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.057 qpair failed and we were unable to recover it. 
00:34:05.057 [2024-07-13 08:20:56.495339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.057 [2024-07-13 08:20:56.495365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.057 qpair failed and we were unable to recover it. 00:34:05.057 [2024-07-13 08:20:56.495508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.057 [2024-07-13 08:20:56.495533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.057 qpair failed and we were unable to recover it. 00:34:05.057 [2024-07-13 08:20:56.495680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.057 [2024-07-13 08:20:56.495705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.057 qpair failed and we were unable to recover it. 00:34:05.057 [2024-07-13 08:20:56.495882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.057 [2024-07-13 08:20:56.495910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.057 qpair failed and we were unable to recover it. 00:34:05.057 [2024-07-13 08:20:56.496084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.057 [2024-07-13 08:20:56.496112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.057 qpair failed and we were unable to recover it. 00:34:05.057 [2024-07-13 08:20:56.496308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.057 [2024-07-13 08:20:56.496336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.057 qpair failed and we were unable to recover it. 00:34:05.057 [2024-07-13 08:20:56.496505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.057 [2024-07-13 08:20:56.496530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.057 qpair failed and we were unable to recover it. 00:34:05.057 [2024-07-13 08:20:56.496723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.057 [2024-07-13 08:20:56.496751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.057 qpair failed and we were unable to recover it. 00:34:05.057 [2024-07-13 08:20:56.496944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.057 [2024-07-13 08:20:56.496973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.057 qpair failed and we were unable to recover it. 00:34:05.057 [2024-07-13 08:20:56.497142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.057 [2024-07-13 08:20:56.497167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.057 qpair failed and we were unable to recover it. 
00:34:05.057 [2024-07-13 08:20:56.497310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.057 [2024-07-13 08:20:56.497354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.057 qpair failed and we were unable to recover it. 00:34:05.057 [2024-07-13 08:20:56.497522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.057 [2024-07-13 08:20:56.497550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.057 qpair failed and we were unable to recover it. 00:34:05.057 [2024-07-13 08:20:56.497713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.057 [2024-07-13 08:20:56.497739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.057 qpair failed and we were unable to recover it. 00:34:05.057 [2024-07-13 08:20:56.497884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.057 [2024-07-13 08:20:56.497932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.057 qpair failed and we were unable to recover it. 00:34:05.057 [2024-07-13 08:20:56.498112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.057 [2024-07-13 08:20:56.498139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.057 qpair failed and we were unable to recover it. 00:34:05.057 [2024-07-13 08:20:56.498299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.057 [2024-07-13 08:20:56.498323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.057 qpair failed and we were unable to recover it. 00:34:05.057 [2024-07-13 08:20:56.498522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.057 [2024-07-13 08:20:56.498550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.057 qpair failed and we were unable to recover it. 00:34:05.057 [2024-07-13 08:20:56.498703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.057 [2024-07-13 08:20:56.498728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.057 qpair failed and we were unable to recover it. 00:34:05.057 [2024-07-13 08:20:56.498910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.057 [2024-07-13 08:20:56.498936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.057 qpair failed and we were unable to recover it. 00:34:05.058 [2024-07-13 08:20:56.499101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.058 [2024-07-13 08:20:56.499128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.058 qpair failed and we were unable to recover it. 
00:34:05.058 [2024-07-13 08:20:56.499290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.058 [2024-07-13 08:20:56.499318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.058 qpair failed and we were unable to recover it. 00:34:05.058 [2024-07-13 08:20:56.499485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.058 [2024-07-13 08:20:56.499510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.058 qpair failed and we were unable to recover it. 00:34:05.058 [2024-07-13 08:20:56.499640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.058 [2024-07-13 08:20:56.499665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.058 qpair failed and we were unable to recover it. 00:34:05.058 [2024-07-13 08:20:56.499836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.058 [2024-07-13 08:20:56.499890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.058 qpair failed and we were unable to recover it. 00:34:05.058 [2024-07-13 08:20:56.500098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.058 [2024-07-13 08:20:56.500123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.058 qpair failed and we were unable to recover it. 00:34:05.058 [2024-07-13 08:20:56.500292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.058 [2024-07-13 08:20:56.500320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.058 qpair failed and we were unable to recover it. 00:34:05.058 [2024-07-13 08:20:56.500480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.058 [2024-07-13 08:20:56.500508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.058 qpair failed and we were unable to recover it. 00:34:05.058 [2024-07-13 08:20:56.500739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.058 [2024-07-13 08:20:56.500767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.058 qpair failed and we were unable to recover it. 00:34:05.058 [2024-07-13 08:20:56.500929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.058 [2024-07-13 08:20:56.500955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.058 qpair failed and we were unable to recover it. 00:34:05.058 [2024-07-13 08:20:56.501131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.058 [2024-07-13 08:20:56.501158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.058 qpair failed and we were unable to recover it. 
00:34:05.058 [2024-07-13 08:20:56.501333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.058 [2024-07-13 08:20:56.501358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.058 qpair failed and we were unable to recover it. 00:34:05.058 [2024-07-13 08:20:56.501487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.058 [2024-07-13 08:20:56.501513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.058 qpair failed and we were unable to recover it. 00:34:05.058 [2024-07-13 08:20:56.501666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.058 [2024-07-13 08:20:56.501693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.058 qpair failed and we were unable to recover it. 00:34:05.058 [2024-07-13 08:20:56.501845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.058 [2024-07-13 08:20:56.501877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.058 qpair failed and we were unable to recover it. 00:34:05.058 [2024-07-13 08:20:56.502048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.058 [2024-07-13 08:20:56.502076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.058 qpair failed and we were unable to recover it. 00:34:05.058 [2024-07-13 08:20:56.502276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.058 [2024-07-13 08:20:56.502306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.058 qpair failed and we were unable to recover it. 00:34:05.058 [2024-07-13 08:20:56.502531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.058 [2024-07-13 08:20:56.502566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.058 qpair failed and we were unable to recover it. 00:34:05.058 [2024-07-13 08:20:56.502723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.058 [2024-07-13 08:20:56.502759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.058 qpair failed and we were unable to recover it. 00:34:05.058 [2024-07-13 08:20:56.502929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.058 [2024-07-13 08:20:56.502958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.058 qpair failed and we were unable to recover it. 00:34:05.058 [2024-07-13 08:20:56.503126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.058 [2024-07-13 08:20:56.503151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.058 qpair failed and we were unable to recover it. 
00:34:05.058 [2024-07-13 08:20:56.503323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.058 [2024-07-13 08:20:56.503351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.058 qpair failed and we were unable to recover it. 00:34:05.058 [2024-07-13 08:20:56.503522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.058 [2024-07-13 08:20:56.503552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.058 qpair failed and we were unable to recover it. 00:34:05.058 [2024-07-13 08:20:56.503729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.058 [2024-07-13 08:20:56.503754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.058 qpair failed and we were unable to recover it. 00:34:05.058 [2024-07-13 08:20:56.503880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.058 [2024-07-13 08:20:56.503928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.058 qpair failed and we were unable to recover it. 00:34:05.058 [2024-07-13 08:20:56.504124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.058 [2024-07-13 08:20:56.504152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.058 qpair failed and we were unable to recover it. 00:34:05.058 [2024-07-13 08:20:56.504350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.058 [2024-07-13 08:20:56.504375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.058 qpair failed and we were unable to recover it. 00:34:05.058 [2024-07-13 08:20:56.504540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.058 [2024-07-13 08:20:56.504567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.058 qpair failed and we were unable to recover it. 00:34:05.058 [2024-07-13 08:20:56.504725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.058 [2024-07-13 08:20:56.504753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.058 qpair failed and we were unable to recover it. 00:34:05.058 [2024-07-13 08:20:56.504897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.058 [2024-07-13 08:20:56.504923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.058 qpair failed and we were unable to recover it. 00:34:05.058 [2024-07-13 08:20:56.505051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.058 [2024-07-13 08:20:56.505076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.058 qpair failed and we were unable to recover it. 
00:34:05.058 [2024-07-13 08:20:56.505225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.058 [2024-07-13 08:20:56.505250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.058 qpair failed and we were unable to recover it. 00:34:05.058 [2024-07-13 08:20:56.505429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.058 [2024-07-13 08:20:56.505454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.058 qpair failed and we were unable to recover it. 00:34:05.058 [2024-07-13 08:20:56.505622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.058 [2024-07-13 08:20:56.505650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.058 qpair failed and we were unable to recover it. 00:34:05.058 [2024-07-13 08:20:56.505839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.058 [2024-07-13 08:20:56.505874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.058 qpair failed and we were unable to recover it. 00:34:05.058 [2024-07-13 08:20:56.506023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.058 [2024-07-13 08:20:56.506052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.058 qpair failed and we were unable to recover it. 00:34:05.058 [2024-07-13 08:20:56.506199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.058 [2024-07-13 08:20:56.506224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.058 qpair failed and we were unable to recover it. 00:34:05.058 [2024-07-13 08:20:56.506372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.058 [2024-07-13 08:20:56.506416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.058 qpair failed and we were unable to recover it. 00:34:05.058 [2024-07-13 08:20:56.507355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.059 [2024-07-13 08:20:56.507388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.059 qpair failed and we were unable to recover it. 00:34:05.059 [2024-07-13 08:20:56.507594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.059 [2024-07-13 08:20:56.507622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.059 qpair failed and we were unable to recover it. 00:34:05.059 [2024-07-13 08:20:56.507794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.059 [2024-07-13 08:20:56.507819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.059 qpair failed and we were unable to recover it. 
00:34:05.059 [2024-07-13 08:20:56.507957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.059 [2024-07-13 08:20:56.507985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.059 qpair failed and we were unable to recover it. 00:34:05.059 [2024-07-13 08:20:56.508158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.059 [2024-07-13 08:20:56.508186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.059 qpair failed and we were unable to recover it. 00:34:05.059 [2024-07-13 08:20:56.508320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.059 [2024-07-13 08:20:56.508349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.059 qpair failed and we were unable to recover it. 00:34:05.059 [2024-07-13 08:20:56.508499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.059 [2024-07-13 08:20:56.508525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.059 qpair failed and we were unable to recover it. 00:34:05.059 [2024-07-13 08:20:56.508716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.059 [2024-07-13 08:20:56.508744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.059 qpair failed and we were unable to recover it. 00:34:05.059 [2024-07-13 08:20:56.508921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.059 [2024-07-13 08:20:56.508948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.059 qpair failed and we were unable to recover it. 00:34:05.059 [2024-07-13 08:20:56.509075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.059 [2024-07-13 08:20:56.509100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.059 qpair failed and we were unable to recover it. 00:34:05.059 [2024-07-13 08:20:56.509222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.059 [2024-07-13 08:20:56.509247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.059 qpair failed and we were unable to recover it. 00:34:05.059 [2024-07-13 08:20:56.509424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.059 [2024-07-13 08:20:56.509467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.059 qpair failed and we were unable to recover it. 00:34:05.059 [2024-07-13 08:20:56.509658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.059 [2024-07-13 08:20:56.509683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.059 qpair failed and we were unable to recover it. 
00:34:05.059 [2024-07-13 08:20:56.509805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.059 [2024-07-13 08:20:56.509829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.059 qpair failed and we were unable to recover it. 00:34:05.059 [2024-07-13 08:20:56.510155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.059 [2024-07-13 08:20:56.510197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.059 qpair failed and we were unable to recover it. 00:34:05.059 [2024-07-13 08:20:56.510396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.059 [2024-07-13 08:20:56.510421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.059 qpair failed and we were unable to recover it. 00:34:05.059 [2024-07-13 08:20:56.510559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.059 [2024-07-13 08:20:56.510587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.059 qpair failed and we were unable to recover it. 00:34:05.059 [2024-07-13 08:20:56.510756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.059 [2024-07-13 08:20:56.510785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.059 qpair failed and we were unable to recover it. 00:34:05.059 [2024-07-13 08:20:56.510976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.059 [2024-07-13 08:20:56.511002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.059 qpair failed and we were unable to recover it. 00:34:05.059 [2024-07-13 08:20:56.511172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.059 [2024-07-13 08:20:56.511200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.059 qpair failed and we were unable to recover it. 00:34:05.059 [2024-07-13 08:20:56.511338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.059 [2024-07-13 08:20:56.511366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.059 qpair failed and we were unable to recover it. 00:34:05.059 [2024-07-13 08:20:56.511561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.059 [2024-07-13 08:20:56.511586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.059 qpair failed and we were unable to recover it. 00:34:05.059 [2024-07-13 08:20:56.511761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.059 [2024-07-13 08:20:56.511789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.059 qpair failed and we were unable to recover it. 
00:34:05.059 [2024-07-13 08:20:56.511949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.059 [2024-07-13 08:20:56.511979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.059 qpair failed and we were unable to recover it. 00:34:05.059 [2024-07-13 08:20:56.512144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.059 [2024-07-13 08:20:56.512174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.059 qpair failed and we were unable to recover it. 00:34:05.059 [2024-07-13 08:20:56.512340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.059 [2024-07-13 08:20:56.512368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.059 qpair failed and we were unable to recover it. 00:34:05.059 [2024-07-13 08:20:56.512535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.059 [2024-07-13 08:20:56.512563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.059 qpair failed and we were unable to recover it. 00:34:05.059 [2024-07-13 08:20:56.512733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.059 [2024-07-13 08:20:56.512759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.059 qpair failed and we were unable to recover it. 00:34:05.059 [2024-07-13 08:20:56.512921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.059 [2024-07-13 08:20:56.512950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.059 qpair failed and we were unable to recover it. 00:34:05.059 [2024-07-13 08:20:56.513140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.059 [2024-07-13 08:20:56.513168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.059 qpair failed and we were unable to recover it. 00:34:05.059 [2024-07-13 08:20:56.513336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.059 [2024-07-13 08:20:56.513361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.059 qpair failed and we were unable to recover it. 00:34:05.059 [2024-07-13 08:20:56.513531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.059 [2024-07-13 08:20:56.513559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.059 qpair failed and we were unable to recover it. 00:34:05.059 [2024-07-13 08:20:56.513681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.060 [2024-07-13 08:20:56.513709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.060 qpair failed and we were unable to recover it. 
00:34:05.060 [2024-07-13 08:20:56.513857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.060 [2024-07-13 08:20:56.513893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.060 qpair failed and we were unable to recover it. 00:34:05.060 [2024-07-13 08:20:56.514058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.060 [2024-07-13 08:20:56.514087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.060 qpair failed and we were unable to recover it. 00:34:05.060 [2024-07-13 08:20:56.514283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.060 [2024-07-13 08:20:56.514308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.060 qpair failed and we were unable to recover it. 00:34:05.060 [2024-07-13 08:20:56.514454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.060 [2024-07-13 08:20:56.514479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.060 qpair failed and we were unable to recover it. 00:34:05.060 [2024-07-13 08:20:56.514605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.060 [2024-07-13 08:20:56.514647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.060 qpair failed and we were unable to recover it. 00:34:05.060 [2024-07-13 08:20:56.514782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.060 [2024-07-13 08:20:56.514811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.060 qpair failed and we were unable to recover it. 00:34:05.060 [2024-07-13 08:20:56.514954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.060 [2024-07-13 08:20:56.514980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.060 qpair failed and we were unable to recover it. 00:34:05.060 [2024-07-13 08:20:56.515105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.060 [2024-07-13 08:20:56.515130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.060 qpair failed and we were unable to recover it. 00:34:05.060 [2024-07-13 08:20:56.515352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.060 [2024-07-13 08:20:56.515377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.060 qpair failed and we were unable to recover it. 00:34:05.060 [2024-07-13 08:20:56.515546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.060 [2024-07-13 08:20:56.515571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.060 qpair failed and we were unable to recover it. 
00:34:05.060 [2024-07-13 08:20:56.515762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.060 [2024-07-13 08:20:56.515790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.060 qpair failed and we were unable to recover it. 00:34:05.060 [2024-07-13 08:20:56.515972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.060 [2024-07-13 08:20:56.516000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.060 qpair failed and we were unable to recover it. 00:34:05.060 [2024-07-13 08:20:56.516154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.060 [2024-07-13 08:20:56.516180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.060 qpair failed and we were unable to recover it. 00:34:05.060 [2024-07-13 08:20:56.516351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.060 [2024-07-13 08:20:56.516379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.060 qpair failed and we were unable to recover it. 00:34:05.060 [2024-07-13 08:20:56.516529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.060 [2024-07-13 08:20:56.516554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.060 qpair failed and we were unable to recover it. 00:34:05.060 [2024-07-13 08:20:56.516671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.060 [2024-07-13 08:20:56.516696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.060 qpair failed and we were unable to recover it. 00:34:05.060 [2024-07-13 08:20:56.516876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.060 [2024-07-13 08:20:56.516905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.060 qpair failed and we were unable to recover it. 00:34:05.060 [2024-07-13 08:20:56.517066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.060 [2024-07-13 08:20:56.517094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.060 qpair failed and we were unable to recover it. 00:34:05.060 [2024-07-13 08:20:56.517239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.060 [2024-07-13 08:20:56.517264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.060 qpair failed and we were unable to recover it. 00:34:05.060 [2024-07-13 08:20:56.517447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.060 [2024-07-13 08:20:56.517472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.060 qpair failed and we were unable to recover it. 
00:34:05.060 [2024-07-13 08:20:56.517649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.060 [2024-07-13 08:20:56.517678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.060 qpair failed and we were unable to recover it. 00:34:05.060 [2024-07-13 08:20:56.517816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.060 [2024-07-13 08:20:56.517842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.060 qpair failed and we were unable to recover it. 00:34:05.060 [2024-07-13 08:20:56.518029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.060 [2024-07-13 08:20:56.518055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.060 qpair failed and we were unable to recover it. 00:34:05.060 [2024-07-13 08:20:56.518207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.060 [2024-07-13 08:20:56.518234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.060 qpair failed and we were unable to recover it. 00:34:05.060 [2024-07-13 08:20:56.518403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.060 [2024-07-13 08:20:56.518428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.060 qpair failed and we were unable to recover it. 00:34:05.060 [2024-07-13 08:20:56.518551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.060 [2024-07-13 08:20:56.518592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.060 qpair failed and we were unable to recover it. 00:34:05.060 [2024-07-13 08:20:56.518721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.060 [2024-07-13 08:20:56.518748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.060 qpair failed and we were unable to recover it. 00:34:05.060 [2024-07-13 08:20:56.518920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.060 [2024-07-13 08:20:56.518946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.060 qpair failed and we were unable to recover it. 00:34:05.060 [2024-07-13 08:20:56.519101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.060 [2024-07-13 08:20:56.519127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.060 qpair failed and we were unable to recover it. 00:34:05.060 [2024-07-13 08:20:56.519300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.060 [2024-07-13 08:20:56.519326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.060 qpair failed and we were unable to recover it. 
00:34:05.060 [2024-07-13 08:20:56.519445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.060 [2024-07-13 08:20:56.519470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.060 qpair failed and we were unable to recover it. 00:34:05.060 [2024-07-13 08:20:56.519598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.060 [2024-07-13 08:20:56.519625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.060 qpair failed and we were unable to recover it. 00:34:05.060 [2024-07-13 08:20:56.519761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.060 [2024-07-13 08:20:56.519794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.060 qpair failed and we were unable to recover it. 00:34:05.060 [2024-07-13 08:20:56.519993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.060 [2024-07-13 08:20:56.520020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.060 qpair failed and we were unable to recover it. 00:34:05.060 [2024-07-13 08:20:56.520191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.060 [2024-07-13 08:20:56.520220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.061 qpair failed and we were unable to recover it. 00:34:05.061 [2024-07-13 08:20:56.520376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.061 [2024-07-13 08:20:56.520404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.061 qpair failed and we were unable to recover it. 00:34:05.061 [2024-07-13 08:20:56.520566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.061 [2024-07-13 08:20:56.520591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.061 qpair failed and we were unable to recover it. 00:34:05.061 [2024-07-13 08:20:56.520743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.061 [2024-07-13 08:20:56.520787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.061 qpair failed and we were unable to recover it. 00:34:05.061 [2024-07-13 08:20:56.520954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.061 [2024-07-13 08:20:56.520984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.061 qpair failed and we were unable to recover it. 00:34:05.061 [2024-07-13 08:20:56.521152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.061 [2024-07-13 08:20:56.521177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.061 qpair failed and we were unable to recover it. 
00:34:05.061 [2024-07-13 08:20:56.521331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.061 [2024-07-13 08:20:56.521356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.061 qpair failed and we were unable to recover it. 00:34:05.061 [2024-07-13 08:20:56.521475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.061 [2024-07-13 08:20:56.521501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.061 qpair failed and we were unable to recover it. 00:34:05.061 [2024-07-13 08:20:56.521639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.061 [2024-07-13 08:20:56.521667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.061 qpair failed and we were unable to recover it. 00:34:05.061 [2024-07-13 08:20:56.521828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.061 [2024-07-13 08:20:56.521857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.061 qpair failed and we were unable to recover it. 00:34:05.061 [2024-07-13 08:20:56.522063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.061 [2024-07-13 08:20:56.522089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.061 qpair failed and we were unable to recover it. 00:34:05.061 [2024-07-13 08:20:56.522238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.061 [2024-07-13 08:20:56.522264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.061 qpair failed and we were unable to recover it. 00:34:05.061 [2024-07-13 08:20:56.522386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.061 [2024-07-13 08:20:56.522411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.061 qpair failed and we were unable to recover it. 00:34:05.061 [2024-07-13 08:20:56.522535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.061 [2024-07-13 08:20:56.522562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.061 qpair failed and we were unable to recover it. 00:34:05.061 [2024-07-13 08:20:56.522716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.061 [2024-07-13 08:20:56.522741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.061 qpair failed and we were unable to recover it. 00:34:05.061 [2024-07-13 08:20:56.522855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.061 [2024-07-13 08:20:56.522890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.061 qpair failed and we were unable to recover it. 
00:34:05.061 [2024-07-13 08:20:56.523067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.061 [2024-07-13 08:20:56.523093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.061 qpair failed and we were unable to recover it. 00:34:05.061 [2024-07-13 08:20:56.523310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.061 [2024-07-13 08:20:56.523335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.061 qpair failed and we were unable to recover it. 00:34:05.061 [2024-07-13 08:20:56.523476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.061 [2024-07-13 08:20:56.523506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.061 qpair failed and we were unable to recover it. 00:34:05.061 [2024-07-13 08:20:56.523674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.061 [2024-07-13 08:20:56.523703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.061 qpair failed and we were unable to recover it. 00:34:05.061 [2024-07-13 08:20:56.523874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.061 [2024-07-13 08:20:56.523906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.061 qpair failed and we were unable to recover it. 00:34:05.061 [2024-07-13 08:20:56.524052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.061 [2024-07-13 08:20:56.524080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.061 qpair failed and we were unable to recover it. 00:34:05.061 [2024-07-13 08:20:56.524235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.061 [2024-07-13 08:20:56.524263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.061 qpair failed and we were unable to recover it. 00:34:05.061 [2024-07-13 08:20:56.524460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.061 [2024-07-13 08:20:56.524486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.061 qpair failed and we were unable to recover it. 00:34:05.061 [2024-07-13 08:20:56.524652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.061 [2024-07-13 08:20:56.524680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.061 qpair failed and we were unable to recover it. 00:34:05.061 [2024-07-13 08:20:56.524851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.061 [2024-07-13 08:20:56.524901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.061 qpair failed and we were unable to recover it. 
00:34:05.061 [2024-07-13 08:20:56.525052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.061 [2024-07-13 08:20:56.525077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.061 qpair failed and we were unable to recover it. 00:34:05.061 [2024-07-13 08:20:56.525210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.061 [2024-07-13 08:20:56.525235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.061 qpair failed and we were unable to recover it. 00:34:05.061 [2024-07-13 08:20:56.525379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.061 [2024-07-13 08:20:56.525404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.061 qpair failed and we were unable to recover it. 00:34:05.061 [2024-07-13 08:20:56.525551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.061 [2024-07-13 08:20:56.525576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.061 qpair failed and we were unable to recover it. 00:34:05.061 [2024-07-13 08:20:56.525703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.061 [2024-07-13 08:20:56.525732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.061 qpair failed and we were unable to recover it. 00:34:05.061 [2024-07-13 08:20:56.525861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.061 [2024-07-13 08:20:56.525896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.061 qpair failed and we were unable to recover it. 00:34:05.061 [2024-07-13 08:20:56.526052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.061 [2024-07-13 08:20:56.526078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.061 qpair failed and we were unable to recover it. 00:34:05.061 [2024-07-13 08:20:56.526212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.061 [2024-07-13 08:20:56.526240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.061 qpair failed and we were unable to recover it. 00:34:05.061 [2024-07-13 08:20:56.526449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.061 [2024-07-13 08:20:56.526474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.061 qpair failed and we were unable to recover it. 00:34:05.061 [2024-07-13 08:20:56.526629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.061 [2024-07-13 08:20:56.526654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.061 qpair failed and we were unable to recover it. 
00:34:05.061 [2024-07-13 08:20:56.526791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.061 [2024-07-13 08:20:56.526818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.061 qpair failed and we were unable to recover it. 00:34:05.061 [2024-07-13 08:20:56.526995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.061 [2024-07-13 08:20:56.527021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.062 qpair failed and we were unable to recover it. 00:34:05.062 [2024-07-13 08:20:56.527150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.062 [2024-07-13 08:20:56.527175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.062 qpair failed and we were unable to recover it. 00:34:05.062 [2024-07-13 08:20:56.527352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.062 [2024-07-13 08:20:56.527380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.062 qpair failed and we were unable to recover it. 00:34:05.062 [2024-07-13 08:20:56.527547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.062 [2024-07-13 08:20:56.527574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.062 qpair failed and we were unable to recover it. 00:34:05.062 [2024-07-13 08:20:56.527719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.062 [2024-07-13 08:20:56.527745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.062 qpair failed and we were unable to recover it. 00:34:05.062 [2024-07-13 08:20:56.528565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.062 [2024-07-13 08:20:56.528597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.062 qpair failed and we were unable to recover it. 00:34:05.062 [2024-07-13 08:20:56.528777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.062 [2024-07-13 08:20:56.528803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.062 qpair failed and we were unable to recover it. 00:34:05.062 [2024-07-13 08:20:56.528930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.062 [2024-07-13 08:20:56.528957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.062 qpair failed and we were unable to recover it. 00:34:05.062 [2024-07-13 08:20:56.529113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.062 [2024-07-13 08:20:56.529139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.062 qpair failed and we were unable to recover it. 
00:34:05.062 [2024-07-13 08:20:56.529313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.062 [2024-07-13 08:20:56.529339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.062 qpair failed and we were unable to recover it. 00:34:05.062 [2024-07-13 08:20:56.529457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.062 [2024-07-13 08:20:56.529482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.062 qpair failed and we were unable to recover it. 00:34:05.062 [2024-07-13 08:20:56.529626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.062 [2024-07-13 08:20:56.529651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.062 qpair failed and we were unable to recover it. 00:34:05.062 [2024-07-13 08:20:56.529803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.062 [2024-07-13 08:20:56.529828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.062 qpair failed and we were unable to recover it. 00:34:05.062 [2024-07-13 08:20:56.529987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.062 [2024-07-13 08:20:56.530012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.062 qpair failed and we were unable to recover it. 00:34:05.062 [2024-07-13 08:20:56.530160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.062 [2024-07-13 08:20:56.530186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.062 qpair failed and we were unable to recover it. 00:34:05.062 [2024-07-13 08:20:56.530340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.062 [2024-07-13 08:20:56.530365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.062 qpair failed and we were unable to recover it. 00:34:05.062 [2024-07-13 08:20:56.530518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.062 [2024-07-13 08:20:56.530543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.062 qpair failed and we were unable to recover it. 00:34:05.062 [2024-07-13 08:20:56.530686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.062 [2024-07-13 08:20:56.530711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.062 qpair failed and we were unable to recover it. 00:34:05.062 [2024-07-13 08:20:56.530862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.062 [2024-07-13 08:20:56.530894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.062 qpair failed and we were unable to recover it. 
00:34:05.062 [2024-07-13 08:20:56.531019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.062 [2024-07-13 08:20:56.531044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.062 qpair failed and we were unable to recover it. 00:34:05.062 [2024-07-13 08:20:56.531195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.062 [2024-07-13 08:20:56.531220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.062 qpair failed and we were unable to recover it. 00:34:05.062 [2024-07-13 08:20:56.531339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.062 [2024-07-13 08:20:56.531364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.062 qpair failed and we were unable to recover it. 00:34:05.062 [2024-07-13 08:20:56.531511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.062 [2024-07-13 08:20:56.531537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.062 qpair failed and we were unable to recover it. 00:34:05.062 [2024-07-13 08:20:56.531651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.062 [2024-07-13 08:20:56.531675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.062 qpair failed and we were unable to recover it. 00:34:05.062 [2024-07-13 08:20:56.531819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.062 [2024-07-13 08:20:56.531844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.062 qpair failed and we were unable to recover it. 00:34:05.062 [2024-07-13 08:20:56.531975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.062 [2024-07-13 08:20:56.532002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.062 qpair failed and we were unable to recover it. 00:34:05.062 [2024-07-13 08:20:56.532151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.062 [2024-07-13 08:20:56.532176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.062 qpair failed and we were unable to recover it. 00:34:05.062 [2024-07-13 08:20:56.532324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.062 [2024-07-13 08:20:56.532350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.062 qpair failed and we were unable to recover it. 00:34:05.062 [2024-07-13 08:20:56.532499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.062 [2024-07-13 08:20:56.532524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.062 qpair failed and we were unable to recover it. 
00:34:05.062 [2024-07-13 08:20:56.532641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.062 [2024-07-13 08:20:56.532669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.062 qpair failed and we were unable to recover it. 00:34:05.062 [2024-07-13 08:20:56.532821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.062 [2024-07-13 08:20:56.532846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.062 qpair failed and we were unable to recover it. 00:34:05.062 [2024-07-13 08:20:56.533018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.062 [2024-07-13 08:20:56.533044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.062 qpair failed and we were unable to recover it. 00:34:05.062 [2024-07-13 08:20:56.533189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.062 [2024-07-13 08:20:56.533215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.062 qpair failed and we were unable to recover it. 00:34:05.062 [2024-07-13 08:20:56.533366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.062 [2024-07-13 08:20:56.533392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.062 qpair failed and we were unable to recover it. 00:34:05.062 [2024-07-13 08:20:56.533529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.062 [2024-07-13 08:20:56.533557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.062 qpair failed and we were unable to recover it. 00:34:05.062 [2024-07-13 08:20:56.533686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.062 [2024-07-13 08:20:56.533712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.063 qpair failed and we were unable to recover it. 00:34:05.063 [2024-07-13 08:20:56.533862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.063 [2024-07-13 08:20:56.533896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.063 qpair failed and we were unable to recover it. 00:34:05.063 [2024-07-13 08:20:56.534012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.063 [2024-07-13 08:20:56.534037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.063 qpair failed and we were unable to recover it. 00:34:05.063 [2024-07-13 08:20:56.534166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.063 [2024-07-13 08:20:56.534191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.063 qpair failed and we were unable to recover it. 
00:34:05.063 [2024-07-13 08:20:56.534332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.063 [2024-07-13 08:20:56.534357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.063 qpair failed and we were unable to recover it. 00:34:05.063 [2024-07-13 08:20:56.534539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.063 [2024-07-13 08:20:56.534564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.063 qpair failed and we were unable to recover it. 00:34:05.063 [2024-07-13 08:20:56.534718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.063 [2024-07-13 08:20:56.534747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.063 qpair failed and we were unable to recover it. 00:34:05.063 [2024-07-13 08:20:56.534887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.063 [2024-07-13 08:20:56.534914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.063 qpair failed and we were unable to recover it. 00:34:05.063 [2024-07-13 08:20:56.535094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.063 [2024-07-13 08:20:56.535119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.063 qpair failed and we were unable to recover it. 00:34:05.063 [2024-07-13 08:20:56.535242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.063 [2024-07-13 08:20:56.535266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.063 qpair failed and we were unable to recover it. 00:34:05.063 [2024-07-13 08:20:56.535387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.063 [2024-07-13 08:20:56.535412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.063 qpair failed and we were unable to recover it. 00:34:05.063 [2024-07-13 08:20:56.535527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.063 [2024-07-13 08:20:56.535552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.063 qpair failed and we were unable to recover it. 00:34:05.063 [2024-07-13 08:20:56.535703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.063 [2024-07-13 08:20:56.535729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.063 qpair failed and we were unable to recover it. 00:34:05.063 [2024-07-13 08:20:56.535863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.063 [2024-07-13 08:20:56.535909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.063 qpair failed and we were unable to recover it. 
00:34:05.063 [2024-07-13 08:20:56.536039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.063 [2024-07-13 08:20:56.536066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.063 qpair failed and we were unable to recover it. 00:34:05.063 [2024-07-13 08:20:56.536202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.063 [2024-07-13 08:20:56.536229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.063 qpair failed and we were unable to recover it. 00:34:05.063 [2024-07-13 08:20:56.536384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.063 [2024-07-13 08:20:56.536410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.063 qpair failed and we were unable to recover it. 00:34:05.063 [2024-07-13 08:20:56.536555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.063 [2024-07-13 08:20:56.536580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.063 qpair failed and we were unable to recover it. 00:34:05.063 [2024-07-13 08:20:56.536754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.063 [2024-07-13 08:20:56.536780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.063 qpair failed and we were unable to recover it. 00:34:05.063 [2024-07-13 08:20:56.536922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.063 [2024-07-13 08:20:56.536949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.063 qpair failed and we were unable to recover it. 00:34:05.063 [2024-07-13 08:20:56.537072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.063 [2024-07-13 08:20:56.537097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.063 qpair failed and we were unable to recover it. 00:34:05.063 [2024-07-13 08:20:56.537248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.063 [2024-07-13 08:20:56.537278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.063 qpair failed and we were unable to recover it. 00:34:05.063 [2024-07-13 08:20:56.537422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.063 [2024-07-13 08:20:56.537447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.063 qpair failed and we were unable to recover it. 00:34:05.063 [2024-07-13 08:20:56.537602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.063 [2024-07-13 08:20:56.537628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.063 qpair failed and we were unable to recover it. 
00:34:05.064 [2024-07-13 08:20:56.544477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:05.064 [2024-07-13 08:20:56.544502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420
00:34:05.064 qpair failed and we were unable to recover it.
00:34:05.064 [2024-07-13 08:20:56.544675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:05.065 [2024-07-13 08:20:56.544700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420
00:34:05.065 qpair failed and we were unable to recover it.
00:34:05.065 [2024-07-13 08:20:56.544826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:05.065 [2024-07-13 08:20:56.544851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420
00:34:05.065 qpair failed and we were unable to recover it.
00:34:05.065 [2024-07-13 08:20:56.545001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:05.065 [2024-07-13 08:20:56.545041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420
00:34:05.065 qpair failed and we were unable to recover it.
00:34:05.065 [2024-07-13 08:20:56.545169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:05.065 [2024-07-13 08:20:56.545196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420
00:34:05.065 qpair failed and we were unable to recover it.
00:34:05.065 [2024-07-13 08:20:56.545345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:05.065 [2024-07-13 08:20:56.545371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420
00:34:05.065 qpair failed and we were unable to recover it.
00:34:05.065 [2024-07-13 08:20:56.545490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:05.065 [2024-07-13 08:20:56.545515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420
00:34:05.065 qpair failed and we were unable to recover it.
00:34:05.065 [2024-07-13 08:20:56.545685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:05.065 [2024-07-13 08:20:56.545711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420
00:34:05.065 qpair failed and we were unable to recover it.
00:34:05.065 [2024-07-13 08:20:56.545873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:05.065 [2024-07-13 08:20:56.545900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420
00:34:05.065 qpair failed and we were unable to recover it.
00:34:05.065 [2024-07-13 08:20:56.546028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:05.065 [2024-07-13 08:20:56.546054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420
00:34:05.065 qpair failed and we were unable to recover it.
00:34:05.069 [2024-07-13 08:20:56.573159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.069 [2024-07-13 08:20:56.573185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.069 qpair failed and we were unable to recover it. 00:34:05.069 [2024-07-13 08:20:56.573342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.069 [2024-07-13 08:20:56.573367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.069 qpair failed and we were unable to recover it. 00:34:05.069 [2024-07-13 08:20:56.573508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.069 [2024-07-13 08:20:56.573534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.069 qpair failed and we were unable to recover it. 00:34:05.069 [2024-07-13 08:20:56.573658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.069 [2024-07-13 08:20:56.573683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.069 qpair failed and we were unable to recover it. 00:34:05.069 [2024-07-13 08:20:56.573835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.069 [2024-07-13 08:20:56.573861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.069 qpair failed and we were unable to recover it. 00:34:05.069 [2024-07-13 08:20:56.574015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.069 [2024-07-13 08:20:56.574041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.069 qpair failed and we were unable to recover it. 00:34:05.069 [2024-07-13 08:20:56.574164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.069 [2024-07-13 08:20:56.574189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.069 qpair failed and we were unable to recover it. 00:34:05.069 [2024-07-13 08:20:56.574341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.069 [2024-07-13 08:20:56.574366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.069 qpair failed and we were unable to recover it. 00:34:05.069 [2024-07-13 08:20:56.574513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.069 [2024-07-13 08:20:56.574538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.069 qpair failed and we were unable to recover it. 00:34:05.069 [2024-07-13 08:20:56.574688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.069 [2024-07-13 08:20:56.574715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.069 qpair failed and we were unable to recover it. 
00:34:05.069 [2024-07-13 08:20:56.574834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.069 [2024-07-13 08:20:56.574860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.069 qpair failed and we were unable to recover it. 00:34:05.069 [2024-07-13 08:20:56.575059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.070 [2024-07-13 08:20:56.575085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.070 qpair failed and we were unable to recover it. 00:34:05.070 [2024-07-13 08:20:56.575219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.070 [2024-07-13 08:20:56.575245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.070 qpair failed and we were unable to recover it. 00:34:05.070 [2024-07-13 08:20:56.575365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.070 [2024-07-13 08:20:56.575391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.070 qpair failed and we were unable to recover it. 00:34:05.070 [2024-07-13 08:20:56.575563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.070 [2024-07-13 08:20:56.575588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.070 qpair failed and we were unable to recover it. 00:34:05.070 [2024-07-13 08:20:56.575736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.070 [2024-07-13 08:20:56.575761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.070 qpair failed and we were unable to recover it. 00:34:05.070 [2024-07-13 08:20:56.575915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.070 [2024-07-13 08:20:56.575942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.070 qpair failed and we were unable to recover it. 00:34:05.070 [2024-07-13 08:20:56.576092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.070 [2024-07-13 08:20:56.576118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.070 qpair failed and we were unable to recover it. 00:34:05.070 [2024-07-13 08:20:56.576272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.070 [2024-07-13 08:20:56.576297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.070 qpair failed and we were unable to recover it. 00:34:05.070 [2024-07-13 08:20:56.576485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.070 [2024-07-13 08:20:56.576510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.070 qpair failed and we were unable to recover it. 
00:34:05.070 [2024-07-13 08:20:56.576686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.070 [2024-07-13 08:20:56.576712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.070 qpair failed and we were unable to recover it. 00:34:05.070 [2024-07-13 08:20:56.576839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.070 [2024-07-13 08:20:56.576872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.070 qpair failed and we were unable to recover it. 00:34:05.070 [2024-07-13 08:20:56.577018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.070 [2024-07-13 08:20:56.577043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.070 qpair failed and we were unable to recover it. 00:34:05.070 [2024-07-13 08:20:56.577192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.070 [2024-07-13 08:20:56.577219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.070 qpair failed and we were unable to recover it. 00:34:05.070 [2024-07-13 08:20:56.577399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.070 [2024-07-13 08:20:56.577424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.070 qpair failed and we were unable to recover it. 00:34:05.070 [2024-07-13 08:20:56.577574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.070 [2024-07-13 08:20:56.577601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.070 qpair failed and we were unable to recover it. 00:34:05.070 [2024-07-13 08:20:56.577774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.070 [2024-07-13 08:20:56.577799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.070 qpair failed and we were unable to recover it. 00:34:05.070 [2024-07-13 08:20:56.577927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.070 [2024-07-13 08:20:56.577953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.070 qpair failed and we were unable to recover it. 00:34:05.070 [2024-07-13 08:20:56.578129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.070 [2024-07-13 08:20:56.578155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.070 qpair failed and we were unable to recover it. 00:34:05.070 [2024-07-13 08:20:56.578330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.070 [2024-07-13 08:20:56.578355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.070 qpair failed and we were unable to recover it. 
00:34:05.070 [2024-07-13 08:20:56.578534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.070 [2024-07-13 08:20:56.578559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.070 qpair failed and we were unable to recover it. 00:34:05.070 [2024-07-13 08:20:56.578715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.070 [2024-07-13 08:20:56.578740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.070 qpair failed and we were unable to recover it. 00:34:05.070 [2024-07-13 08:20:56.578967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.070 [2024-07-13 08:20:56.578992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.070 qpair failed and we were unable to recover it. 00:34:05.070 [2024-07-13 08:20:56.579142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.070 [2024-07-13 08:20:56.579168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.070 qpair failed and we were unable to recover it. 00:34:05.070 [2024-07-13 08:20:56.579346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.070 [2024-07-13 08:20:56.579372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.070 qpair failed and we were unable to recover it. 00:34:05.070 [2024-07-13 08:20:56.579517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.070 [2024-07-13 08:20:56.579543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.070 qpair failed and we were unable to recover it. 00:34:05.070 [2024-07-13 08:20:56.579667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.070 [2024-07-13 08:20:56.579692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.070 qpair failed and we were unable to recover it. 00:34:05.070 [2024-07-13 08:20:56.579816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.070 [2024-07-13 08:20:56.579843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.070 qpair failed and we were unable to recover it. 00:34:05.070 [2024-07-13 08:20:56.580028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.070 [2024-07-13 08:20:56.580054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.070 qpair failed and we were unable to recover it. 00:34:05.070 [2024-07-13 08:20:56.580186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.070 [2024-07-13 08:20:56.580212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.070 qpair failed and we were unable to recover it. 
00:34:05.070 [2024-07-13 08:20:56.580336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.070 [2024-07-13 08:20:56.580361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.070 qpair failed and we were unable to recover it. 00:34:05.070 [2024-07-13 08:20:56.580512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.070 [2024-07-13 08:20:56.580538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.070 qpair failed and we were unable to recover it. 00:34:05.070 [2024-07-13 08:20:56.580687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.070 [2024-07-13 08:20:56.580712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.070 qpair failed and we were unable to recover it. 00:34:05.070 [2024-07-13 08:20:56.580834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.070 [2024-07-13 08:20:56.580860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.070 qpair failed and we were unable to recover it. 00:34:05.071 [2024-07-13 08:20:56.581024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.071 [2024-07-13 08:20:56.581050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.071 qpair failed and we were unable to recover it. 00:34:05.071 [2024-07-13 08:20:56.581169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.071 [2024-07-13 08:20:56.581195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.071 qpair failed and we were unable to recover it. 00:34:05.071 [2024-07-13 08:20:56.581377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.071 [2024-07-13 08:20:56.581403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.071 qpair failed and we were unable to recover it. 00:34:05.071 [2024-07-13 08:20:56.581530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.071 [2024-07-13 08:20:56.581556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.071 qpair failed and we were unable to recover it. 00:34:05.071 [2024-07-13 08:20:56.581704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.071 [2024-07-13 08:20:56.581729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.071 qpair failed and we were unable to recover it. 00:34:05.071 [2024-07-13 08:20:56.581961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.071 [2024-07-13 08:20:56.581987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.071 qpair failed and we were unable to recover it. 
00:34:05.071 [2024-07-13 08:20:56.582173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.071 [2024-07-13 08:20:56.582199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.071 qpair failed and we were unable to recover it. 00:34:05.071 [2024-07-13 08:20:56.582345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.071 [2024-07-13 08:20:56.582371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.071 qpair failed and we were unable to recover it. 00:34:05.071 [2024-07-13 08:20:56.582532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.071 [2024-07-13 08:20:56.582559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.071 qpair failed and we were unable to recover it. 00:34:05.071 [2024-07-13 08:20:56.582707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.071 [2024-07-13 08:20:56.582732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.071 qpair failed and we were unable to recover it. 00:34:05.071 [2024-07-13 08:20:56.582891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.071 [2024-07-13 08:20:56.582917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.071 qpair failed and we were unable to recover it. 00:34:05.071 [2024-07-13 08:20:56.583097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.071 [2024-07-13 08:20:56.583124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.071 qpair failed and we were unable to recover it. 00:34:05.071 [2024-07-13 08:20:56.583276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.071 [2024-07-13 08:20:56.583302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.071 qpair failed and we were unable to recover it. 00:34:05.071 [2024-07-13 08:20:56.583477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.071 [2024-07-13 08:20:56.583502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.071 qpair failed and we were unable to recover it. 00:34:05.071 [2024-07-13 08:20:56.583636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.071 [2024-07-13 08:20:56.583662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.071 qpair failed and we were unable to recover it. 00:34:05.071 [2024-07-13 08:20:56.583814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.071 [2024-07-13 08:20:56.583840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.071 qpair failed and we were unable to recover it. 
00:34:05.071 [2024-07-13 08:20:56.584074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.071 [2024-07-13 08:20:56.584100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.071 qpair failed and we were unable to recover it. 00:34:05.071 [2024-07-13 08:20:56.584285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.071 [2024-07-13 08:20:56.584311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.071 qpair failed and we were unable to recover it. 00:34:05.071 [2024-07-13 08:20:56.584455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.071 [2024-07-13 08:20:56.584481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.071 qpair failed and we were unable to recover it. 00:34:05.071 [2024-07-13 08:20:56.584654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.071 [2024-07-13 08:20:56.584680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.071 qpair failed and we were unable to recover it. 00:34:05.071 [2024-07-13 08:20:56.584832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.071 [2024-07-13 08:20:56.584857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.071 qpair failed and we were unable to recover it. 00:34:05.071 [2024-07-13 08:20:56.585019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.071 [2024-07-13 08:20:56.585049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.071 qpair failed and we were unable to recover it. 00:34:05.071 [2024-07-13 08:20:56.585193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.071 [2024-07-13 08:20:56.585218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.071 qpair failed and we were unable to recover it. 00:34:05.071 [2024-07-13 08:20:56.585399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.071 [2024-07-13 08:20:56.585424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.071 qpair failed and we were unable to recover it. 00:34:05.071 [2024-07-13 08:20:56.585573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.071 [2024-07-13 08:20:56.585599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.071 qpair failed and we were unable to recover it. 00:34:05.071 [2024-07-13 08:20:56.585826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.071 [2024-07-13 08:20:56.585852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.071 qpair failed and we were unable to recover it. 
00:34:05.071 [2024-07-13 08:20:56.586040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.071 [2024-07-13 08:20:56.586067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.071 qpair failed and we were unable to recover it. 00:34:05.071 [2024-07-13 08:20:56.586217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.071 [2024-07-13 08:20:56.586242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.071 qpair failed and we were unable to recover it. 00:34:05.071 [2024-07-13 08:20:56.586358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.071 [2024-07-13 08:20:56.586384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.071 qpair failed and we were unable to recover it. 00:34:05.071 [2024-07-13 08:20:56.586510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.071 [2024-07-13 08:20:56.586536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.071 qpair failed and we were unable to recover it. 00:34:05.071 [2024-07-13 08:20:56.586690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.071 [2024-07-13 08:20:56.586716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.071 qpair failed and we were unable to recover it. 00:34:05.071 [2024-07-13 08:20:56.586881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.071 [2024-07-13 08:20:56.586907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.071 qpair failed and we were unable to recover it. 00:34:05.071 [2024-07-13 08:20:56.587088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.071 [2024-07-13 08:20:56.587114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.071 qpair failed and we were unable to recover it. 00:34:05.071 [2024-07-13 08:20:56.587259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.071 [2024-07-13 08:20:56.587284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.071 qpair failed and we were unable to recover it. 00:34:05.071 [2024-07-13 08:20:56.587437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.071 [2024-07-13 08:20:56.587463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.071 qpair failed and we were unable to recover it. 00:34:05.071 [2024-07-13 08:20:56.587647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.072 [2024-07-13 08:20:56.587673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.072 qpair failed and we were unable to recover it. 
00:34:05.072 [2024-07-13 08:20:56.587822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.072 [2024-07-13 08:20:56.587847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.072 qpair failed and we were unable to recover it. 00:34:05.072 [2024-07-13 08:20:56.588016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.072 [2024-07-13 08:20:56.588042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.072 qpair failed and we were unable to recover it. 00:34:05.072 [2024-07-13 08:20:56.588194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.072 [2024-07-13 08:20:56.588220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.072 qpair failed and we were unable to recover it. 00:34:05.072 [2024-07-13 08:20:56.588393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.072 [2024-07-13 08:20:56.588418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.072 qpair failed and we were unable to recover it. 00:34:05.072 [2024-07-13 08:20:56.588566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.072 [2024-07-13 08:20:56.588591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.072 qpair failed and we were unable to recover it. 00:34:05.072 [2024-07-13 08:20:56.588741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.072 [2024-07-13 08:20:56.588767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.072 qpair failed and we were unable to recover it. 00:34:05.072 [2024-07-13 08:20:56.588947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.072 [2024-07-13 08:20:56.588974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.072 qpair failed and we were unable to recover it. 00:34:05.072 [2024-07-13 08:20:56.589128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.072 [2024-07-13 08:20:56.589154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.072 qpair failed and we were unable to recover it. 00:34:05.072 [2024-07-13 08:20:56.589305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.072 [2024-07-13 08:20:56.589331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.072 qpair failed and we were unable to recover it. 00:34:05.072 [2024-07-13 08:20:56.589511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.072 [2024-07-13 08:20:56.589536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.072 qpair failed and we were unable to recover it. 
00:34:05.072 [2024-07-13 08:20:56.589763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.072 [2024-07-13 08:20:56.589789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.072 qpair failed and we were unable to recover it. 00:34:05.072 [2024-07-13 08:20:56.589943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.072 [2024-07-13 08:20:56.589969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.072 qpair failed and we were unable to recover it. 00:34:05.072 [2024-07-13 08:20:56.590119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.072 [2024-07-13 08:20:56.590146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.072 qpair failed and we were unable to recover it. 00:34:05.072 [2024-07-13 08:20:56.590305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.072 [2024-07-13 08:20:56.590330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.072 qpair failed and we were unable to recover it. 00:34:05.072 [2024-07-13 08:20:56.590510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.072 [2024-07-13 08:20:56.590535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.072 qpair failed and we were unable to recover it. 00:34:05.072 [2024-07-13 08:20:56.590710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.072 [2024-07-13 08:20:56.590735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.072 qpair failed and we were unable to recover it. 00:34:05.072 [2024-07-13 08:20:56.590910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.072 [2024-07-13 08:20:56.590936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.072 qpair failed and we were unable to recover it. 00:34:05.072 [2024-07-13 08:20:56.591091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.072 [2024-07-13 08:20:56.591117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.072 qpair failed and we were unable to recover it. 00:34:05.072 [2024-07-13 08:20:56.591263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.072 [2024-07-13 08:20:56.591289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.072 qpair failed and we were unable to recover it. 00:34:05.072 [2024-07-13 08:20:56.591435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.072 [2024-07-13 08:20:56.591461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.072 qpair failed and we were unable to recover it. 
00:34:05.072 [2024-07-13 08:20:56.591614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.072 [2024-07-13 08:20:56.591639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.072 qpair failed and we were unable to recover it. 00:34:05.072 [2024-07-13 08:20:56.591817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.072 [2024-07-13 08:20:56.591843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.072 qpair failed and we were unable to recover it. 00:34:05.072 [2024-07-13 08:20:56.592024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.072 [2024-07-13 08:20:56.592050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.072 qpair failed and we were unable to recover it. 00:34:05.072 [2024-07-13 08:20:56.592174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.072 [2024-07-13 08:20:56.592199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.072 qpair failed and we were unable to recover it. 00:34:05.072 [2024-07-13 08:20:56.592353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.072 [2024-07-13 08:20:56.592378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.072 qpair failed and we were unable to recover it. 00:34:05.072 [2024-07-13 08:20:56.592530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.072 [2024-07-13 08:20:56.592559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.072 qpair failed and we were unable to recover it. 00:34:05.072 [2024-07-13 08:20:56.592683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.072 [2024-07-13 08:20:56.592708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.072 qpair failed and we were unable to recover it. 00:34:05.072 [2024-07-13 08:20:56.592887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.072 [2024-07-13 08:20:56.592914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.072 qpair failed and we were unable to recover it. 00:34:05.072 [2024-07-13 08:20:56.593058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.072 [2024-07-13 08:20:56.593084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.072 qpair failed and we were unable to recover it. 00:34:05.072 [2024-07-13 08:20:56.593205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.072 [2024-07-13 08:20:56.593230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.072 qpair failed and we were unable to recover it. 
00:34:05.072 [2024-07-13 08:20:56.593380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.072 [2024-07-13 08:20:56.593406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.072 qpair failed and we were unable to recover it. 00:34:05.072 [2024-07-13 08:20:56.593529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.072 [2024-07-13 08:20:56.593554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.072 qpair failed and we were unable to recover it. 00:34:05.072 [2024-07-13 08:20:56.593732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.072 [2024-07-13 08:20:56.593758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.072 qpair failed and we were unable to recover it. 00:34:05.072 [2024-07-13 08:20:56.593931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.072 [2024-07-13 08:20:56.593957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.072 qpair failed and we were unable to recover it. 00:34:05.072 [2024-07-13 08:20:56.594087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.072 [2024-07-13 08:20:56.594112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.072 qpair failed and we were unable to recover it. 00:34:05.073 [2024-07-13 08:20:56.594241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.073 [2024-07-13 08:20:56.594266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.073 qpair failed and we were unable to recover it. 00:34:05.073 [2024-07-13 08:20:56.594384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.073 [2024-07-13 08:20:56.594409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.073 qpair failed and we were unable to recover it. 00:34:05.073 [2024-07-13 08:20:56.594554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.073 [2024-07-13 08:20:56.594581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.073 qpair failed and we were unable to recover it. 00:34:05.073 [2024-07-13 08:20:56.594734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.073 [2024-07-13 08:20:56.594760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.073 qpair failed and we were unable to recover it. 00:34:05.073 [2024-07-13 08:20:56.594911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.073 [2024-07-13 08:20:56.594938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.073 qpair failed and we were unable to recover it. 
00:34:05.073 [2024-07-13 08:20:56.595058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.073 [2024-07-13 08:20:56.595084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.073 qpair failed and we were unable to recover it. 00:34:05.073 [2024-07-13 08:20:56.595239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.073 [2024-07-13 08:20:56.595264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.073 qpair failed and we were unable to recover it. 00:34:05.073 [2024-07-13 08:20:56.595442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.073 [2024-07-13 08:20:56.595467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.073 qpair failed and we were unable to recover it. 00:34:05.073 [2024-07-13 08:20:56.595613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.073 [2024-07-13 08:20:56.595639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.073 qpair failed and we were unable to recover it. 00:34:05.073 [2024-07-13 08:20:56.595818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.073 [2024-07-13 08:20:56.595843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.073 qpair failed and we were unable to recover it. 00:34:05.073 [2024-07-13 08:20:56.595971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.073 [2024-07-13 08:20:56.595998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.073 qpair failed and we were unable to recover it. 00:34:05.073 [2024-07-13 08:20:56.596120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.073 [2024-07-13 08:20:56.596145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.073 qpair failed and we were unable to recover it. 00:34:05.073 [2024-07-13 08:20:56.596300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.073 [2024-07-13 08:20:56.596326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.073 qpair failed and we were unable to recover it. 00:34:05.073 [2024-07-13 08:20:56.596476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.073 [2024-07-13 08:20:56.596501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.073 qpair failed and we were unable to recover it. 00:34:05.073 [2024-07-13 08:20:56.596653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.073 [2024-07-13 08:20:56.596679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.073 qpair failed and we were unable to recover it. 
00:34:05.073 [2024-07-13 08:20:56.596834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:05.073 [2024-07-13 08:20:56.596860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420
00:34:05.073 qpair failed and we were unable to recover it.
00:34:05.073 [... the same three-line error (posix.c:1038 connect() failed, errno = 111; nvme_tcp.c:2383 sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats roughly 200 more times, with the embedded timestamps advancing from 08:20:56.596991 through 08:20:56.634894; only the timestamps differ between repetitions ...]
00:34:05.079 [2024-07-13 08:20:56.635046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.079 [2024-07-13 08:20:56.635071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.079 qpair failed and we were unable to recover it. 00:34:05.079 [2024-07-13 08:20:56.635217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.079 [2024-07-13 08:20:56.635243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.079 qpair failed and we were unable to recover it. 00:34:05.079 [2024-07-13 08:20:56.635392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.079 [2024-07-13 08:20:56.635417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.079 qpair failed and we were unable to recover it. 00:34:05.079 [2024-07-13 08:20:56.635574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.079 [2024-07-13 08:20:56.635599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.079 qpair failed and we were unable to recover it. 00:34:05.079 [2024-07-13 08:20:56.635723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.079 [2024-07-13 08:20:56.635748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.079 qpair failed and we were unable to recover it. 00:34:05.079 [2024-07-13 08:20:56.635901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.079 [2024-07-13 08:20:56.635931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.079 qpair failed and we were unable to recover it. 00:34:05.079 [2024-07-13 08:20:56.636086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.079 [2024-07-13 08:20:56.636113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.080 qpair failed and we were unable to recover it. 00:34:05.080 [2024-07-13 08:20:56.636290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.080 [2024-07-13 08:20:56.636315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.080 qpair failed and we were unable to recover it. 00:34:05.080 [2024-07-13 08:20:56.636438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.080 [2024-07-13 08:20:56.636463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.080 qpair failed and we were unable to recover it. 00:34:05.080 [2024-07-13 08:20:56.636619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.080 [2024-07-13 08:20:56.636645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.080 qpair failed and we were unable to recover it. 
00:34:05.080 [2024-07-13 08:20:56.636764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.080 [2024-07-13 08:20:56.636789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.080 qpair failed and we were unable to recover it. 00:34:05.080 [2024-07-13 08:20:56.636935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.080 [2024-07-13 08:20:56.636961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.080 qpair failed and we were unable to recover it. 00:34:05.080 [2024-07-13 08:20:56.637139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.080 [2024-07-13 08:20:56.637164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.080 qpair failed and we were unable to recover it. 00:34:05.080 [2024-07-13 08:20:56.637311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.080 [2024-07-13 08:20:56.637336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.080 qpair failed and we were unable to recover it. 00:34:05.080 [2024-07-13 08:20:56.637479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.080 [2024-07-13 08:20:56.637504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.080 qpair failed and we were unable to recover it. 00:34:05.080 [2024-07-13 08:20:56.637629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.080 [2024-07-13 08:20:56.637655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.080 qpair failed and we were unable to recover it. 00:34:05.080 [2024-07-13 08:20:56.637829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.080 [2024-07-13 08:20:56.637855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.080 qpair failed and we were unable to recover it. 00:34:05.080 [2024-07-13 08:20:56.638045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.080 [2024-07-13 08:20:56.638071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.080 qpair failed and we were unable to recover it. 00:34:05.080 [2024-07-13 08:20:56.638187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.080 [2024-07-13 08:20:56.638213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.080 qpair failed and we were unable to recover it. 00:34:05.080 [2024-07-13 08:20:56.638386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.080 [2024-07-13 08:20:56.638412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.080 qpair failed and we were unable to recover it. 
00:34:05.080 [2024-07-13 08:20:56.638559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.080 [2024-07-13 08:20:56.638584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.080 qpair failed and we were unable to recover it. 00:34:05.080 [2024-07-13 08:20:56.638702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.080 [2024-07-13 08:20:56.638727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.080 qpair failed and we were unable to recover it. 00:34:05.080 [2024-07-13 08:20:56.638911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.080 [2024-07-13 08:20:56.638937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.080 qpair failed and we were unable to recover it. 00:34:05.080 [2024-07-13 08:20:56.639092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.080 [2024-07-13 08:20:56.639118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.080 qpair failed and we were unable to recover it. 00:34:05.080 [2024-07-13 08:20:56.639238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.080 [2024-07-13 08:20:56.639264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.080 qpair failed and we were unable to recover it. 00:34:05.080 [2024-07-13 08:20:56.639383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.080 [2024-07-13 08:20:56.639408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.080 qpair failed and we were unable to recover it. 00:34:05.080 [2024-07-13 08:20:56.639557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.080 [2024-07-13 08:20:56.639582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.080 qpair failed and we were unable to recover it. 00:34:05.080 [2024-07-13 08:20:56.639734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.080 [2024-07-13 08:20:56.639759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.080 qpair failed and we were unable to recover it. 00:34:05.080 [2024-07-13 08:20:56.639907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.080 [2024-07-13 08:20:56.639933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.080 qpair failed and we were unable to recover it. 00:34:05.080 [2024-07-13 08:20:56.640079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.080 [2024-07-13 08:20:56.640104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.080 qpair failed and we were unable to recover it. 
00:34:05.080 [2024-07-13 08:20:56.640278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.080 [2024-07-13 08:20:56.640303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.080 qpair failed and we were unable to recover it. 00:34:05.080 [2024-07-13 08:20:56.640453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.080 [2024-07-13 08:20:56.640478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.080 qpair failed and we were unable to recover it. 00:34:05.080 [2024-07-13 08:20:56.640639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.080 [2024-07-13 08:20:56.640666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.080 qpair failed and we were unable to recover it. 00:34:05.080 [2024-07-13 08:20:56.640794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.080 [2024-07-13 08:20:56.640820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.080 qpair failed and we were unable to recover it. 00:34:05.080 [2024-07-13 08:20:56.641003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.080 [2024-07-13 08:20:56.641029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.080 qpair failed and we were unable to recover it. 00:34:05.080 [2024-07-13 08:20:56.641203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.080 [2024-07-13 08:20:56.641229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.080 qpair failed and we were unable to recover it. 00:34:05.080 [2024-07-13 08:20:56.641378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.080 [2024-07-13 08:20:56.641403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.080 qpair failed and we were unable to recover it. 00:34:05.080 [2024-07-13 08:20:56.641558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.080 [2024-07-13 08:20:56.641583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.080 qpair failed and we were unable to recover it. 00:34:05.080 [2024-07-13 08:20:56.641727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.080 [2024-07-13 08:20:56.641753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.080 qpair failed and we were unable to recover it. 00:34:05.080 [2024-07-13 08:20:56.641912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.080 [2024-07-13 08:20:56.641938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.080 qpair failed and we were unable to recover it. 
00:34:05.080 [2024-07-13 08:20:56.642118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.080 [2024-07-13 08:20:56.642143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.080 qpair failed and we were unable to recover it. 00:34:05.080 [2024-07-13 08:20:56.642268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.080 [2024-07-13 08:20:56.642294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.080 qpair failed and we were unable to recover it. 00:34:05.080 [2024-07-13 08:20:56.642424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.080 [2024-07-13 08:20:56.642450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.080 qpair failed and we were unable to recover it. 00:34:05.081 [2024-07-13 08:20:56.642622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.081 [2024-07-13 08:20:56.642647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.081 qpair failed and we were unable to recover it. 00:34:05.081 [2024-07-13 08:20:56.642800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.081 [2024-07-13 08:20:56.642827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.081 qpair failed and we were unable to recover it. 00:34:05.081 [2024-07-13 08:20:56.642984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.081 [2024-07-13 08:20:56.643015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.081 qpair failed and we were unable to recover it. 00:34:05.081 [2024-07-13 08:20:56.643189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.081 [2024-07-13 08:20:56.643215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.081 qpair failed and we were unable to recover it. 00:34:05.081 [2024-07-13 08:20:56.643363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.081 [2024-07-13 08:20:56.643388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.081 qpair failed and we were unable to recover it. 00:34:05.081 [2024-07-13 08:20:56.643537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.081 [2024-07-13 08:20:56.643562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.081 qpair failed and we were unable to recover it. 00:34:05.081 [2024-07-13 08:20:56.643732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.081 [2024-07-13 08:20:56.643758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.081 qpair failed and we were unable to recover it. 
00:34:05.081 [2024-07-13 08:20:56.643913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.081 [2024-07-13 08:20:56.643940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.081 qpair failed and we were unable to recover it. 00:34:05.081 [2024-07-13 08:20:56.644085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.081 [2024-07-13 08:20:56.644110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.081 qpair failed and we were unable to recover it. 00:34:05.081 [2024-07-13 08:20:56.644256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.081 [2024-07-13 08:20:56.644281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.081 qpair failed and we were unable to recover it. 00:34:05.081 [2024-07-13 08:20:56.644410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.081 [2024-07-13 08:20:56.644435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.081 qpair failed and we were unable to recover it. 00:34:05.081 [2024-07-13 08:20:56.644588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.081 [2024-07-13 08:20:56.644613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.081 qpair failed and we were unable to recover it. 00:34:05.081 [2024-07-13 08:20:56.644800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.081 [2024-07-13 08:20:56.644825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.081 qpair failed and we were unable to recover it. 00:34:05.081 [2024-07-13 08:20:56.644955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.081 [2024-07-13 08:20:56.644982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.081 qpair failed and we were unable to recover it. 00:34:05.081 [2024-07-13 08:20:56.645111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.081 [2024-07-13 08:20:56.645137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.081 qpair failed and we were unable to recover it. 00:34:05.081 [2024-07-13 08:20:56.645289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.081 [2024-07-13 08:20:56.645315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.081 qpair failed and we were unable to recover it. 00:34:05.081 [2024-07-13 08:20:56.645466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.081 [2024-07-13 08:20:56.645492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.081 qpair failed and we were unable to recover it. 
00:34:05.081 [2024-07-13 08:20:56.645649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.081 [2024-07-13 08:20:56.645675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.081 qpair failed and we were unable to recover it. 00:34:05.081 [2024-07-13 08:20:56.645823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.081 [2024-07-13 08:20:56.645849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.081 qpair failed and we were unable to recover it. 00:34:05.081 [2024-07-13 08:20:56.646033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.081 [2024-07-13 08:20:56.646058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.081 qpair failed and we were unable to recover it. 00:34:05.081 [2024-07-13 08:20:56.646236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.081 [2024-07-13 08:20:56.646262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.081 qpair failed and we were unable to recover it. 00:34:05.081 [2024-07-13 08:20:56.646436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.081 [2024-07-13 08:20:56.646461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.081 qpair failed and we were unable to recover it. 00:34:05.081 [2024-07-13 08:20:56.646637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.081 [2024-07-13 08:20:56.646663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.081 qpair failed and we were unable to recover it. 00:34:05.081 [2024-07-13 08:20:56.646816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.081 [2024-07-13 08:20:56.646841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.081 qpair failed and we were unable to recover it. 00:34:05.081 [2024-07-13 08:20:56.646967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.081 [2024-07-13 08:20:56.646994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.081 qpair failed and we were unable to recover it. 00:34:05.081 [2024-07-13 08:20:56.647122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.081 [2024-07-13 08:20:56.647147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.081 qpair failed and we were unable to recover it. 00:34:05.081 [2024-07-13 08:20:56.647298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.081 [2024-07-13 08:20:56.647324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.081 qpair failed and we were unable to recover it. 
00:34:05.081 [2024-07-13 08:20:56.647445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.081 [2024-07-13 08:20:56.647470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.081 qpair failed and we were unable to recover it. 00:34:05.081 [2024-07-13 08:20:56.647621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.081 [2024-07-13 08:20:56.647647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.081 qpair failed and we were unable to recover it. 00:34:05.081 [2024-07-13 08:20:56.647826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.081 [2024-07-13 08:20:56.647852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.081 qpair failed and we were unable to recover it. 00:34:05.081 [2024-07-13 08:20:56.647989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.081 [2024-07-13 08:20:56.648015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.081 qpair failed and we were unable to recover it. 00:34:05.081 [2024-07-13 08:20:56.648170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.081 [2024-07-13 08:20:56.648196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.081 qpair failed and we were unable to recover it. 00:34:05.081 [2024-07-13 08:20:56.648379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.081 [2024-07-13 08:20:56.648405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.081 qpair failed and we were unable to recover it. 00:34:05.081 [2024-07-13 08:20:56.648534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.082 [2024-07-13 08:20:56.648559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.082 qpair failed and we were unable to recover it. 00:34:05.082 [2024-07-13 08:20:56.648707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.082 [2024-07-13 08:20:56.648733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.082 qpair failed and we were unable to recover it. 00:34:05.082 [2024-07-13 08:20:56.648905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.082 [2024-07-13 08:20:56.648931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.082 qpair failed and we were unable to recover it. 00:34:05.082 [2024-07-13 08:20:56.649084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.082 [2024-07-13 08:20:56.649109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.082 qpair failed and we were unable to recover it. 
00:34:05.082 [2024-07-13 08:20:56.649260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.082 [2024-07-13 08:20:56.649285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.082 qpair failed and we were unable to recover it. 00:34:05.082 [2024-07-13 08:20:56.649435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.082 [2024-07-13 08:20:56.649461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.082 qpair failed and we were unable to recover it. 00:34:05.082 [2024-07-13 08:20:56.649615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.082 [2024-07-13 08:20:56.649641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.082 qpair failed and we were unable to recover it. 00:34:05.082 [2024-07-13 08:20:56.649795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.082 [2024-07-13 08:20:56.649821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.082 qpair failed and we were unable to recover it. 00:34:05.082 [2024-07-13 08:20:56.649963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.082 [2024-07-13 08:20:56.649990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.082 qpair failed and we were unable to recover it. 00:34:05.082 [2024-07-13 08:20:56.650108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.082 [2024-07-13 08:20:56.650137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.082 qpair failed and we were unable to recover it. 00:34:05.082 [2024-07-13 08:20:56.650292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.082 [2024-07-13 08:20:56.650318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.082 qpair failed and we were unable to recover it. 00:34:05.082 [2024-07-13 08:20:56.650448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.082 [2024-07-13 08:20:56.650473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.082 qpair failed and we were unable to recover it. 00:34:05.082 [2024-07-13 08:20:56.650619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.082 [2024-07-13 08:20:56.650644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.082 qpair failed and we were unable to recover it. 00:34:05.082 [2024-07-13 08:20:56.650820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.082 [2024-07-13 08:20:56.650845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.082 qpair failed and we were unable to recover it. 
00:34:05.082 [2024-07-13 08:20:56.650975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.082 [2024-07-13 08:20:56.651001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.082 qpair failed and we were unable to recover it. 00:34:05.082 [2024-07-13 08:20:56.651147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.082 [2024-07-13 08:20:56.651172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.082 qpair failed and we were unable to recover it. 00:34:05.082 [2024-07-13 08:20:56.651334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.082 [2024-07-13 08:20:56.651360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.082 qpair failed and we were unable to recover it. 00:34:05.082 [2024-07-13 08:20:56.651482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.082 [2024-07-13 08:20:56.651508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.082 qpair failed and we were unable to recover it. 00:34:05.082 [2024-07-13 08:20:56.651638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.082 [2024-07-13 08:20:56.651665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.082 qpair failed and we were unable to recover it. 00:34:05.082 [2024-07-13 08:20:56.651806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.082 [2024-07-13 08:20:56.651832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.082 qpair failed and we were unable to recover it. 00:34:05.082 [2024-07-13 08:20:56.651993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.082 [2024-07-13 08:20:56.652019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.082 qpair failed and we were unable to recover it. 00:34:05.082 [2024-07-13 08:20:56.652246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.082 [2024-07-13 08:20:56.652271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.082 qpair failed and we were unable to recover it. 00:34:05.082 [2024-07-13 08:20:56.652399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.082 [2024-07-13 08:20:56.652426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.082 qpair failed and we were unable to recover it. 00:34:05.082 [2024-07-13 08:20:56.652580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.082 [2024-07-13 08:20:56.652606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.082 qpair failed and we were unable to recover it. 
00:34:05.082 [2024-07-13 08:20:56.652775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.082 [2024-07-13 08:20:56.652801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.082 qpair failed and we were unable to recover it. 00:34:05.082 [2024-07-13 08:20:56.652945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.082 [2024-07-13 08:20:56.652972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.082 qpair failed and we were unable to recover it. 00:34:05.082 [2024-07-13 08:20:56.653123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.082 [2024-07-13 08:20:56.653148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.082 qpair failed and we were unable to recover it. 00:34:05.082 [2024-07-13 08:20:56.653295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.082 [2024-07-13 08:20:56.653320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.082 qpair failed and we were unable to recover it. 00:34:05.082 [2024-07-13 08:20:56.653496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.082 [2024-07-13 08:20:56.653522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.082 qpair failed and we were unable to recover it. 00:34:05.082 [2024-07-13 08:20:56.653646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.082 [2024-07-13 08:20:56.653672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.082 qpair failed and we were unable to recover it. 00:34:05.082 [2024-07-13 08:20:56.653819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.082 [2024-07-13 08:20:56.653844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.082 qpair failed and we were unable to recover it. 00:34:05.082 [2024-07-13 08:20:56.654013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.082 [2024-07-13 08:20:56.654053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:05.082 qpair failed and we were unable to recover it. 00:34:05.082 [2024-07-13 08:20:56.654214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.082 [2024-07-13 08:20:56.654241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:05.082 qpair failed and we were unable to recover it. 00:34:05.082 [2024-07-13 08:20:56.654417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.082 [2024-07-13 08:20:56.654444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:05.083 qpair failed and we were unable to recover it. 
00:34:05.083 [2024-07-13 08:20:56.654592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.083 [2024-07-13 08:20:56.654618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:05.083 qpair failed and we were unable to recover it. 00:34:05.083 [2024-07-13 08:20:56.654783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.083 [2024-07-13 08:20:56.654809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:05.083 qpair failed and we were unable to recover it. 00:34:05.083 [2024-07-13 08:20:56.654993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.083 [2024-07-13 08:20:56.655021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:05.083 qpair failed and we were unable to recover it. 00:34:05.083 [2024-07-13 08:20:56.655197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.083 [2024-07-13 08:20:56.655223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:05.083 qpair failed and we were unable to recover it. 00:34:05.083 [2024-07-13 08:20:56.655382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.083 [2024-07-13 08:20:56.655410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:05.083 qpair failed and we were unable to recover it. 00:34:05.083 [2024-07-13 08:20:56.655558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.083 [2024-07-13 08:20:56.655585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:05.083 qpair failed and we were unable to recover it. 00:34:05.083 [2024-07-13 08:20:56.655739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.083 [2024-07-13 08:20:56.655766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:05.083 qpair failed and we were unable to recover it. 00:34:05.083 [2024-07-13 08:20:56.655959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.083 [2024-07-13 08:20:56.655986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:05.083 qpair failed and we were unable to recover it. 00:34:05.083 [2024-07-13 08:20:56.656107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.083 [2024-07-13 08:20:56.656133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:05.083 qpair failed and we were unable to recover it. 00:34:05.083 [2024-07-13 08:20:56.656274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.083 [2024-07-13 08:20:56.656301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:05.083 qpair failed and we were unable to recover it. 
00:34:05.083 [2024-07-13 08:20:56.656453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.083 [2024-07-13 08:20:56.656480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:05.083 qpair failed and we were unable to recover it. 00:34:05.083 [2024-07-13 08:20:56.656656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.083 [2024-07-13 08:20:56.656682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:05.083 qpair failed and we were unable to recover it. 00:34:05.083 [2024-07-13 08:20:56.656811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.083 [2024-07-13 08:20:56.656837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:05.083 qpair failed and we were unable to recover it. 00:34:05.083 [2024-07-13 08:20:56.656969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.083 [2024-07-13 08:20:56.656996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:05.083 qpair failed and we were unable to recover it. 00:34:05.083 [2024-07-13 08:20:56.657145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.083 [2024-07-13 08:20:56.657171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:05.083 qpair failed and we were unable to recover it. 00:34:05.083 [2024-07-13 08:20:56.657323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.083 [2024-07-13 08:20:56.657354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:05.083 qpair failed and we were unable to recover it. 00:34:05.083 [2024-07-13 08:20:56.657515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.083 [2024-07-13 08:20:56.657542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:05.083 qpair failed and we were unable to recover it. 00:34:05.083 [2024-07-13 08:20:56.657721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.083 [2024-07-13 08:20:56.657747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:05.083 qpair failed and we were unable to recover it. 00:34:05.083 [2024-07-13 08:20:56.657901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.083 [2024-07-13 08:20:56.657928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:05.083 qpair failed and we were unable to recover it. 00:34:05.083 [2024-07-13 08:20:56.658085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.083 [2024-07-13 08:20:56.658112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:05.083 qpair failed and we were unable to recover it. 
00:34:05.083 [2024-07-13 08:20:56.658265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:05.083 [2024-07-13 08:20:56.658291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420
00:34:05.083 qpair failed and we were unable to recover it.
00:34:05.083 [... the same three-record failure (connect() failed with errno = 111, sock connection error, "qpair failed and we were unable to recover it") repeats ~200 more times between 08:20:56.658 and 08:20:56.698, cycling through tqpair handles 0x7f8fdc000b90, 0x7f8fec000b90, 0x7f8fe4000b90, and 0xacd600, always against addr=10.0.0.2, port=4420 ...]
00:34:05.089 [2024-07-13 08:20:56.698354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:05.089 [2024-07-13 08:20:56.698379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420
00:34:05.089 qpair failed and we were unable to recover it.
00:34:05.089 [2024-07-13 08:20:56.698593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.089 [2024-07-13 08:20:56.698648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.089 qpair failed and we were unable to recover it. 00:34:05.089 [2024-07-13 08:20:56.698807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.089 [2024-07-13 08:20:56.698834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.089 qpair failed and we were unable to recover it. 00:34:05.089 [2024-07-13 08:20:56.699020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.089 [2024-07-13 08:20:56.699047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.089 qpair failed and we were unable to recover it. 00:34:05.089 [2024-07-13 08:20:56.699193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.089 [2024-07-13 08:20:56.699237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.089 qpair failed and we were unable to recover it. 00:34:05.089 [2024-07-13 08:20:56.699374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.089 [2024-07-13 08:20:56.699416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.089 qpair failed and we were unable to recover it. 00:34:05.089 [2024-07-13 08:20:56.699586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.089 [2024-07-13 08:20:56.699629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.089 qpair failed and we were unable to recover it. 00:34:05.089 [2024-07-13 08:20:56.699782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.089 [2024-07-13 08:20:56.699808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.089 qpair failed and we were unable to recover it. 00:34:05.089 [2024-07-13 08:20:56.699938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.089 [2024-07-13 08:20:56.699966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.089 qpair failed and we were unable to recover it. 00:34:05.089 [2024-07-13 08:20:56.700085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.089 [2024-07-13 08:20:56.700110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.089 qpair failed and we were unable to recover it. 00:34:05.089 [2024-07-13 08:20:56.700279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.089 [2024-07-13 08:20:56.700309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.089 qpair failed and we were unable to recover it. 
00:34:05.089 [2024-07-13 08:20:56.700478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.089 [2024-07-13 08:20:56.700506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.090 qpair failed and we were unable to recover it. 00:34:05.090 [2024-07-13 08:20:56.700661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.090 [2024-07-13 08:20:56.700689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.090 qpair failed and we were unable to recover it. 00:34:05.090 [2024-07-13 08:20:56.700822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.090 [2024-07-13 08:20:56.700847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.090 qpair failed and we were unable to recover it. 00:34:05.090 [2024-07-13 08:20:56.701002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.090 [2024-07-13 08:20:56.701028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.090 qpair failed and we were unable to recover it. 00:34:05.090 [2024-07-13 08:20:56.701178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.090 [2024-07-13 08:20:56.701206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.090 qpair failed and we were unable to recover it. 00:34:05.090 [2024-07-13 08:20:56.701386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.090 [2024-07-13 08:20:56.701413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.090 qpair failed and we were unable to recover it. 00:34:05.090 [2024-07-13 08:20:56.701556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.090 [2024-07-13 08:20:56.701584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.090 qpair failed and we were unable to recover it. 00:34:05.090 [2024-07-13 08:20:56.701744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.090 [2024-07-13 08:20:56.701771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.090 qpair failed and we were unable to recover it. 00:34:05.090 [2024-07-13 08:20:56.701942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.090 [2024-07-13 08:20:56.701967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.090 qpair failed and we were unable to recover it. 00:34:05.090 [2024-07-13 08:20:56.702107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.090 [2024-07-13 08:20:56.702132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.090 qpair failed and we were unable to recover it. 
00:34:05.090 [2024-07-13 08:20:56.702278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.090 [2024-07-13 08:20:56.702305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.090 qpair failed and we were unable to recover it. 00:34:05.090 [2024-07-13 08:20:56.702489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.090 [2024-07-13 08:20:56.702517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.090 qpair failed and we were unable to recover it. 00:34:05.090 [2024-07-13 08:20:56.702714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.090 [2024-07-13 08:20:56.702741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.090 qpair failed and we were unable to recover it. 00:34:05.090 [2024-07-13 08:20:56.702895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.090 [2024-07-13 08:20:56.702938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.090 qpair failed and we were unable to recover it. 00:34:05.090 [2024-07-13 08:20:56.703084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.090 [2024-07-13 08:20:56.703110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.090 qpair failed and we were unable to recover it. 00:34:05.090 [2024-07-13 08:20:56.703305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.090 [2024-07-13 08:20:56.703332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.090 qpair failed and we were unable to recover it. 00:34:05.090 [2024-07-13 08:20:56.703472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.090 [2024-07-13 08:20:56.703497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.090 qpair failed and we were unable to recover it. 00:34:05.090 [2024-07-13 08:20:56.703674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.090 [2024-07-13 08:20:56.703706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.090 qpair failed and we were unable to recover it. 00:34:05.090 [2024-07-13 08:20:56.703873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.090 [2024-07-13 08:20:56.703922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.090 qpair failed and we were unable to recover it. 00:34:05.090 [2024-07-13 08:20:56.704075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.090 [2024-07-13 08:20:56.704101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.090 qpair failed and we were unable to recover it. 
00:34:05.090 [2024-07-13 08:20:56.704238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.090 [2024-07-13 08:20:56.704265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.090 qpair failed and we were unable to recover it. 00:34:05.090 [2024-07-13 08:20:56.704404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.090 [2024-07-13 08:20:56.704431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.090 qpair failed and we were unable to recover it. 00:34:05.090 [2024-07-13 08:20:56.704599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.090 [2024-07-13 08:20:56.704627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.090 qpair failed and we were unable to recover it. 00:34:05.090 [2024-07-13 08:20:56.704785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.090 [2024-07-13 08:20:56.704812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.090 qpair failed and we were unable to recover it. 00:34:05.090 [2024-07-13 08:20:56.705013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.090 [2024-07-13 08:20:56.705039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.090 qpair failed and we were unable to recover it. 00:34:05.090 [2024-07-13 08:20:56.705182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.090 [2024-07-13 08:20:56.705207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.090 qpair failed and we were unable to recover it. 00:34:05.090 [2024-07-13 08:20:56.705370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.090 [2024-07-13 08:20:56.705397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.090 qpair failed and we were unable to recover it. 00:34:05.090 [2024-07-13 08:20:56.705572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.090 [2024-07-13 08:20:56.705600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.090 qpair failed and we were unable to recover it. 00:34:05.090 [2024-07-13 08:20:56.705764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.090 [2024-07-13 08:20:56.705792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.090 qpair failed and we were unable to recover it. 00:34:05.090 [2024-07-13 08:20:56.705953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.090 [2024-07-13 08:20:56.705978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.090 qpair failed and we were unable to recover it. 
00:34:05.090 [2024-07-13 08:20:56.706121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.090 [2024-07-13 08:20:56.706163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.090 qpair failed and we were unable to recover it. 00:34:05.090 [2024-07-13 08:20:56.706363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.090 [2024-07-13 08:20:56.706391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.090 qpair failed and we were unable to recover it. 00:34:05.090 [2024-07-13 08:20:56.706578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.090 [2024-07-13 08:20:56.706605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.090 qpair failed and we were unable to recover it. 00:34:05.090 [2024-07-13 08:20:56.706760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.090 [2024-07-13 08:20:56.706788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.090 qpair failed and we were unable to recover it. 00:34:05.090 [2024-07-13 08:20:56.706934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.090 [2024-07-13 08:20:56.706960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.090 qpair failed and we were unable to recover it. 00:34:05.090 [2024-07-13 08:20:56.707084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.090 [2024-07-13 08:20:56.707109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.090 qpair failed and we were unable to recover it. 00:34:05.090 [2024-07-13 08:20:56.707255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.090 [2024-07-13 08:20:56.707280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.090 qpair failed and we were unable to recover it. 00:34:05.090 [2024-07-13 08:20:56.707450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.090 [2024-07-13 08:20:56.707477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.090 qpair failed and we were unable to recover it. 00:34:05.090 [2024-07-13 08:20:56.707634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.091 [2024-07-13 08:20:56.707662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.091 qpair failed and we were unable to recover it. 00:34:05.091 [2024-07-13 08:20:56.707816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.091 [2024-07-13 08:20:56.707844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.091 qpair failed and we were unable to recover it. 
00:34:05.091 [2024-07-13 08:20:56.708007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.091 [2024-07-13 08:20:56.708033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.091 qpair failed and we were unable to recover it. 00:34:05.091 [2024-07-13 08:20:56.708182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.091 [2024-07-13 08:20:56.708207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.091 qpair failed and we were unable to recover it. 00:34:05.091 [2024-07-13 08:20:56.708358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.091 [2024-07-13 08:20:56.708383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.091 qpair failed and we were unable to recover it. 00:34:05.091 [2024-07-13 08:20:56.708551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.091 [2024-07-13 08:20:56.708579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.091 qpair failed and we were unable to recover it. 00:34:05.091 [2024-07-13 08:20:56.708741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.091 [2024-07-13 08:20:56.708773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.091 qpair failed and we were unable to recover it. 00:34:05.091 [2024-07-13 08:20:56.708948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.091 [2024-07-13 08:20:56.708974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.091 qpair failed and we were unable to recover it. 00:34:05.091 [2024-07-13 08:20:56.709127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.091 [2024-07-13 08:20:56.709152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.091 qpair failed and we were unable to recover it. 00:34:05.091 [2024-07-13 08:20:56.709345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.091 [2024-07-13 08:20:56.709373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.091 qpair failed and we were unable to recover it. 00:34:05.091 [2024-07-13 08:20:56.709500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.091 [2024-07-13 08:20:56.709528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.091 qpair failed and we were unable to recover it. 00:34:05.091 [2024-07-13 08:20:56.709688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.091 [2024-07-13 08:20:56.709716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.091 qpair failed and we were unable to recover it. 
00:34:05.091 [2024-07-13 08:20:56.709904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.091 [2024-07-13 08:20:56.709944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.091 qpair failed and we were unable to recover it. 00:34:05.091 [2024-07-13 08:20:56.710104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.091 [2024-07-13 08:20:56.710131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.091 qpair failed and we were unable to recover it. 00:34:05.091 [2024-07-13 08:20:56.710298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.091 [2024-07-13 08:20:56.710342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.091 qpair failed and we were unable to recover it. 00:34:05.091 [2024-07-13 08:20:56.710525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.091 [2024-07-13 08:20:56.710552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.091 qpair failed and we were unable to recover it. 00:34:05.091 [2024-07-13 08:20:56.710727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.091 [2024-07-13 08:20:56.710753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.091 qpair failed and we were unable to recover it. 00:34:05.091 [2024-07-13 08:20:56.710874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.091 [2024-07-13 08:20:56.710900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.091 qpair failed and we were unable to recover it. 00:34:05.091 [2024-07-13 08:20:56.711100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.091 [2024-07-13 08:20:56.711142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.091 qpair failed and we were unable to recover it. 00:34:05.091 [2024-07-13 08:20:56.711312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.091 [2024-07-13 08:20:56.711355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.091 qpair failed and we were unable to recover it. 00:34:05.091 [2024-07-13 08:20:56.711501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.091 [2024-07-13 08:20:56.711544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.091 qpair failed and we were unable to recover it. 00:34:05.091 [2024-07-13 08:20:56.711691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.091 [2024-07-13 08:20:56.711717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.091 qpair failed and we were unable to recover it. 
00:34:05.091 [2024-07-13 08:20:56.711920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.091 [2024-07-13 08:20:56.711965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.091 qpair failed and we were unable to recover it. 00:34:05.091 [2024-07-13 08:20:56.712137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.091 [2024-07-13 08:20:56.712180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.091 qpair failed and we were unable to recover it. 00:34:05.091 [2024-07-13 08:20:56.712355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.091 [2024-07-13 08:20:56.712397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.091 qpair failed and we were unable to recover it. 00:34:05.091 [2024-07-13 08:20:56.712553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.091 [2024-07-13 08:20:56.712579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.091 qpair failed and we were unable to recover it. 00:34:05.091 [2024-07-13 08:20:56.712694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.091 [2024-07-13 08:20:56.712720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.091 qpair failed and we were unable to recover it. 00:34:05.091 [2024-07-13 08:20:56.712892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.091 [2024-07-13 08:20:56.712936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.091 qpair failed and we were unable to recover it. 00:34:05.091 [2024-07-13 08:20:56.713135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.091 [2024-07-13 08:20:56.713178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.091 qpair failed and we were unable to recover it. 00:34:05.091 [2024-07-13 08:20:56.713350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.091 [2024-07-13 08:20:56.713392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.091 qpair failed and we were unable to recover it. 00:34:05.091 [2024-07-13 08:20:56.713538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.091 [2024-07-13 08:20:56.713563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.091 qpair failed and we were unable to recover it. 00:34:05.091 [2024-07-13 08:20:56.713711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.091 [2024-07-13 08:20:56.713737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.091 qpair failed and we were unable to recover it. 
00:34:05.091 [2024-07-13 08:20:56.713876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.091 [2024-07-13 08:20:56.713902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.091 qpair failed and we were unable to recover it. 00:34:05.091 [2024-07-13 08:20:56.714102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.091 [2024-07-13 08:20:56.714150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.091 qpair failed and we were unable to recover it. 00:34:05.091 [2024-07-13 08:20:56.714320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.092 [2024-07-13 08:20:56.714363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.092 qpair failed and we were unable to recover it. 00:34:05.092 [2024-07-13 08:20:56.714546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.092 [2024-07-13 08:20:56.714571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.092 qpair failed and we were unable to recover it. 00:34:05.092 [2024-07-13 08:20:56.714750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.092 [2024-07-13 08:20:56.714776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.092 qpair failed and we were unable to recover it. 00:34:05.092 [2024-07-13 08:20:56.714949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.092 [2024-07-13 08:20:56.714993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.092 qpair failed and we were unable to recover it. 00:34:05.092 [2024-07-13 08:20:56.715139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.092 [2024-07-13 08:20:56.715168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.092 qpair failed and we were unable to recover it. 00:34:05.092 [2024-07-13 08:20:56.715398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.092 [2024-07-13 08:20:56.715441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.092 qpair failed and we were unable to recover it. 00:34:05.092 [2024-07-13 08:20:56.715568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.092 [2024-07-13 08:20:56.715593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.092 qpair failed and we were unable to recover it. 00:34:05.092 [2024-07-13 08:20:56.715744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.092 [2024-07-13 08:20:56.715770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.092 qpair failed and we were unable to recover it. 
00:34:05.092 [2024-07-13 08:20:56.715914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.092 [2024-07-13 08:20:56.715941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.092 qpair failed and we were unable to recover it. 00:34:05.092 [2024-07-13 08:20:56.716095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.092 [2024-07-13 08:20:56.716121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.092 qpair failed and we were unable to recover it. 00:34:05.092 [2024-07-13 08:20:56.716308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.092 [2024-07-13 08:20:56.716334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.092 qpair failed and we were unable to recover it. 00:34:05.092 [2024-07-13 08:20:56.716505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.092 [2024-07-13 08:20:56.716536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.092 qpair failed and we were unable to recover it. 00:34:05.092 [2024-07-13 08:20:56.716735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.092 [2024-07-13 08:20:56.716761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.092 qpair failed and we were unable to recover it. 00:34:05.092 [2024-07-13 08:20:56.716892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.092 [2024-07-13 08:20:56.716938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.092 qpair failed and we were unable to recover it. 00:34:05.092 [2024-07-13 08:20:56.717129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.092 [2024-07-13 08:20:56.717157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.092 qpair failed and we were unable to recover it. 00:34:05.092 [2024-07-13 08:20:56.717321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.092 [2024-07-13 08:20:56.717349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.092 qpair failed and we were unable to recover it. 00:34:05.092 [2024-07-13 08:20:56.717509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.092 [2024-07-13 08:20:56.717537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.092 qpair failed and we were unable to recover it. 00:34:05.092 [2024-07-13 08:20:56.717727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.092 [2024-07-13 08:20:56.717754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.092 qpair failed and we were unable to recover it. 
00:34:05.092 [2024-07-13 08:20:56.717955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.092 [2024-07-13 08:20:56.717981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.092 qpair failed and we were unable to recover it. 00:34:05.092 [2024-07-13 08:20:56.718100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.092 [2024-07-13 08:20:56.718124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.092 qpair failed and we were unable to recover it. 00:34:05.092 [2024-07-13 08:20:56.718297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.092 [2024-07-13 08:20:56.718325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.092 qpair failed and we were unable to recover it. 00:34:05.092 [2024-07-13 08:20:56.718464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.092 [2024-07-13 08:20:56.718492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.092 qpair failed and we were unable to recover it. 00:34:05.092 [2024-07-13 08:20:56.718683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.092 [2024-07-13 08:20:56.718711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.092 qpair failed and we were unable to recover it. 00:34:05.092 [2024-07-13 08:20:56.718881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.092 [2024-07-13 08:20:56.718907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.092 qpair failed and we were unable to recover it. 00:34:05.092 [2024-07-13 08:20:56.719082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.092 [2024-07-13 08:20:56.719107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.092 qpair failed and we were unable to recover it. 00:34:05.092 [2024-07-13 08:20:56.719256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.092 [2024-07-13 08:20:56.719284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.092 qpair failed and we were unable to recover it. 00:34:05.092 [2024-07-13 08:20:56.719422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.092 [2024-07-13 08:20:56.719469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.092 qpair failed and we were unable to recover it. 00:34:05.092 [2024-07-13 08:20:56.719629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.092 [2024-07-13 08:20:56.719657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.092 qpair failed and we were unable to recover it. 
00:34:05.092 [2024-07-13 08:20:56.719822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.092 [2024-07-13 08:20:56.719850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.092 qpair failed and we were unable to recover it. 00:34:05.092 [2024-07-13 08:20:56.720011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.092 [2024-07-13 08:20:56.720051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.092 qpair failed and we were unable to recover it. 00:34:05.092 [2024-07-13 08:20:56.720208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.092 [2024-07-13 08:20:56.720235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.092 qpair failed and we were unable to recover it. 00:34:05.092 [2024-07-13 08:20:56.720379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.092 [2024-07-13 08:20:56.720427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.092 qpair failed and we were unable to recover it. 00:34:05.092 [2024-07-13 08:20:56.720623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.092 [2024-07-13 08:20:56.720666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.092 qpair failed and we were unable to recover it. 00:34:05.092 [2024-07-13 08:20:56.720813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.092 [2024-07-13 08:20:56.720838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.092 qpair failed and we were unable to recover it. 00:34:05.092 [2024-07-13 08:20:56.720991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.092 [2024-07-13 08:20:56.721018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.093 qpair failed and we were unable to recover it. 00:34:05.093 [2024-07-13 08:20:56.721190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.093 [2024-07-13 08:20:56.721215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.093 qpair failed and we were unable to recover it. 00:34:05.093 [2024-07-13 08:20:56.721380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.093 [2024-07-13 08:20:56.721422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.093 qpair failed and we were unable to recover it. 00:34:05.093 [2024-07-13 08:20:56.721600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.093 [2024-07-13 08:20:56.721644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.093 qpair failed and we were unable to recover it. 
00:34:05.093 [2024-07-13 08:20:56.721795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.093 [2024-07-13 08:20:56.721823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.093 qpair failed and we were unable to recover it. 00:34:05.093 [2024-07-13 08:20:56.721964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.093 [2024-07-13 08:20:56.721990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.093 qpair failed and we were unable to recover it. 00:34:05.093 [2024-07-13 08:20:56.722197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.093 [2024-07-13 08:20:56.722225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.093 qpair failed and we were unable to recover it. 00:34:05.093 [2024-07-13 08:20:56.722413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.093 [2024-07-13 08:20:56.722441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.093 qpair failed and we were unable to recover it. 00:34:05.093 [2024-07-13 08:20:56.722603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.093 [2024-07-13 08:20:56.722631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.093 qpair failed and we were unable to recover it. 00:34:05.093 [2024-07-13 08:20:56.722795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.093 [2024-07-13 08:20:56.722824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.093 qpair failed and we were unable to recover it. 00:34:05.093 [2024-07-13 08:20:56.723021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.093 [2024-07-13 08:20:56.723047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.093 qpair failed and we were unable to recover it. 00:34:05.093 [2024-07-13 08:20:56.723207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.093 [2024-07-13 08:20:56.723235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.093 qpair failed and we were unable to recover it. 00:34:05.093 [2024-07-13 08:20:56.723397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.093 [2024-07-13 08:20:56.723425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.093 qpair failed and we were unable to recover it. 00:34:05.093 [2024-07-13 08:20:56.723609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.093 [2024-07-13 08:20:56.723637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.093 qpair failed and we were unable to recover it. 
00:34:05.093 [2024-07-13 08:20:56.723799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.093 [2024-07-13 08:20:56.723827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.093 qpair failed and we were unable to recover it. 00:34:05.093 [2024-07-13 08:20:56.724020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.093 [2024-07-13 08:20:56.724045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.093 qpair failed and we were unable to recover it. 00:34:05.093 [2024-07-13 08:20:56.724189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.093 [2024-07-13 08:20:56.724215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.093 qpair failed and we were unable to recover it. 00:34:05.093 [2024-07-13 08:20:56.724382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.093 [2024-07-13 08:20:56.724410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.093 qpair failed and we were unable to recover it. 00:34:05.093 [2024-07-13 08:20:56.724601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.093 [2024-07-13 08:20:56.724630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.093 qpair failed and we were unable to recover it. 00:34:05.093 [2024-07-13 08:20:56.724839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.093 [2024-07-13 08:20:56.724883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.093 qpair failed and we were unable to recover it. 00:34:05.093 [2024-07-13 08:20:56.725075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.093 [2024-07-13 08:20:56.725101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.093 qpair failed and we were unable to recover it. 00:34:05.093 [2024-07-13 08:20:56.725276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.093 [2024-07-13 08:20:56.725303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.093 qpair failed and we were unable to recover it. 00:34:05.093 [2024-07-13 08:20:56.725464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.093 [2024-07-13 08:20:56.725492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.093 qpair failed and we were unable to recover it. 00:34:05.093 [2024-07-13 08:20:56.725636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.093 [2024-07-13 08:20:56.725664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.093 qpair failed and we were unable to recover it. 
00:34:05.093 [2024-07-13 08:20:56.725822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.093 [2024-07-13 08:20:56.725850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.093 qpair failed and we were unable to recover it. 00:34:05.093 [2024-07-13 08:20:56.726035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.093 [2024-07-13 08:20:56.726061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.093 qpair failed and we were unable to recover it. 00:34:05.093 [2024-07-13 08:20:56.726206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.093 [2024-07-13 08:20:56.726248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.093 qpair failed and we were unable to recover it. 00:34:05.093 [2024-07-13 08:20:56.726405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.093 [2024-07-13 08:20:56.726433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.093 qpair failed and we were unable to recover it. 00:34:05.093 [2024-07-13 08:20:56.726660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.093 [2024-07-13 08:20:56.726688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.093 qpair failed and we were unable to recover it. 00:34:05.093 [2024-07-13 08:20:56.726853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.093 [2024-07-13 08:20:56.726889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.093 qpair failed and we were unable to recover it. 00:34:05.093 [2024-07-13 08:20:56.727059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.093 [2024-07-13 08:20:56.727086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.093 qpair failed and we were unable to recover it. 00:34:05.093 [2024-07-13 08:20:56.727263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.093 [2024-07-13 08:20:56.727288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.093 qpair failed and we were unable to recover it. 00:34:05.093 [2024-07-13 08:20:56.727484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.093 [2024-07-13 08:20:56.727512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.093 qpair failed and we were unable to recover it. 00:34:05.093 [2024-07-13 08:20:56.727724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.093 [2024-07-13 08:20:56.727749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.093 qpair failed and we were unable to recover it. 
00:34:05.093 [2024-07-13 08:20:56.727904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.093 [2024-07-13 08:20:56.727930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.093 qpair failed and we were unable to recover it. 00:34:05.093 [2024-07-13 08:20:56.728054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.093 [2024-07-13 08:20:56.728081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.093 qpair failed and we were unable to recover it. 00:34:05.093 [2024-07-13 08:20:56.728261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.093 [2024-07-13 08:20:56.728289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.093 qpair failed and we were unable to recover it. 00:34:05.093 [2024-07-13 08:20:56.728452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.093 [2024-07-13 08:20:56.728479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.093 qpair failed and we were unable to recover it. 00:34:05.093 [2024-07-13 08:20:56.728609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.094 [2024-07-13 08:20:56.728637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.094 qpair failed and we were unable to recover it. 00:34:05.094 [2024-07-13 08:20:56.728828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.094 [2024-07-13 08:20:56.728856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.094 qpair failed and we were unable to recover it. 00:34:05.094 [2024-07-13 08:20:56.729029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.094 [2024-07-13 08:20:56.729068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.094 qpair failed and we were unable to recover it. 00:34:05.094 [2024-07-13 08:20:56.729253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.094 [2024-07-13 08:20:56.729296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.094 qpair failed and we were unable to recover it. 00:34:05.094 [2024-07-13 08:20:56.729477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.094 [2024-07-13 08:20:56.729519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.094 qpair failed and we were unable to recover it. 00:34:05.094 [2024-07-13 08:20:56.729693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.094 [2024-07-13 08:20:56.729736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.094 qpair failed and we were unable to recover it. 
00:34:05.094 [2024-07-13 08:20:56.729914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.094 [2024-07-13 08:20:56.729940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.094 qpair failed and we were unable to recover it. 00:34:05.094 [2024-07-13 08:20:56.730085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.094 [2024-07-13 08:20:56.730128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.094 qpair failed and we were unable to recover it. 00:34:05.094 [2024-07-13 08:20:56.730333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.094 [2024-07-13 08:20:56.730381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.094 qpair failed and we were unable to recover it. 00:34:05.094 [2024-07-13 08:20:56.730579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.094 [2024-07-13 08:20:56.730626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.094 qpair failed and we were unable to recover it. 00:34:05.094 [2024-07-13 08:20:56.730777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.094 [2024-07-13 08:20:56.730801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.094 qpair failed and we were unable to recover it. 00:34:05.094 [2024-07-13 08:20:56.730975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.094 [2024-07-13 08:20:56.731020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.094 qpair failed and we were unable to recover it. 00:34:05.094 [2024-07-13 08:20:56.731196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.094 [2024-07-13 08:20:56.731240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.094 qpair failed and we were unable to recover it. 00:34:05.094 [2024-07-13 08:20:56.731409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.094 [2024-07-13 08:20:56.731452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.094 qpair failed and we were unable to recover it. 00:34:05.094 [2024-07-13 08:20:56.731623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.094 [2024-07-13 08:20:56.731666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.094 qpair failed and we were unable to recover it. 00:34:05.094 [2024-07-13 08:20:56.731783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.094 [2024-07-13 08:20:56.731809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.094 qpair failed and we were unable to recover it. 
00:34:05.094 [2024-07-13 08:20:56.731976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.094 [2024-07-13 08:20:56.732019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.094 qpair failed and we were unable to recover it. 00:34:05.094 [2024-07-13 08:20:56.732158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.094 [2024-07-13 08:20:56.732205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.094 qpair failed and we were unable to recover it. 00:34:05.094 [2024-07-13 08:20:56.732378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.094 [2024-07-13 08:20:56.732421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.094 qpair failed and we were unable to recover it. 00:34:05.094 [2024-07-13 08:20:56.732541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.094 [2024-07-13 08:20:56.732567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.094 qpair failed and we were unable to recover it. 00:34:05.094 [2024-07-13 08:20:56.732718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.094 [2024-07-13 08:20:56.732744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.094 qpair failed and we were unable to recover it. 00:34:05.094 [2024-07-13 08:20:56.732908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.094 [2024-07-13 08:20:56.732937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.094 qpair failed and we were unable to recover it. 00:34:05.094 [2024-07-13 08:20:56.733131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.094 [2024-07-13 08:20:56.733173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.094 qpair failed and we were unable to recover it. 00:34:05.094 [2024-07-13 08:20:56.733343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.094 [2024-07-13 08:20:56.733386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.094 qpair failed and we were unable to recover it. 00:34:05.094 [2024-07-13 08:20:56.733563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.094 [2024-07-13 08:20:56.733588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.094 qpair failed and we were unable to recover it. 00:34:05.094 [2024-07-13 08:20:56.733772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.094 [2024-07-13 08:20:56.733797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.094 qpair failed and we were unable to recover it. 
00:34:05.094 [2024-07-13 08:20:56.733965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.094 [2024-07-13 08:20:56.734008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.094 qpair failed and we were unable to recover it. 00:34:05.094 [2024-07-13 08:20:56.734177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.094 [2024-07-13 08:20:56.734220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.094 qpair failed and we were unable to recover it. 00:34:05.094 [2024-07-13 08:20:56.734407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.094 [2024-07-13 08:20:56.734434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.094 qpair failed and we were unable to recover it. 00:34:05.094 [2024-07-13 08:20:56.734569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.094 [2024-07-13 08:20:56.734595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.094 qpair failed and we were unable to recover it. 00:34:05.094 [2024-07-13 08:20:56.734744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.094 [2024-07-13 08:20:56.734770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.094 qpair failed and we were unable to recover it. 00:34:05.095 [2024-07-13 08:20:56.734944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.095 [2024-07-13 08:20:56.734975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.095 qpair failed and we were unable to recover it. 00:34:05.095 [2024-07-13 08:20:56.735144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.095 [2024-07-13 08:20:56.735172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.095 qpair failed and we were unable to recover it. 00:34:05.095 [2024-07-13 08:20:56.735311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.095 [2024-07-13 08:20:56.735339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.095 qpair failed and we were unable to recover it. 00:34:05.095 [2024-07-13 08:20:56.735476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.095 [2024-07-13 08:20:56.735521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:05.095 qpair failed and we were unable to recover it. 00:34:05.095 [2024-07-13 08:20:56.735710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.095 [2024-07-13 08:20:56.735738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.095 qpair failed and we were unable to recover it. 
00:34:05.095 [2024-07-13 08:20:56.735893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.095 [2024-07-13 08:20:56.735921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.095 qpair failed and we were unable to recover it. 00:34:05.095 [2024-07-13 08:20:56.736093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.095 [2024-07-13 08:20:56.736137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.095 qpair failed and we were unable to recover it. 00:34:05.095 [2024-07-13 08:20:56.736309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.095 [2024-07-13 08:20:56.736351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.095 qpair failed and we were unable to recover it. 00:34:05.095 [2024-07-13 08:20:56.736530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.095 [2024-07-13 08:20:56.736576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.095 qpair failed and we were unable to recover it. 00:34:05.095 [2024-07-13 08:20:56.736709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.095 [2024-07-13 08:20:56.736735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.095 qpair failed and we were unable to recover it. 00:34:05.095 [2024-07-13 08:20:56.736891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.095 [2024-07-13 08:20:56.736917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.095 qpair failed and we were unable to recover it. 00:34:05.095 [2024-07-13 08:20:56.737067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.095 [2024-07-13 08:20:56.737109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.095 qpair failed and we were unable to recover it. 00:34:05.095 [2024-07-13 08:20:56.737314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.095 [2024-07-13 08:20:56.737357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.095 qpair failed and we were unable to recover it. 00:34:05.095 [2024-07-13 08:20:56.737510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.095 [2024-07-13 08:20:56.737536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.095 qpair failed and we were unable to recover it. 00:34:05.095 [2024-07-13 08:20:56.737709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.095 [2024-07-13 08:20:56.737734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.095 qpair failed and we were unable to recover it. 
00:34:05.095 [2024-07-13 08:20:56.737884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.095 [2024-07-13 08:20:56.737910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.095 qpair failed and we were unable to recover it. 00:34:05.095 [2024-07-13 08:20:56.738095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.095 [2024-07-13 08:20:56.738139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.095 qpair failed and we were unable to recover it. 00:34:05.095 [2024-07-13 08:20:56.738337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.095 [2024-07-13 08:20:56.738370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.095 qpair failed and we were unable to recover it. 00:34:05.095 [2024-07-13 08:20:56.738541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.095 [2024-07-13 08:20:56.738566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.095 qpair failed and we were unable to recover it. 00:34:05.095 [2024-07-13 08:20:56.738717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.095 [2024-07-13 08:20:56.738743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.095 qpair failed and we were unable to recover it. 00:34:05.095 [2024-07-13 08:20:56.738860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.095 [2024-07-13 08:20:56.738894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.095 qpair failed and we were unable to recover it. 00:34:05.095 [2024-07-13 08:20:56.739103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.095 [2024-07-13 08:20:56.739147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.095 qpair failed and we were unable to recover it. 00:34:05.095 [2024-07-13 08:20:56.739293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.095 [2024-07-13 08:20:56.739336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.095 qpair failed and we were unable to recover it. 00:34:05.095 [2024-07-13 08:20:56.739485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.095 [2024-07-13 08:20:56.739510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.095 qpair failed and we were unable to recover it. 00:34:05.095 [2024-07-13 08:20:56.739685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.095 [2024-07-13 08:20:56.739711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.095 qpair failed and we were unable to recover it. 
00:34:05.095 [2024-07-13 08:20:56.739860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.095 [2024-07-13 08:20:56.739894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.095 qpair failed and we were unable to recover it. 00:34:05.095 [2024-07-13 08:20:56.740021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.095 [2024-07-13 08:20:56.740048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.095 qpair failed and we were unable to recover it. 00:34:05.095 [2024-07-13 08:20:56.740223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.095 [2024-07-13 08:20:56.740267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.095 qpair failed and we were unable to recover it. 00:34:05.095 [2024-07-13 08:20:56.740445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.095 [2024-07-13 08:20:56.740488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.095 qpair failed and we were unable to recover it. 00:34:05.095 [2024-07-13 08:20:56.740663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.095 [2024-07-13 08:20:56.740687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.095 qpair failed and we were unable to recover it. 00:34:05.095 [2024-07-13 08:20:56.740838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.095 [2024-07-13 08:20:56.740862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.095 qpair failed and we were unable to recover it. 00:34:05.095 [2024-07-13 08:20:56.741081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.095 [2024-07-13 08:20:56.741110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.095 qpair failed and we were unable to recover it. 00:34:05.095 [2024-07-13 08:20:56.741303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.095 [2024-07-13 08:20:56.741347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.095 qpair failed and we were unable to recover it. 00:34:05.095 [2024-07-13 08:20:56.741528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.095 [2024-07-13 08:20:56.741571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.095 qpair failed and we were unable to recover it. 00:34:05.095 [2024-07-13 08:20:56.741746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.095 [2024-07-13 08:20:56.741772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.095 qpair failed and we were unable to recover it. 
00:34:05.095 [2024-07-13 08:20:56.741939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.095 [2024-07-13 08:20:56.741983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.095 qpair failed and we were unable to recover it. 00:34:05.095 [2024-07-13 08:20:56.742166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.095 [2024-07-13 08:20:56.742208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.095 qpair failed and we were unable to recover it. 00:34:05.095 [2024-07-13 08:20:56.742382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.095 [2024-07-13 08:20:56.742426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.095 qpair failed and we were unable to recover it. 00:34:05.095 [2024-07-13 08:20:56.742624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.095 [2024-07-13 08:20:56.742652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.095 qpair failed and we were unable to recover it. 00:34:05.095 [2024-07-13 08:20:56.742807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.095 [2024-07-13 08:20:56.742831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.096 qpair failed and we were unable to recover it. 00:34:05.096 [2024-07-13 08:20:56.743012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.096 [2024-07-13 08:20:56.743056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.096 qpair failed and we were unable to recover it. 00:34:05.096 [2024-07-13 08:20:56.743240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.096 [2024-07-13 08:20:56.743266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.096 qpair failed and we were unable to recover it. 00:34:05.096 [2024-07-13 08:20:56.743413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.096 [2024-07-13 08:20:56.743442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.096 qpair failed and we were unable to recover it. 00:34:05.096 [2024-07-13 08:20:56.743637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.096 [2024-07-13 08:20:56.743663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.096 qpair failed and we were unable to recover it. 00:34:05.096 [2024-07-13 08:20:56.743845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.096 [2024-07-13 08:20:56.743878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.096 qpair failed and we were unable to recover it. 
00:34:05.096 [2024-07-13 08:20:56.744034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.096 [2024-07-13 08:20:56.744075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.096 qpair failed and we were unable to recover it. 00:34:05.096 [2024-07-13 08:20:56.744255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.096 [2024-07-13 08:20:56.744299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.096 qpair failed and we were unable to recover it. 00:34:05.096 [2024-07-13 08:20:56.744484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.096 [2024-07-13 08:20:56.744526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.096 qpair failed and we were unable to recover it. 00:34:05.096 [2024-07-13 08:20:56.744677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.096 [2024-07-13 08:20:56.744703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.096 qpair failed and we were unable to recover it. 00:34:05.096 [2024-07-13 08:20:56.744846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.096 [2024-07-13 08:20:56.744876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.096 qpair failed and we were unable to recover it. 00:34:05.096 [2024-07-13 08:20:56.745024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.096 [2024-07-13 08:20:56.745066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.096 qpair failed and we were unable to recover it. 00:34:05.096 [2024-07-13 08:20:56.745237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.096 [2024-07-13 08:20:56.745279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.096 qpair failed and we were unable to recover it. 00:34:05.096 [2024-07-13 08:20:56.745481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.096 [2024-07-13 08:20:56.745524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.096 qpair failed and we were unable to recover it. 00:34:05.096 [2024-07-13 08:20:56.745652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.096 [2024-07-13 08:20:56.745678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.096 qpair failed and we were unable to recover it. 00:34:05.096 [2024-07-13 08:20:56.745837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.096 [2024-07-13 08:20:56.745862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.096 qpair failed and we were unable to recover it. 
00:34:05.096 [2024-07-13 08:20:56.745991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.096 [2024-07-13 08:20:56.746017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.096 qpair failed and we were unable to recover it. 00:34:05.096 [2024-07-13 08:20:56.746192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.096 [2024-07-13 08:20:56.746218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.096 qpair failed and we were unable to recover it. 00:34:05.096 [2024-07-13 08:20:56.746404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.096 [2024-07-13 08:20:56.746429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.096 qpair failed and we were unable to recover it. 00:34:05.096 [2024-07-13 08:20:56.746611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.096 [2024-07-13 08:20:56.746654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.096 qpair failed and we were unable to recover it. 00:34:05.096 [2024-07-13 08:20:56.746831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.096 [2024-07-13 08:20:56.746856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.096 qpair failed and we were unable to recover it. 00:34:05.096 [2024-07-13 08:20:56.747021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.096 [2024-07-13 08:20:56.747051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.096 qpair failed and we were unable to recover it. 00:34:05.096 [2024-07-13 08:20:56.747218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.096 [2024-07-13 08:20:56.747262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.096 qpair failed and we were unable to recover it. 00:34:05.096 [2024-07-13 08:20:56.747443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.096 [2024-07-13 08:20:56.747484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.096 qpair failed and we were unable to recover it. 00:34:05.096 [2024-07-13 08:20:56.747661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.096 [2024-07-13 08:20:56.747686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.096 qpair failed and we were unable to recover it. 00:34:05.096 [2024-07-13 08:20:56.747861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.096 [2024-07-13 08:20:56.747896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.096 qpair failed and we were unable to recover it. 
00:34:05.096 [2024-07-13 08:20:56.748074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.096 [2024-07-13 08:20:56.748118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.096 qpair failed and we were unable to recover it. 00:34:05.096 [2024-07-13 08:20:56.748288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.096 [2024-07-13 08:20:56.748331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.096 qpair failed and we were unable to recover it. 00:34:05.096 [2024-07-13 08:20:56.748476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.096 [2024-07-13 08:20:56.748503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.096 qpair failed and we were unable to recover it. 00:34:05.096 [2024-07-13 08:20:56.748669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.096 [2024-07-13 08:20:56.748693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.096 qpair failed and we were unable to recover it. 00:34:05.096 [2024-07-13 08:20:56.748877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.096 [2024-07-13 08:20:56.748904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.096 qpair failed and we were unable to recover it. 00:34:05.096 [2024-07-13 08:20:56.749106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.096 [2024-07-13 08:20:56.749149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.096 qpair failed and we were unable to recover it. 00:34:05.096 [2024-07-13 08:20:56.749299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.096 [2024-07-13 08:20:56.749341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.096 qpair failed and we were unable to recover it. 00:34:05.096 [2024-07-13 08:20:56.749542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.096 [2024-07-13 08:20:56.749586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.096 qpair failed and we were unable to recover it. 00:34:05.096 [2024-07-13 08:20:56.749733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.096 [2024-07-13 08:20:56.749757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.096 qpair failed and we were unable to recover it. 00:34:05.096 [2024-07-13 08:20:56.749938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.096 [2024-07-13 08:20:56.749982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.096 qpair failed and we were unable to recover it. 
00:34:05.096 [2024-07-13 08:20:56.750151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.096 [2024-07-13 08:20:56.750194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.096 qpair failed and we were unable to recover it. 00:34:05.096 [2024-07-13 08:20:56.750369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.096 [2024-07-13 08:20:56.750412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.096 qpair failed and we were unable to recover it. 00:34:05.096 [2024-07-13 08:20:56.750557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.096 [2024-07-13 08:20:56.750600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.096 qpair failed and we were unable to recover it. 00:34:05.096 [2024-07-13 08:20:56.750775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.096 [2024-07-13 08:20:56.750800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.096 qpair failed and we were unable to recover it. 00:34:05.096 [2024-07-13 08:20:56.750977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.096 [2024-07-13 08:20:56.751019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.096 qpair failed and we were unable to recover it. 00:34:05.096 [2024-07-13 08:20:56.751219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.096 [2024-07-13 08:20:56.751262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.096 qpair failed and we were unable to recover it. 00:34:05.096 [2024-07-13 08:20:56.751434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.096 [2024-07-13 08:20:56.751477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.096 qpair failed and we were unable to recover it. 00:34:05.096 [2024-07-13 08:20:56.751625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.096 [2024-07-13 08:20:56.751652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.096 qpair failed and we were unable to recover it. 00:34:05.096 [2024-07-13 08:20:56.751800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.096 [2024-07-13 08:20:56.751826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.096 qpair failed and we were unable to recover it. 00:34:05.097 [2024-07-13 08:20:56.752028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.097 [2024-07-13 08:20:56.752076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.097 qpair failed and we were unable to recover it. 
00:34:05.097 [2024-07-13 08:20:56.752242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.097 [2024-07-13 08:20:56.752286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.097 qpair failed and we were unable to recover it. 00:34:05.097 [2024-07-13 08:20:56.752444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.097 [2024-07-13 08:20:56.752470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.097 qpair failed and we were unable to recover it. 00:34:05.097 [2024-07-13 08:20:56.752624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.097 [2024-07-13 08:20:56.752649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.097 qpair failed and we were unable to recover it. 00:34:05.097 [2024-07-13 08:20:56.752796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.097 [2024-07-13 08:20:56.752821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.097 qpair failed and we were unable to recover it. 00:34:05.097 [2024-07-13 08:20:56.753026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.097 [2024-07-13 08:20:56.753071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.097 qpair failed and we were unable to recover it. 00:34:05.097 [2024-07-13 08:20:56.753244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.097 [2024-07-13 08:20:56.753287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.097 qpair failed and we were unable to recover it. 00:34:05.097 [2024-07-13 08:20:56.753492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.097 [2024-07-13 08:20:56.753535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.097 qpair failed and we were unable to recover it. 00:34:05.097 [2024-07-13 08:20:56.753713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.097 [2024-07-13 08:20:56.753738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.097 qpair failed and we were unable to recover it. 00:34:05.097 [2024-07-13 08:20:56.753933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.097 [2024-07-13 08:20:56.753977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.097 qpair failed and we were unable to recover it. 00:34:05.097 [2024-07-13 08:20:56.754136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.097 [2024-07-13 08:20:56.754179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.097 qpair failed and we were unable to recover it. 
00:34:05.097 [2024-07-13 08:20:56.754356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.097 [2024-07-13 08:20:56.754398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.097 qpair failed and we were unable to recover it. 00:34:05.097 [2024-07-13 08:20:56.754550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.097 [2024-07-13 08:20:56.754576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.097 qpair failed and we were unable to recover it. 00:34:05.097 [2024-07-13 08:20:56.754726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.097 [2024-07-13 08:20:56.754752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.097 qpair failed and we were unable to recover it. 00:34:05.097 [2024-07-13 08:20:56.754882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.097 [2024-07-13 08:20:56.754909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.097 qpair failed and we were unable to recover it. 00:34:05.097 [2024-07-13 08:20:56.755112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.097 [2024-07-13 08:20:56.755141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.097 qpair failed and we were unable to recover it. 00:34:05.097 [2024-07-13 08:20:56.755333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.097 [2024-07-13 08:20:56.755379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.097 qpair failed and we were unable to recover it. 00:34:05.097 [2024-07-13 08:20:56.755585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.097 [2024-07-13 08:20:56.755629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.097 qpair failed and we were unable to recover it. 00:34:05.097 [2024-07-13 08:20:56.755803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.097 [2024-07-13 08:20:56.755829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.097 qpair failed and we were unable to recover it. 00:34:05.097 [2024-07-13 08:20:56.755994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.097 [2024-07-13 08:20:56.756039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.097 qpair failed and we were unable to recover it. 00:34:05.097 [2024-07-13 08:20:56.756209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.097 [2024-07-13 08:20:56.756252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.097 qpair failed and we were unable to recover it. 
00:34:05.097 [2024-07-13 08:20:56.756395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.097 [2024-07-13 08:20:56.756437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.097 qpair failed and we were unable to recover it. 00:34:05.097 [2024-07-13 08:20:56.756621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.097 [2024-07-13 08:20:56.756647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.097 qpair failed and we were unable to recover it. 00:34:05.097 [2024-07-13 08:20:56.756805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.097 [2024-07-13 08:20:56.756830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.097 qpair failed and we were unable to recover it. 00:34:05.097 [2024-07-13 08:20:56.757040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.097 [2024-07-13 08:20:56.757083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.097 qpair failed and we were unable to recover it. 00:34:05.097 [2024-07-13 08:20:56.757246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.097 [2024-07-13 08:20:56.757272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.097 qpair failed and we were unable to recover it. 00:34:05.097 [2024-07-13 08:20:56.757430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.097 [2024-07-13 08:20:56.757473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.097 qpair failed and we were unable to recover it. 00:34:05.097 [2024-07-13 08:20:56.757639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.097 [2024-07-13 08:20:56.757665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.097 qpair failed and we were unable to recover it. 00:34:05.097 [2024-07-13 08:20:56.757840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.097 [2024-07-13 08:20:56.757874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.097 qpair failed and we were unable to recover it. 00:34:05.097 [2024-07-13 08:20:56.758017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.097 [2024-07-13 08:20:56.758060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.097 qpair failed and we were unable to recover it. 00:34:05.097 [2024-07-13 08:20:56.758233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.097 [2024-07-13 08:20:56.758276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.097 qpair failed and we were unable to recover it. 
00:34:05.097 [2024-07-13 08:20:56.758475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.097 [2024-07-13 08:20:56.758503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.097 qpair failed and we were unable to recover it. 00:34:05.097 [2024-07-13 08:20:56.758671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.097 [2024-07-13 08:20:56.758696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.097 qpair failed and we were unable to recover it. 00:34:05.097 [2024-07-13 08:20:56.758815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.097 [2024-07-13 08:20:56.758839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.097 qpair failed and we were unable to recover it. 00:34:05.378 [2024-07-13 08:20:56.759021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.378 [2024-07-13 08:20:56.759066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.378 qpair failed and we were unable to recover it. 00:34:05.378 [2024-07-13 08:20:56.759213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.378 [2024-07-13 08:20:56.759255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.378 qpair failed and we were unable to recover it. 00:34:05.378 [2024-07-13 08:20:56.759457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.378 [2024-07-13 08:20:56.759501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.378 qpair failed and we were unable to recover it. 00:34:05.378 [2024-07-13 08:20:56.759626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.378 [2024-07-13 08:20:56.759652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.378 qpair failed and we were unable to recover it. 00:34:05.378 [2024-07-13 08:20:56.759829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.378 [2024-07-13 08:20:56.759854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.378 qpair failed and we were unable to recover it. 00:34:05.378 [2024-07-13 08:20:56.760032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.378 [2024-07-13 08:20:56.760075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.378 qpair failed and we were unable to recover it. 00:34:05.378 [2024-07-13 08:20:56.760279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.378 [2024-07-13 08:20:56.760326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.378 qpair failed and we were unable to recover it. 
00:34:05.383 [2024-07-13 08:20:56.798183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.383 [2024-07-13 08:20:56.798227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.383 qpair failed and we were unable to recover it. 00:34:05.383 [2024-07-13 08:20:56.798373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.383 [2024-07-13 08:20:56.798416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.383 qpair failed and we were unable to recover it. 00:34:05.383 [2024-07-13 08:20:56.798594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.383 [2024-07-13 08:20:56.798620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.383 qpair failed and we were unable to recover it. 00:34:05.383 [2024-07-13 08:20:56.798800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.383 [2024-07-13 08:20:56.798826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.383 qpair failed and we were unable to recover it. 00:34:05.383 [2024-07-13 08:20:56.798972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.383 [2024-07-13 08:20:56.799016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.383 qpair failed and we were unable to recover it. 00:34:05.383 [2024-07-13 08:20:56.799222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.383 [2024-07-13 08:20:56.799264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.383 qpair failed and we were unable to recover it. 00:34:05.383 [2024-07-13 08:20:56.799405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.383 [2024-07-13 08:20:56.799447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.383 qpair failed and we were unable to recover it. 00:34:05.383 [2024-07-13 08:20:56.799571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.383 [2024-07-13 08:20:56.799596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.383 qpair failed and we were unable to recover it. 00:34:05.383 [2024-07-13 08:20:56.799769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.383 [2024-07-13 08:20:56.799795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.383 qpair failed and we were unable to recover it. 00:34:05.383 [2024-07-13 08:20:56.799965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.383 [2024-07-13 08:20:56.800011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.383 qpair failed and we were unable to recover it. 
00:34:05.383 [2024-07-13 08:20:56.800185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.383 [2024-07-13 08:20:56.800230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.383 qpair failed and we were unable to recover it. 00:34:05.383 [2024-07-13 08:20:56.800372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.383 [2024-07-13 08:20:56.800416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.383 qpair failed and we were unable to recover it. 00:34:05.383 [2024-07-13 08:20:56.800589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.383 [2024-07-13 08:20:56.800614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.383 qpair failed and we were unable to recover it. 00:34:05.383 [2024-07-13 08:20:56.800735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.383 [2024-07-13 08:20:56.800759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.384 qpair failed and we were unable to recover it. 00:34:05.384 [2024-07-13 08:20:56.800959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.384 [2024-07-13 08:20:56.801002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.384 qpair failed and we were unable to recover it. 00:34:05.384 [2024-07-13 08:20:56.801153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.384 [2024-07-13 08:20:56.801179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.384 qpair failed and we were unable to recover it. 00:34:05.384 [2024-07-13 08:20:56.801356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.384 [2024-07-13 08:20:56.801382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.384 qpair failed and we were unable to recover it. 00:34:05.384 [2024-07-13 08:20:56.801509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.384 [2024-07-13 08:20:56.801534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.384 qpair failed and we were unable to recover it. 00:34:05.384 [2024-07-13 08:20:56.801710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.384 [2024-07-13 08:20:56.801734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.384 qpair failed and we were unable to recover it. 00:34:05.384 [2024-07-13 08:20:56.801854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.384 [2024-07-13 08:20:56.801887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.384 qpair failed and we were unable to recover it. 
00:34:05.384 [2024-07-13 08:20:56.802131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.384 [2024-07-13 08:20:56.802174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.384 qpair failed and we were unable to recover it. 00:34:05.384 [2024-07-13 08:20:56.802344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.384 [2024-07-13 08:20:56.802387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.384 qpair failed and we were unable to recover it. 00:34:05.384 [2024-07-13 08:20:56.802572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.384 [2024-07-13 08:20:56.802597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.384 qpair failed and we were unable to recover it. 00:34:05.384 [2024-07-13 08:20:56.802754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.384 [2024-07-13 08:20:56.802779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.384 qpair failed and we were unable to recover it. 00:34:05.384 [2024-07-13 08:20:56.802944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.384 [2024-07-13 08:20:56.802973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.384 qpair failed and we were unable to recover it. 00:34:05.384 [2024-07-13 08:20:56.803162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.384 [2024-07-13 08:20:56.803209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.384 qpair failed and we were unable to recover it. 00:34:05.384 [2024-07-13 08:20:56.803405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.384 [2024-07-13 08:20:56.803433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.384 qpair failed and we were unable to recover it. 00:34:05.384 [2024-07-13 08:20:56.803603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.384 [2024-07-13 08:20:56.803629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.384 qpair failed and we were unable to recover it. 00:34:05.384 [2024-07-13 08:20:56.803804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.384 [2024-07-13 08:20:56.803828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.384 qpair failed and we were unable to recover it. 00:34:05.384 [2024-07-13 08:20:56.804037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.384 [2024-07-13 08:20:56.804081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.384 qpair failed and we were unable to recover it. 
00:34:05.384 [2024-07-13 08:20:56.804225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.384 [2024-07-13 08:20:56.804268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.384 qpair failed and we were unable to recover it. 00:34:05.384 [2024-07-13 08:20:56.804427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.384 [2024-07-13 08:20:56.804471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.384 qpair failed and we were unable to recover it. 00:34:05.384 [2024-07-13 08:20:56.804599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.384 [2024-07-13 08:20:56.804624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.384 qpair failed and we were unable to recover it. 00:34:05.384 [2024-07-13 08:20:56.804802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.384 [2024-07-13 08:20:56.804827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.384 qpair failed and we were unable to recover it. 00:34:05.384 [2024-07-13 08:20:56.804976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.384 [2024-07-13 08:20:56.805020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.384 qpair failed and we were unable to recover it. 00:34:05.384 [2024-07-13 08:20:56.805194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.384 [2024-07-13 08:20:56.805239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.384 qpair failed and we were unable to recover it. 00:34:05.384 [2024-07-13 08:20:56.805418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.384 [2024-07-13 08:20:56.805445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.384 qpair failed and we were unable to recover it. 00:34:05.384 [2024-07-13 08:20:56.805639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.384 [2024-07-13 08:20:56.805664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.384 qpair failed and we were unable to recover it. 00:34:05.384 [2024-07-13 08:20:56.805789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.384 [2024-07-13 08:20:56.805814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.384 qpair failed and we were unable to recover it. 00:34:05.384 [2024-07-13 08:20:56.805960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.384 [2024-07-13 08:20:56.806004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.384 qpair failed and we were unable to recover it. 
00:34:05.384 [2024-07-13 08:20:56.806202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.384 [2024-07-13 08:20:56.806230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.384 qpair failed and we were unable to recover it. 00:34:05.384 [2024-07-13 08:20:56.806420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.384 [2024-07-13 08:20:56.806468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.384 qpair failed and we were unable to recover it. 00:34:05.384 [2024-07-13 08:20:56.806645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.384 [2024-07-13 08:20:56.806669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.384 qpair failed and we were unable to recover it. 00:34:05.384 [2024-07-13 08:20:56.806785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.384 [2024-07-13 08:20:56.806811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.384 qpair failed and we were unable to recover it. 00:34:05.384 [2024-07-13 08:20:56.806975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.384 [2024-07-13 08:20:56.807018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.384 qpair failed and we were unable to recover it. 00:34:05.384 [2024-07-13 08:20:56.807162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.384 [2024-07-13 08:20:56.807204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.384 qpair failed and we were unable to recover it. 00:34:05.384 [2024-07-13 08:20:56.807345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.384 [2024-07-13 08:20:56.807388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.384 qpair failed and we were unable to recover it. 00:34:05.384 [2024-07-13 08:20:56.807561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.384 [2024-07-13 08:20:56.807586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.384 qpair failed and we were unable to recover it. 00:34:05.384 [2024-07-13 08:20:56.807704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.384 [2024-07-13 08:20:56.807728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.384 qpair failed and we were unable to recover it. 00:34:05.384 [2024-07-13 08:20:56.807883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.384 [2024-07-13 08:20:56.807912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.384 qpair failed and we were unable to recover it. 
00:34:05.384 [2024-07-13 08:20:56.808087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.384 [2024-07-13 08:20:56.808130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.384 qpair failed and we were unable to recover it. 00:34:05.384 [2024-07-13 08:20:56.808267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.384 [2024-07-13 08:20:56.808309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.384 qpair failed and we were unable to recover it. 00:34:05.384 [2024-07-13 08:20:56.808475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.384 [2024-07-13 08:20:56.808517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.384 qpair failed and we were unable to recover it. 00:34:05.384 [2024-07-13 08:20:56.808668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.384 [2024-07-13 08:20:56.808693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.384 qpair failed and we were unable to recover it. 00:34:05.384 [2024-07-13 08:20:56.808872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.385 [2024-07-13 08:20:56.808898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.385 qpair failed and we were unable to recover it. 00:34:05.385 [2024-07-13 08:20:56.809069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.385 [2024-07-13 08:20:56.809113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.385 qpair failed and we were unable to recover it. 00:34:05.385 [2024-07-13 08:20:56.809312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.385 [2024-07-13 08:20:56.809341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.385 qpair failed and we were unable to recover it. 00:34:05.385 [2024-07-13 08:20:56.809524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.385 [2024-07-13 08:20:56.809567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.385 qpair failed and we were unable to recover it. 00:34:05.385 [2024-07-13 08:20:56.809746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.385 [2024-07-13 08:20:56.809772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.385 qpair failed and we were unable to recover it. 00:34:05.385 [2024-07-13 08:20:56.809888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.385 [2024-07-13 08:20:56.809914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.385 qpair failed and we were unable to recover it. 
00:34:05.385 [2024-07-13 08:20:56.810080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.385 [2024-07-13 08:20:56.810122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.385 qpair failed and we were unable to recover it. 00:34:05.385 [2024-07-13 08:20:56.810269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.385 [2024-07-13 08:20:56.810312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.385 qpair failed and we were unable to recover it. 00:34:05.385 [2024-07-13 08:20:56.810509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.385 [2024-07-13 08:20:56.810538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.385 qpair failed and we were unable to recover it. 00:34:05.385 [2024-07-13 08:20:56.810733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.385 [2024-07-13 08:20:56.810758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.385 qpair failed and we were unable to recover it. 00:34:05.385 [2024-07-13 08:20:56.810925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.385 [2024-07-13 08:20:56.810969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.385 qpair failed and we were unable to recover it. 00:34:05.385 [2024-07-13 08:20:56.811177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.385 [2024-07-13 08:20:56.811218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.385 qpair failed and we were unable to recover it. 00:34:05.385 [2024-07-13 08:20:56.811419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.385 [2024-07-13 08:20:56.811447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.385 qpair failed and we were unable to recover it. 00:34:05.385 [2024-07-13 08:20:56.811606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.385 [2024-07-13 08:20:56.811631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.385 qpair failed and we were unable to recover it. 00:34:05.385 [2024-07-13 08:20:56.811775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.385 [2024-07-13 08:20:56.811800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.385 qpair failed and we were unable to recover it. 00:34:05.385 [2024-07-13 08:20:56.811946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.385 [2024-07-13 08:20:56.811989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.385 qpair failed and we were unable to recover it. 
00:34:05.385 [2024-07-13 08:20:56.812189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.385 [2024-07-13 08:20:56.812232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.385 qpair failed and we were unable to recover it. 00:34:05.385 [2024-07-13 08:20:56.812376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.385 [2024-07-13 08:20:56.812419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.385 qpair failed and we were unable to recover it. 00:34:05.385 [2024-07-13 08:20:56.812568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.385 [2024-07-13 08:20:56.812594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.385 qpair failed and we were unable to recover it. 00:34:05.385 [2024-07-13 08:20:56.812743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.385 [2024-07-13 08:20:56.812768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.385 qpair failed and we were unable to recover it. 00:34:05.385 [2024-07-13 08:20:56.812965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.385 [2024-07-13 08:20:56.813009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.385 qpair failed and we were unable to recover it. 00:34:05.385 [2024-07-13 08:20:56.813179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.385 [2024-07-13 08:20:56.813223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.385 qpair failed and we were unable to recover it. 00:34:05.385 [2024-07-13 08:20:56.813422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.385 [2024-07-13 08:20:56.813451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.385 qpair failed and we were unable to recover it. 00:34:05.385 [2024-07-13 08:20:56.813584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.385 [2024-07-13 08:20:56.813609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.385 qpair failed and we were unable to recover it. 00:34:05.385 [2024-07-13 08:20:56.813762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.385 [2024-07-13 08:20:56.813787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.385 qpair failed and we were unable to recover it. 00:34:05.385 [2024-07-13 08:20:56.813963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.385 [2024-07-13 08:20:56.814007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.385 qpair failed and we were unable to recover it. 
00:34:05.385 [2024-07-13 08:20:56.814183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.385 [2024-07-13 08:20:56.814226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.385 qpair failed and we were unable to recover it. 00:34:05.385 [2024-07-13 08:20:56.814424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.385 [2024-07-13 08:20:56.814453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.385 qpair failed and we were unable to recover it. 00:34:05.385 [2024-07-13 08:20:56.814611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.385 [2024-07-13 08:20:56.814636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.385 qpair failed and we were unable to recover it. 00:34:05.385 [2024-07-13 08:20:56.814759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.385 [2024-07-13 08:20:56.814784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.385 qpair failed and we were unable to recover it. 00:34:05.385 [2024-07-13 08:20:56.814953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.385 [2024-07-13 08:20:56.814997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.385 qpair failed and we were unable to recover it. 00:34:05.385 [2024-07-13 08:20:56.815142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.385 [2024-07-13 08:20:56.815186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.385 qpair failed and we were unable to recover it. 00:34:05.385 [2024-07-13 08:20:56.815353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.385 [2024-07-13 08:20:56.815396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.385 qpair failed and we were unable to recover it. 00:34:05.385 [2024-07-13 08:20:56.815522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.385 [2024-07-13 08:20:56.815547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.385 qpair failed and we were unable to recover it. 00:34:05.385 [2024-07-13 08:20:56.815682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.385 [2024-07-13 08:20:56.815707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.385 qpair failed and we were unable to recover it. 00:34:05.385 [2024-07-13 08:20:56.815857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.385 [2024-07-13 08:20:56.815892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.385 qpair failed and we were unable to recover it. 
00:34:05.385 [2024-07-13 08:20:56.816063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.385 [2024-07-13 08:20:56.816106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.385 qpair failed and we were unable to recover it. 00:34:05.385 [2024-07-13 08:20:56.816278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.385 [2024-07-13 08:20:56.816321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.385 qpair failed and we were unable to recover it. 00:34:05.385 [2024-07-13 08:20:56.816466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.385 [2024-07-13 08:20:56.816509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.385 qpair failed and we were unable to recover it. 00:34:05.385 [2024-07-13 08:20:56.816680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.385 [2024-07-13 08:20:56.816705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.385 qpair failed and we were unable to recover it. 00:34:05.385 [2024-07-13 08:20:56.816856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.385 [2024-07-13 08:20:56.816891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.385 qpair failed and we were unable to recover it. 00:34:05.386 [2024-07-13 08:20:56.817040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.386 [2024-07-13 08:20:56.817068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.386 qpair failed and we were unable to recover it. 00:34:05.386 [2024-07-13 08:20:56.817248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.386 [2024-07-13 08:20:56.817289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.386 qpair failed and we were unable to recover it. 00:34:05.386 [2024-07-13 08:20:56.817465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.386 [2024-07-13 08:20:56.817510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.386 qpair failed and we were unable to recover it. 00:34:05.386 [2024-07-13 08:20:56.817659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.386 [2024-07-13 08:20:56.817683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.386 qpair failed and we were unable to recover it. 00:34:05.386 [2024-07-13 08:20:56.817860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.386 [2024-07-13 08:20:56.817894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.386 qpair failed and we were unable to recover it. 
00:34:05.386 [2024-07-13 08:20:56.818097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.386 [2024-07-13 08:20:56.818125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.386 qpair failed and we were unable to recover it. 00:34:05.386 [2024-07-13 08:20:56.818285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.386 [2024-07-13 08:20:56.818327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.386 qpair failed and we were unable to recover it. 00:34:05.386 [2024-07-13 08:20:56.818497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.386 [2024-07-13 08:20:56.818545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.386 qpair failed and we were unable to recover it. 00:34:05.386 [2024-07-13 08:20:56.818678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.386 [2024-07-13 08:20:56.818703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.386 qpair failed and we were unable to recover it. 00:34:05.386 [2024-07-13 08:20:56.818846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.386 [2024-07-13 08:20:56.818880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.386 qpair failed and we were unable to recover it. 00:34:05.386 [2024-07-13 08:20:56.819061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.386 [2024-07-13 08:20:56.819090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.386 qpair failed and we were unable to recover it. 00:34:05.386 [2024-07-13 08:20:56.819277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.386 [2024-07-13 08:20:56.819319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.386 qpair failed and we were unable to recover it. 00:34:05.386 [2024-07-13 08:20:56.819494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.386 [2024-07-13 08:20:56.819537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.386 qpair failed and we were unable to recover it. 00:34:05.386 [2024-07-13 08:20:56.819686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.386 [2024-07-13 08:20:56.819711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.386 qpair failed and we were unable to recover it. 00:34:05.386 [2024-07-13 08:20:56.819883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.386 [2024-07-13 08:20:56.819910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.386 qpair failed and we were unable to recover it. 
00:34:05.386 [2024-07-13 08:20:56.820093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.386 [2024-07-13 08:20:56.820137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.386 qpair failed and we were unable to recover it. 00:34:05.386 [2024-07-13 08:20:56.820288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.386 [2024-07-13 08:20:56.820331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.386 qpair failed and we were unable to recover it. 00:34:05.386 [2024-07-13 08:20:56.820529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.386 [2024-07-13 08:20:56.820572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.386 qpair failed and we were unable to recover it. 00:34:05.386 [2024-07-13 08:20:56.820722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.386 [2024-07-13 08:20:56.820747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.386 qpair failed and we were unable to recover it. 00:34:05.386 [2024-07-13 08:20:56.820871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.386 [2024-07-13 08:20:56.820897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.386 qpair failed and we were unable to recover it. 00:34:05.386 [2024-07-13 08:20:56.821071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.386 [2024-07-13 08:20:56.821117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.386 qpair failed and we were unable to recover it. 00:34:05.386 [2024-07-13 08:20:56.821295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.386 [2024-07-13 08:20:56.821337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.386 qpair failed and we were unable to recover it. 00:34:05.386 [2024-07-13 08:20:56.821538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.386 [2024-07-13 08:20:56.821580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.386 qpair failed and we were unable to recover it. 00:34:05.386 [2024-07-13 08:20:56.821753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.386 [2024-07-13 08:20:56.821778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.386 qpair failed and we were unable to recover it. 00:34:05.386 [2024-07-13 08:20:56.821951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.386 [2024-07-13 08:20:56.821994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.386 qpair failed and we were unable to recover it. 
00:34:05.386 [2024-07-13 08:20:56.822144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.386 [2024-07-13 08:20:56.822188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.386 qpair failed and we were unable to recover it. 00:34:05.386 [2024-07-13 08:20:56.822360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.386 [2024-07-13 08:20:56.822402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.386 qpair failed and we were unable to recover it. 00:34:05.386 [2024-07-13 08:20:56.822553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.386 [2024-07-13 08:20:56.822579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.386 qpair failed and we were unable to recover it. 00:34:05.386 [2024-07-13 08:20:56.822731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.386 [2024-07-13 08:20:56.822758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.386 qpair failed and we were unable to recover it. 00:34:05.386 [2024-07-13 08:20:56.822932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.386 [2024-07-13 08:20:56.822961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.386 qpair failed and we were unable to recover it. 00:34:05.386 [2024-07-13 08:20:56.823145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.386 [2024-07-13 08:20:56.823187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.386 qpair failed and we were unable to recover it. 00:34:05.386 [2024-07-13 08:20:56.823368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.386 [2024-07-13 08:20:56.823409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.386 qpair failed and we were unable to recover it. 00:34:05.386 [2024-07-13 08:20:56.823562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.386 [2024-07-13 08:20:56.823587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.386 qpair failed and we were unable to recover it. 00:34:05.386 [2024-07-13 08:20:56.823763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.386 [2024-07-13 08:20:56.823789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.386 qpair failed and we were unable to recover it. 00:34:05.386 [2024-07-13 08:20:56.823983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.386 [2024-07-13 08:20:56.824032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.386 qpair failed and we were unable to recover it. 
00:34:05.386 [2024-07-13 08:20:56.824244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.386 [2024-07-13 08:20:56.824286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.386 qpair failed and we were unable to recover it. 00:34:05.386 [2024-07-13 08:20:56.824430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.386 [2024-07-13 08:20:56.824473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.386 qpair failed and we were unable to recover it. 00:34:05.386 [2024-07-13 08:20:56.824653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.386 [2024-07-13 08:20:56.824678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.386 qpair failed and we were unable to recover it. 00:34:05.386 [2024-07-13 08:20:56.824854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.386 [2024-07-13 08:20:56.824885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.386 qpair failed and we were unable to recover it. 00:34:05.386 [2024-07-13 08:20:56.825065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.386 [2024-07-13 08:20:56.825110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.386 qpair failed and we were unable to recover it. 00:34:05.386 [2024-07-13 08:20:56.825284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.386 [2024-07-13 08:20:56.825328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.387 qpair failed and we were unable to recover it. 00:34:05.387 [2024-07-13 08:20:56.825468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.387 [2024-07-13 08:20:56.825510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.387 qpair failed and we were unable to recover it. 00:34:05.387 [2024-07-13 08:20:56.825688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.387 [2024-07-13 08:20:56.825713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.387 qpair failed and we were unable to recover it. 00:34:05.387 [2024-07-13 08:20:56.825871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.387 [2024-07-13 08:20:56.825897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.387 qpair failed and we were unable to recover it. 00:34:05.387 [2024-07-13 08:20:56.826051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.387 [2024-07-13 08:20:56.826079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:05.387 qpair failed and we were unable to recover it. 
00:34:05.387 [2024-07-13 08:20:56.826270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:05.387 [2024-07-13 08:20:56.826313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420
00:34:05.387 qpair failed and we were unable to recover it.
00:34:05.387-00:34:05.392 [the three entries above repeat back-to-back for every reconnect attempt from 2024-07-13 08:20:56.826485 through 08:20:56.870488; the repetitions are identical except for their timestamps and the failing tqpair handle, which alternates among 0x7f8fe4000b90, 0x7f8fdc000b90, and 0x7f8fec000b90, always with addr=10.0.0.2, port=4420]
00:34:05.392 [2024-07-13 08:20:56.870656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.392 [2024-07-13 08:20:56.870684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.392 qpair failed and we were unable to recover it. 00:34:05.392 [2024-07-13 08:20:56.870848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.392 [2024-07-13 08:20:56.870884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.392 qpair failed and we were unable to recover it. 00:34:05.392 [2024-07-13 08:20:56.871079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.392 [2024-07-13 08:20:56.871105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.392 qpair failed and we were unable to recover it. 00:34:05.392 [2024-07-13 08:20:56.871259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.392 [2024-07-13 08:20:56.871285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.392 qpair failed and we were unable to recover it. 00:34:05.392 [2024-07-13 08:20:56.871480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.392 [2024-07-13 08:20:56.871508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.392 qpair failed and we were unable to recover it. 00:34:05.392 [2024-07-13 08:20:56.871695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.392 [2024-07-13 08:20:56.871723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.392 qpair failed and we were unable to recover it. 00:34:05.392 [2024-07-13 08:20:56.871890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.392 [2024-07-13 08:20:56.871920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.392 qpair failed and we were unable to recover it. 00:34:05.392 [2024-07-13 08:20:56.872115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.392 [2024-07-13 08:20:56.872144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.392 qpair failed and we were unable to recover it. 00:34:05.392 [2024-07-13 08:20:56.872284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.392 [2024-07-13 08:20:56.872313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.392 qpair failed and we were unable to recover it. 00:34:05.392 [2024-07-13 08:20:56.872476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.392 [2024-07-13 08:20:56.872505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.392 qpair failed and we were unable to recover it. 
00:34:05.392 [2024-07-13 08:20:56.872738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.392 [2024-07-13 08:20:56.872763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.392 qpair failed and we were unable to recover it. 00:34:05.392 [2024-07-13 08:20:56.872915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.392 [2024-07-13 08:20:56.872941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.392 qpair failed and we were unable to recover it. 00:34:05.392 [2024-07-13 08:20:56.873083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.392 [2024-07-13 08:20:56.873126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.392 qpair failed and we were unable to recover it. 00:34:05.392 [2024-07-13 08:20:56.873285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.392 [2024-07-13 08:20:56.873314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.392 qpair failed and we were unable to recover it. 00:34:05.392 [2024-07-13 08:20:56.873480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.392 [2024-07-13 08:20:56.873506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.392 qpair failed and we were unable to recover it. 00:34:05.392 [2024-07-13 08:20:56.873670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.392 [2024-07-13 08:20:56.873699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.392 qpair failed and we were unable to recover it. 00:34:05.392 [2024-07-13 08:20:56.873895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.392 [2024-07-13 08:20:56.873921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.392 qpair failed and we were unable to recover it. 00:34:05.392 [2024-07-13 08:20:56.874118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.392 [2024-07-13 08:20:56.874146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.392 qpair failed and we were unable to recover it. 00:34:05.392 [2024-07-13 08:20:56.874313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.392 [2024-07-13 08:20:56.874339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.393 qpair failed and we were unable to recover it. 00:34:05.393 [2024-07-13 08:20:56.874566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.393 [2024-07-13 08:20:56.874592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.393 qpair failed and we were unable to recover it. 
00:34:05.393 [2024-07-13 08:20:56.874798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.393 [2024-07-13 08:20:56.874827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.393 qpair failed and we were unable to recover it. 00:34:05.393 [2024-07-13 08:20:56.875023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.393 [2024-07-13 08:20:56.875052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.393 qpair failed and we were unable to recover it. 00:34:05.393 [2024-07-13 08:20:56.875196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.393 [2024-07-13 08:20:56.875222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.393 qpair failed and we were unable to recover it. 00:34:05.393 [2024-07-13 08:20:56.875415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.393 [2024-07-13 08:20:56.875444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.393 qpair failed and we were unable to recover it. 00:34:05.393 [2024-07-13 08:20:56.875583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.393 [2024-07-13 08:20:56.875611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.393 qpair failed and we were unable to recover it. 00:34:05.393 [2024-07-13 08:20:56.875798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.393 [2024-07-13 08:20:56.875826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.393 qpair failed and we were unable to recover it. 00:34:05.393 [2024-07-13 08:20:56.876003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.393 [2024-07-13 08:20:56.876029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.393 qpair failed and we were unable to recover it. 00:34:05.393 [2024-07-13 08:20:56.876222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.393 [2024-07-13 08:20:56.876251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.393 qpair failed and we were unable to recover it. 00:34:05.393 [2024-07-13 08:20:56.876417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.393 [2024-07-13 08:20:56.876445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.393 qpair failed and we were unable to recover it. 00:34:05.393 [2024-07-13 08:20:56.876622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.393 [2024-07-13 08:20:56.876647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.393 qpair failed and we were unable to recover it. 
00:34:05.393 [2024-07-13 08:20:56.876766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.393 [2024-07-13 08:20:56.876791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.393 qpair failed and we were unable to recover it. 00:34:05.393 [2024-07-13 08:20:56.876941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.393 [2024-07-13 08:20:56.876968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.393 qpair failed and we were unable to recover it. 00:34:05.393 [2024-07-13 08:20:56.877114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.393 [2024-07-13 08:20:56.877143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.393 qpair failed and we were unable to recover it. 00:34:05.393 [2024-07-13 08:20:56.877309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.393 [2024-07-13 08:20:56.877337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.393 qpair failed and we were unable to recover it. 00:34:05.393 [2024-07-13 08:20:56.877538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.393 [2024-07-13 08:20:56.877564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.393 qpair failed and we were unable to recover it. 00:34:05.393 [2024-07-13 08:20:56.877732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.393 [2024-07-13 08:20:56.877761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.393 qpair failed and we were unable to recover it. 00:34:05.393 [2024-07-13 08:20:56.877920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.393 [2024-07-13 08:20:56.877946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.393 qpair failed and we were unable to recover it. 00:34:05.393 [2024-07-13 08:20:56.878094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.393 [2024-07-13 08:20:56.878120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.393 qpair failed and we were unable to recover it. 00:34:05.393 [2024-07-13 08:20:56.878268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.393 [2024-07-13 08:20:56.878293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.393 qpair failed and we were unable to recover it. 00:34:05.393 [2024-07-13 08:20:56.878413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.393 [2024-07-13 08:20:56.878456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.393 qpair failed and we were unable to recover it. 
00:34:05.393 [2024-07-13 08:20:56.878591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.393 [2024-07-13 08:20:56.878620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.393 qpair failed and we were unable to recover it. 00:34:05.393 [2024-07-13 08:20:56.878760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.393 [2024-07-13 08:20:56.878788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.393 qpair failed and we were unable to recover it. 00:34:05.393 [2024-07-13 08:20:56.878944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.393 [2024-07-13 08:20:56.878970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.393 qpair failed and we were unable to recover it. 00:34:05.393 [2024-07-13 08:20:56.879116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.393 [2024-07-13 08:20:56.879142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.393 qpair failed and we were unable to recover it. 00:34:05.393 [2024-07-13 08:20:56.879328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.393 [2024-07-13 08:20:56.879357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.393 qpair failed and we were unable to recover it. 00:34:05.393 [2024-07-13 08:20:56.879545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.393 [2024-07-13 08:20:56.879574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.393 qpair failed and we were unable to recover it. 00:34:05.393 [2024-07-13 08:20:56.879745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.393 [2024-07-13 08:20:56.879774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.393 qpair failed and we were unable to recover it. 00:34:05.393 [2024-07-13 08:20:56.879976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.393 [2024-07-13 08:20:56.880005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.393 qpair failed and we were unable to recover it. 00:34:05.393 [2024-07-13 08:20:56.880136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.393 [2024-07-13 08:20:56.880165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.393 qpair failed and we were unable to recover it. 00:34:05.393 [2024-07-13 08:20:56.880297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.393 [2024-07-13 08:20:56.880325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.393 qpair failed and we were unable to recover it. 
00:34:05.393 [2024-07-13 08:20:56.880501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.393 [2024-07-13 08:20:56.880526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.393 qpair failed and we were unable to recover it. 00:34:05.393 [2024-07-13 08:20:56.880677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.393 [2024-07-13 08:20:56.880719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.393 qpair failed and we were unable to recover it. 00:34:05.393 [2024-07-13 08:20:56.880882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.393 [2024-07-13 08:20:56.880911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.393 qpair failed and we were unable to recover it. 00:34:05.393 [2024-07-13 08:20:56.881078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.393 [2024-07-13 08:20:56.881106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.393 qpair failed and we were unable to recover it. 00:34:05.393 [2024-07-13 08:20:56.881277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.393 [2024-07-13 08:20:56.881302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.393 qpair failed and we were unable to recover it. 00:34:05.393 [2024-07-13 08:20:56.881447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.393 [2024-07-13 08:20:56.881490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.393 qpair failed and we were unable to recover it. 00:34:05.393 [2024-07-13 08:20:56.881649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.393 [2024-07-13 08:20:56.881678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.393 qpair failed and we were unable to recover it. 00:34:05.393 [2024-07-13 08:20:56.881844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.393 [2024-07-13 08:20:56.881881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.393 qpair failed and we were unable to recover it. 00:34:05.393 [2024-07-13 08:20:56.882040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.393 [2024-07-13 08:20:56.882066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.393 qpair failed and we were unable to recover it. 00:34:05.393 [2024-07-13 08:20:56.882222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.394 [2024-07-13 08:20:56.882247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.394 qpair failed and we were unable to recover it. 
00:34:05.394 [2024-07-13 08:20:56.882461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.394 [2024-07-13 08:20:56.882487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.394 qpair failed and we were unable to recover it. 00:34:05.394 [2024-07-13 08:20:56.882638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.394 [2024-07-13 08:20:56.882663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.394 qpair failed and we were unable to recover it. 00:34:05.394 [2024-07-13 08:20:56.882813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.394 [2024-07-13 08:20:56.882839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.394 qpair failed and we were unable to recover it. 00:34:05.394 [2024-07-13 08:20:56.882996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.394 [2024-07-13 08:20:56.883022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.394 qpair failed and we were unable to recover it. 00:34:05.394 [2024-07-13 08:20:56.883144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.394 [2024-07-13 08:20:56.883169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.394 qpair failed and we were unable to recover it. 00:34:05.394 [2024-07-13 08:20:56.883344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.394 [2024-07-13 08:20:56.883372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.394 qpair failed and we were unable to recover it. 00:34:05.394 [2024-07-13 08:20:56.883518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.394 [2024-07-13 08:20:56.883545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.394 qpair failed and we were unable to recover it. 00:34:05.394 [2024-07-13 08:20:56.883691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.394 [2024-07-13 08:20:56.883734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.394 qpair failed and we were unable to recover it. 00:34:05.394 [2024-07-13 08:20:56.883895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.394 [2024-07-13 08:20:56.883924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.394 qpair failed and we were unable to recover it. 00:34:05.394 [2024-07-13 08:20:56.884088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.394 [2024-07-13 08:20:56.884117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.394 qpair failed and we were unable to recover it. 
00:34:05.394 [2024-07-13 08:20:56.884283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.394 [2024-07-13 08:20:56.884308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.394 qpair failed and we were unable to recover it. 00:34:05.394 [2024-07-13 08:20:56.884427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.394 [2024-07-13 08:20:56.884452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.394 qpair failed and we were unable to recover it. 00:34:05.394 [2024-07-13 08:20:56.884629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.394 [2024-07-13 08:20:56.884672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.394 qpair failed and we were unable to recover it. 00:34:05.394 [2024-07-13 08:20:56.884880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.394 [2024-07-13 08:20:56.884909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.394 qpair failed and we were unable to recover it. 00:34:05.394 [2024-07-13 08:20:56.885143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.394 [2024-07-13 08:20:56.885168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.394 qpair failed and we were unable to recover it. 00:34:05.394 [2024-07-13 08:20:56.885403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.394 [2024-07-13 08:20:56.885431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.394 qpair failed and we were unable to recover it. 00:34:05.394 [2024-07-13 08:20:56.885597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.394 [2024-07-13 08:20:56.885625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.394 qpair failed and we were unable to recover it. 00:34:05.394 [2024-07-13 08:20:56.885788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.394 [2024-07-13 08:20:56.885817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.394 qpair failed and we were unable to recover it. 00:34:05.394 [2024-07-13 08:20:56.886000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.394 [2024-07-13 08:20:56.886026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.394 qpair failed and we were unable to recover it. 00:34:05.394 [2024-07-13 08:20:56.886180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.394 [2024-07-13 08:20:56.886206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.394 qpair failed and we were unable to recover it. 
00:34:05.394 [2024-07-13 08:20:56.886358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.394 [2024-07-13 08:20:56.886401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.394 qpair failed and we were unable to recover it. 00:34:05.394 [2024-07-13 08:20:56.886561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.394 [2024-07-13 08:20:56.886590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.394 qpair failed and we were unable to recover it. 00:34:05.394 [2024-07-13 08:20:56.886759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.394 [2024-07-13 08:20:56.886785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.394 qpair failed and we were unable to recover it. 00:34:05.394 [2024-07-13 08:20:56.886921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.394 [2024-07-13 08:20:56.886948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.394 qpair failed and we were unable to recover it. 00:34:05.394 [2024-07-13 08:20:56.887089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.394 [2024-07-13 08:20:56.887114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.394 qpair failed and we were unable to recover it. 00:34:05.394 [2024-07-13 08:20:56.887286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.394 [2024-07-13 08:20:56.887315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.394 qpair failed and we were unable to recover it. 00:34:05.394 [2024-07-13 08:20:56.887510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.394 [2024-07-13 08:20:56.887541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.394 qpair failed and we were unable to recover it. 00:34:05.394 [2024-07-13 08:20:56.887755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.394 [2024-07-13 08:20:56.887780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.394 qpair failed and we were unable to recover it. 00:34:05.394 [2024-07-13 08:20:56.887941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.394 [2024-07-13 08:20:56.887967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.394 qpair failed and we were unable to recover it. 00:34:05.394 [2024-07-13 08:20:56.888112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.394 [2024-07-13 08:20:56.888141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.394 qpair failed and we were unable to recover it. 
00:34:05.394 [2024-07-13 08:20:56.888292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.394 [2024-07-13 08:20:56.888317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.394 qpair failed and we were unable to recover it. 00:34:05.394 [2024-07-13 08:20:56.888440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.394 [2024-07-13 08:20:56.888466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.394 qpair failed and we were unable to recover it. 00:34:05.394 [2024-07-13 08:20:56.888612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.394 [2024-07-13 08:20:56.888640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.394 qpair failed and we were unable to recover it. 00:34:05.394 [2024-07-13 08:20:56.888810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.394 [2024-07-13 08:20:56.888838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.394 qpair failed and we were unable to recover it. 00:34:05.394 [2024-07-13 08:20:56.889022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.394 [2024-07-13 08:20:56.889048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.394 qpair failed and we were unable to recover it. 00:34:05.394 [2024-07-13 08:20:56.889223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.394 [2024-07-13 08:20:56.889251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.394 qpair failed and we were unable to recover it. 00:34:05.394 [2024-07-13 08:20:56.889444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.394 [2024-07-13 08:20:56.889472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.394 qpair failed and we were unable to recover it. 00:34:05.394 [2024-07-13 08:20:56.889632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.394 [2024-07-13 08:20:56.889660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.394 qpair failed and we were unable to recover it. 00:34:05.394 [2024-07-13 08:20:56.889841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.394 [2024-07-13 08:20:56.889872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.394 qpair failed and we were unable to recover it. 00:34:05.394 [2024-07-13 08:20:56.890042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.394 [2024-07-13 08:20:56.890070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.395 qpair failed and we were unable to recover it. 
00:34:05.395 [2024-07-13 08:20:56.890218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.395 [2024-07-13 08:20:56.890247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.395 qpair failed and we were unable to recover it. 00:34:05.395 [2024-07-13 08:20:56.890439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.395 [2024-07-13 08:20:56.890467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.395 qpair failed and we were unable to recover it. 00:34:05.395 [2024-07-13 08:20:56.890620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.395 [2024-07-13 08:20:56.890646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.395 qpair failed and we were unable to recover it. 00:34:05.395 [2024-07-13 08:20:56.890824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.395 [2024-07-13 08:20:56.890850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.395 qpair failed and we were unable to recover it. 00:34:05.395 [2024-07-13 08:20:56.891007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.395 [2024-07-13 08:20:56.891035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.395 qpair failed and we were unable to recover it. 00:34:05.395 [2024-07-13 08:20:56.891243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.395 [2024-07-13 08:20:56.891268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.395 qpair failed and we were unable to recover it. 00:34:05.395 [2024-07-13 08:20:56.891446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.395 [2024-07-13 08:20:56.891471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.395 qpair failed and we were unable to recover it. 00:34:05.395 [2024-07-13 08:20:56.891641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.395 [2024-07-13 08:20:56.891670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.395 qpair failed and we were unable to recover it. 00:34:05.395 [2024-07-13 08:20:56.891833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.395 [2024-07-13 08:20:56.891861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.395 qpair failed and we were unable to recover it. 00:34:05.395 [2024-07-13 08:20:56.892050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.395 [2024-07-13 08:20:56.892080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.395 qpair failed and we were unable to recover it. 
00:34:05.395 [2024-07-13 08:20:56.892250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.395 [2024-07-13 08:20:56.892275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.395 qpair failed and we were unable to recover it. 00:34:05.395 [2024-07-13 08:20:56.892442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.395 [2024-07-13 08:20:56.892471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.395 qpair failed and we were unable to recover it. 00:34:05.395 [2024-07-13 08:20:56.892636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.395 [2024-07-13 08:20:56.892664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.395 qpair failed and we were unable to recover it. 00:34:05.395 [2024-07-13 08:20:56.892829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.395 [2024-07-13 08:20:56.892857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.395 qpair failed and we were unable to recover it. 00:34:05.395 [2024-07-13 08:20:56.893032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.395 [2024-07-13 08:20:56.893059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.395 qpair failed and we were unable to recover it. 00:34:05.395 [2024-07-13 08:20:56.893254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.395 [2024-07-13 08:20:56.893289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.395 qpair failed and we were unable to recover it. 00:34:05.395 [2024-07-13 08:20:56.893488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.395 [2024-07-13 08:20:56.893513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.395 qpair failed and we were unable to recover it. 00:34:05.395 [2024-07-13 08:20:56.893694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.395 [2024-07-13 08:20:56.893720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.395 qpair failed and we were unable to recover it. 00:34:05.395 [2024-07-13 08:20:56.893892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.395 [2024-07-13 08:20:56.893918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.395 qpair failed and we were unable to recover it. 00:34:05.395 [2024-07-13 08:20:56.894117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.395 [2024-07-13 08:20:56.894146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.395 qpair failed and we were unable to recover it. 
00:34:05.395 [2024-07-13 08:20:56.894315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.395 [2024-07-13 08:20:56.894344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.395 qpair failed and we were unable to recover it. 00:34:05.395 [2024-07-13 08:20:56.894505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.395 [2024-07-13 08:20:56.894544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.395 qpair failed and we were unable to recover it. 00:34:05.395 [2024-07-13 08:20:56.894733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.395 [2024-07-13 08:20:56.894758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.395 qpair failed and we were unable to recover it. 00:34:05.395 [2024-07-13 08:20:56.894945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.395 [2024-07-13 08:20:56.894974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.395 qpair failed and we were unable to recover it. 00:34:05.395 [2024-07-13 08:20:56.895139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.395 [2024-07-13 08:20:56.895167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.395 qpair failed and we were unable to recover it. 00:34:05.395 [2024-07-13 08:20:56.895308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.395 [2024-07-13 08:20:56.895336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.395 qpair failed and we were unable to recover it. 00:34:05.395 [2024-07-13 08:20:56.895510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.395 [2024-07-13 08:20:56.895539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.395 qpair failed and we were unable to recover it. 00:34:05.395 [2024-07-13 08:20:56.895729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.395 [2024-07-13 08:20:56.895758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.395 qpair failed and we were unable to recover it. 00:34:05.395 [2024-07-13 08:20:56.895918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.395 [2024-07-13 08:20:56.895947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.395 qpair failed and we were unable to recover it. 00:34:05.395 [2024-07-13 08:20:56.896112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.395 [2024-07-13 08:20:56.896140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.395 qpair failed and we were unable to recover it. 
00:34:05.395 [2024-07-13 08:20:56.896336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:05.395 [2024-07-13 08:20:56.896361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420
00:34:05.395 qpair failed and we were unable to recover it.
[... the same three-line pattern (connect() failed, errno = 111 / sock connection error / qpair failed and we were unable to recover it) repeats without interruption from 08:20:56.896 through 08:20:56.939, alternating between tqpair=0x7f8fec000b90 and tqpair=0x7f8fdc000b90, always for addr=10.0.0.2, port=4420 ...]
00:34:05.401 [2024-07-13 08:20:56.939348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:05.401 [2024-07-13 08:20:56.939373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420
00:34:05.401 qpair failed and we were unable to recover it.
00:34:05.401 [2024-07-13 08:20:56.939527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.401 [2024-07-13 08:20:56.939553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.401 qpair failed and we were unable to recover it. 00:34:05.401 [2024-07-13 08:20:56.939810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.401 [2024-07-13 08:20:56.939839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.401 qpair failed and we were unable to recover it. 00:34:05.401 [2024-07-13 08:20:56.940047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.401 [2024-07-13 08:20:56.940073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.401 qpair failed and we were unable to recover it. 00:34:05.401 [2024-07-13 08:20:56.940250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.401 [2024-07-13 08:20:56.940278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.401 qpair failed and we were unable to recover it. 00:34:05.401 [2024-07-13 08:20:56.940417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.401 [2024-07-13 08:20:56.940446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.401 qpair failed and we were unable to recover it. 00:34:05.401 [2024-07-13 08:20:56.940711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.401 [2024-07-13 08:20:56.940759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.401 qpair failed and we were unable to recover it. 00:34:05.401 [2024-07-13 08:20:56.940940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.401 [2024-07-13 08:20:56.940966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.401 qpair failed and we were unable to recover it. 00:34:05.401 [2024-07-13 08:20:56.941087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.401 [2024-07-13 08:20:56.941130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.401 qpair failed and we were unable to recover it. 00:34:05.401 [2024-07-13 08:20:56.941304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.401 [2024-07-13 08:20:56.941332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.401 qpair failed and we were unable to recover it. 00:34:05.401 [2024-07-13 08:20:56.941575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.401 [2024-07-13 08:20:56.941624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.401 qpair failed and we were unable to recover it. 
00:34:05.401 [2024-07-13 08:20:56.941800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.401 [2024-07-13 08:20:56.941827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.401 qpair failed and we were unable to recover it. 00:34:05.401 [2024-07-13 08:20:56.941963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.401 [2024-07-13 08:20:56.941989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.401 qpair failed and we were unable to recover it. 00:34:05.401 [2024-07-13 08:20:56.942164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.401 [2024-07-13 08:20:56.942207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.401 qpair failed and we were unable to recover it. 00:34:05.401 [2024-07-13 08:20:56.942390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.401 [2024-07-13 08:20:56.942416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.401 qpair failed and we were unable to recover it. 00:34:05.401 [2024-07-13 08:20:56.942588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.401 [2024-07-13 08:20:56.942614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.401 qpair failed and we were unable to recover it. 00:34:05.401 [2024-07-13 08:20:56.942738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.401 [2024-07-13 08:20:56.942764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.401 qpair failed and we were unable to recover it. 00:34:05.401 [2024-07-13 08:20:56.942920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.401 [2024-07-13 08:20:56.942947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.401 qpair failed and we were unable to recover it. 00:34:05.401 [2024-07-13 08:20:56.943123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.401 [2024-07-13 08:20:56.943149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.401 qpair failed and we were unable to recover it. 00:34:05.401 [2024-07-13 08:20:56.943359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.401 [2024-07-13 08:20:56.943385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.401 qpair failed and we were unable to recover it. 00:34:05.401 [2024-07-13 08:20:56.943555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.401 [2024-07-13 08:20:56.943583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.401 qpair failed and we were unable to recover it. 
00:34:05.401 [2024-07-13 08:20:56.943730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.401 [2024-07-13 08:20:56.943758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.401 qpair failed and we were unable to recover it. 00:34:05.401 [2024-07-13 08:20:56.943927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.401 [2024-07-13 08:20:56.943956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.401 qpair failed and we were unable to recover it. 00:34:05.401 [2024-07-13 08:20:56.944123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.401 [2024-07-13 08:20:56.944149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.401 qpair failed and we were unable to recover it. 00:34:05.401 [2024-07-13 08:20:56.944345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.401 [2024-07-13 08:20:56.944374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.401 qpair failed and we were unable to recover it. 00:34:05.401 [2024-07-13 08:20:56.944554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.401 [2024-07-13 08:20:56.944603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.401 qpair failed and we were unable to recover it. 00:34:05.401 [2024-07-13 08:20:56.944764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.401 [2024-07-13 08:20:56.944792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.401 qpair failed and we were unable to recover it. 00:34:05.401 [2024-07-13 08:20:56.944969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.401 [2024-07-13 08:20:56.945000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.401 qpair failed and we were unable to recover it. 00:34:05.401 [2024-07-13 08:20:56.945170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.401 [2024-07-13 08:20:56.945199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.401 qpair failed and we were unable to recover it. 00:34:05.401 [2024-07-13 08:20:56.945365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.401 [2024-07-13 08:20:56.945394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.401 qpair failed and we were unable to recover it. 00:34:05.401 [2024-07-13 08:20:56.945634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.401 [2024-07-13 08:20:56.945660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.401 qpair failed and we were unable to recover it. 
00:34:05.401 [2024-07-13 08:20:56.945816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.401 [2024-07-13 08:20:56.945841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.401 qpair failed and we were unable to recover it. 00:34:05.401 [2024-07-13 08:20:56.945974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.401 [2024-07-13 08:20:56.946001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.401 qpair failed and we were unable to recover it. 00:34:05.401 [2024-07-13 08:20:56.946168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.401 [2024-07-13 08:20:56.946196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.402 qpair failed and we were unable to recover it. 00:34:05.402 [2024-07-13 08:20:56.946464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.402 [2024-07-13 08:20:56.946515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.402 qpair failed and we were unable to recover it. 00:34:05.402 [2024-07-13 08:20:56.946713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.402 [2024-07-13 08:20:56.946739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.402 qpair failed and we were unable to recover it. 00:34:05.402 [2024-07-13 08:20:56.946888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.402 [2024-07-13 08:20:56.946917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.402 qpair failed and we were unable to recover it. 00:34:05.402 [2024-07-13 08:20:56.947113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.402 [2024-07-13 08:20:56.947139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.402 qpair failed and we were unable to recover it. 00:34:05.402 [2024-07-13 08:20:56.947283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.402 [2024-07-13 08:20:56.947308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.402 qpair failed and we were unable to recover it. 00:34:05.402 [2024-07-13 08:20:56.947436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.402 [2024-07-13 08:20:56.947463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.402 qpair failed and we were unable to recover it. 00:34:05.402 [2024-07-13 08:20:56.947627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.402 [2024-07-13 08:20:56.947656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.402 qpair failed and we were unable to recover it. 
00:34:05.402 [2024-07-13 08:20:56.947853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.402 [2024-07-13 08:20:56.947895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.402 qpair failed and we were unable to recover it. 00:34:05.402 [2024-07-13 08:20:56.948033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.402 [2024-07-13 08:20:56.948062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.402 qpair failed and we were unable to recover it. 00:34:05.402 [2024-07-13 08:20:56.948232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.402 [2024-07-13 08:20:56.948258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.402 qpair failed and we were unable to recover it. 00:34:05.402 [2024-07-13 08:20:56.948410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.402 [2024-07-13 08:20:56.948436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.402 qpair failed and we were unable to recover it. 00:34:05.402 [2024-07-13 08:20:56.948561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.402 [2024-07-13 08:20:56.948586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.402 qpair failed and we were unable to recover it. 00:34:05.402 [2024-07-13 08:20:56.948754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.402 [2024-07-13 08:20:56.948782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.402 qpair failed and we were unable to recover it. 00:34:05.402 [2024-07-13 08:20:56.948947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.402 [2024-07-13 08:20:56.948974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.402 qpair failed and we were unable to recover it. 00:34:05.402 [2024-07-13 08:20:56.949126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.402 [2024-07-13 08:20:56.949169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.402 qpair failed and we were unable to recover it. 00:34:05.402 [2024-07-13 08:20:56.949352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.402 [2024-07-13 08:20:56.949378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.402 qpair failed and we were unable to recover it. 00:34:05.402 [2024-07-13 08:20:56.949529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.402 [2024-07-13 08:20:56.949554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.402 qpair failed and we were unable to recover it. 
00:34:05.402 [2024-07-13 08:20:56.949702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.402 [2024-07-13 08:20:56.949728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.402 qpair failed and we were unable to recover it. 00:34:05.402 [2024-07-13 08:20:56.949845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.402 [2024-07-13 08:20:56.949895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.402 qpair failed and we were unable to recover it. 00:34:05.402 [2024-07-13 08:20:56.950054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.402 [2024-07-13 08:20:56.950082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.402 qpair failed and we were unable to recover it. 00:34:05.402 [2024-07-13 08:20:56.950249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.402 [2024-07-13 08:20:56.950278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.402 qpair failed and we were unable to recover it. 00:34:05.402 [2024-07-13 08:20:56.950451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.402 [2024-07-13 08:20:56.950478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.402 qpair failed and we were unable to recover it. 00:34:05.402 [2024-07-13 08:20:56.950672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.402 [2024-07-13 08:20:56.950700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.402 qpair failed and we were unable to recover it. 00:34:05.402 [2024-07-13 08:20:56.950864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.402 [2024-07-13 08:20:56.950898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.402 qpair failed and we were unable to recover it. 00:34:05.402 [2024-07-13 08:20:56.951066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.402 [2024-07-13 08:20:56.951094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.402 qpair failed and we were unable to recover it. 00:34:05.402 [2024-07-13 08:20:56.951266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.402 [2024-07-13 08:20:56.951291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.402 qpair failed and we were unable to recover it. 00:34:05.402 [2024-07-13 08:20:56.951417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.402 [2024-07-13 08:20:56.951442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.402 qpair failed and we were unable to recover it. 
00:34:05.402 [2024-07-13 08:20:56.951591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.402 [2024-07-13 08:20:56.951616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.402 qpair failed and we were unable to recover it. 00:34:05.402 [2024-07-13 08:20:56.951784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.402 [2024-07-13 08:20:56.951813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.402 qpair failed and we were unable to recover it. 00:34:05.402 [2024-07-13 08:20:56.951978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.402 [2024-07-13 08:20:56.952004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.402 qpair failed and we were unable to recover it. 00:34:05.402 [2024-07-13 08:20:56.952171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.402 [2024-07-13 08:20:56.952200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.402 qpair failed and we were unable to recover it. 00:34:05.402 [2024-07-13 08:20:56.952402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.402 [2024-07-13 08:20:56.952427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.402 qpair failed and we were unable to recover it. 00:34:05.402 [2024-07-13 08:20:56.952578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.402 [2024-07-13 08:20:56.952603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.402 qpair failed and we were unable to recover it. 00:34:05.402 [2024-07-13 08:20:56.952730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.402 [2024-07-13 08:20:56.952761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.402 qpair failed and we were unable to recover it. 00:34:05.402 [2024-07-13 08:20:56.952954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.402 [2024-07-13 08:20:56.952984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.402 qpair failed and we were unable to recover it. 00:34:05.402 [2024-07-13 08:20:56.953146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.403 [2024-07-13 08:20:56.953174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.403 qpair failed and we were unable to recover it. 00:34:05.403 [2024-07-13 08:20:56.953337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.403 [2024-07-13 08:20:56.953365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.403 qpair failed and we were unable to recover it. 
00:34:05.403 [2024-07-13 08:20:56.953507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.403 [2024-07-13 08:20:56.953534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.403 qpair failed and we were unable to recover it. 00:34:05.403 [2024-07-13 08:20:56.953687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.403 [2024-07-13 08:20:56.953713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.403 qpair failed and we were unable to recover it. 00:34:05.403 [2024-07-13 08:20:56.953861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.403 [2024-07-13 08:20:56.953908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.403 qpair failed and we were unable to recover it. 00:34:05.403 [2024-07-13 08:20:56.954086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.403 [2024-07-13 08:20:56.954112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.403 qpair failed and we were unable to recover it. 00:34:05.403 [2024-07-13 08:20:56.954302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.403 [2024-07-13 08:20:56.954327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.403 qpair failed and we were unable to recover it. 00:34:05.403 [2024-07-13 08:20:56.954523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.403 [2024-07-13 08:20:56.954551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.403 qpair failed and we were unable to recover it. 00:34:05.403 [2024-07-13 08:20:56.954686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.403 [2024-07-13 08:20:56.954714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.403 qpair failed and we were unable to recover it. 00:34:05.403 [2024-07-13 08:20:56.954875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.403 [2024-07-13 08:20:56.954904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.403 qpair failed and we were unable to recover it. 00:34:05.403 [2024-07-13 08:20:56.955050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.403 [2024-07-13 08:20:56.955075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.403 qpair failed and we were unable to recover it. 00:34:05.403 [2024-07-13 08:20:56.955207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.403 [2024-07-13 08:20:56.955248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.403 qpair failed and we were unable to recover it. 
00:34:05.403 [2024-07-13 08:20:56.955375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.403 [2024-07-13 08:20:56.955403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.403 qpair failed and we were unable to recover it. 00:34:05.403 [2024-07-13 08:20:56.955577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.403 [2024-07-13 08:20:56.955603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.403 qpair failed and we were unable to recover it. 00:34:05.403 [2024-07-13 08:20:56.955779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.403 [2024-07-13 08:20:56.955805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.403 qpair failed and we were unable to recover it. 00:34:05.403 [2024-07-13 08:20:56.955979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.403 [2024-07-13 08:20:56.956008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.403 qpair failed and we were unable to recover it. 00:34:05.403 [2024-07-13 08:20:56.956173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.403 [2024-07-13 08:20:56.956201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.403 qpair failed and we were unable to recover it. 00:34:05.403 [2024-07-13 08:20:56.956369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.403 [2024-07-13 08:20:56.956397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.403 qpair failed and we were unable to recover it. 00:34:05.403 [2024-07-13 08:20:56.956565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.403 [2024-07-13 08:20:56.956590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.403 qpair failed and we were unable to recover it. 00:34:05.403 [2024-07-13 08:20:56.956739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.403 [2024-07-13 08:20:56.956766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.403 qpair failed and we were unable to recover it. 00:34:05.403 [2024-07-13 08:20:56.956914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.403 [2024-07-13 08:20:56.956956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.403 qpair failed and we were unable to recover it. 00:34:05.403 [2024-07-13 08:20:56.957125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.403 [2024-07-13 08:20:56.957153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.403 qpair failed and we were unable to recover it. 
00:34:05.403 [2024-07-13 08:20:56.957315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.403 [2024-07-13 08:20:56.957340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.403 qpair failed and we were unable to recover it. 00:34:05.403 [2024-07-13 08:20:56.957480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.403 [2024-07-13 08:20:56.957523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.403 qpair failed and we were unable to recover it. 00:34:05.403 [2024-07-13 08:20:56.957686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.403 [2024-07-13 08:20:56.957715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.403 qpair failed and we were unable to recover it. 00:34:05.403 [2024-07-13 08:20:56.957886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.403 [2024-07-13 08:20:56.957920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.403 qpair failed and we were unable to recover it. 00:34:05.403 [2024-07-13 08:20:56.958114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.403 [2024-07-13 08:20:56.958140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.403 qpair failed and we were unable to recover it. 00:34:05.403 [2024-07-13 08:20:56.958310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.403 [2024-07-13 08:20:56.958338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.403 qpair failed and we were unable to recover it. 00:34:05.403 [2024-07-13 08:20:56.958502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.403 [2024-07-13 08:20:56.958529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.403 qpair failed and we were unable to recover it. 00:34:05.403 [2024-07-13 08:20:56.958688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.403 [2024-07-13 08:20:56.958716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.403 qpair failed and we were unable to recover it. 00:34:05.403 [2024-07-13 08:20:56.958863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.403 [2024-07-13 08:20:56.958899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.403 qpair failed and we were unable to recover it. 00:34:05.403 [2024-07-13 08:20:56.959044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.403 [2024-07-13 08:20:56.959088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.403 qpair failed and we were unable to recover it. 
00:34:05.403 [2024-07-13 08:20:56.959233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.403 [2024-07-13 08:20:56.959262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.403 qpair failed and we were unable to recover it. 00:34:05.403 [2024-07-13 08:20:56.959451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.403 [2024-07-13 08:20:56.959479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.403 qpair failed and we were unable to recover it. 00:34:05.403 [2024-07-13 08:20:56.959650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.403 [2024-07-13 08:20:56.959675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.403 qpair failed and we were unable to recover it. 00:34:05.403 [2024-07-13 08:20:56.959795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.403 [2024-07-13 08:20:56.959820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.403 qpair failed and we were unable to recover it. 00:34:05.403 [2024-07-13 08:20:56.960030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.403 [2024-07-13 08:20:56.960058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.403 qpair failed and we were unable to recover it. 00:34:05.403 [2024-07-13 08:20:56.960227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.403 [2024-07-13 08:20:56.960255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.403 qpair failed and we were unable to recover it. 00:34:05.403 [2024-07-13 08:20:56.960427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.403 [2024-07-13 08:20:56.960452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.403 qpair failed and we were unable to recover it. 00:34:05.403 [2024-07-13 08:20:56.960624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.403 [2024-07-13 08:20:56.960652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.403 qpair failed and we were unable to recover it. 00:34:05.403 [2024-07-13 08:20:56.960839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.403 [2024-07-13 08:20:56.960874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.403 qpair failed and we were unable to recover it. 00:34:05.403 [2024-07-13 08:20:56.961036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.404 [2024-07-13 08:20:56.961065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.404 qpair failed and we were unable to recover it. 
00:34:05.404 [2024-07-13 08:20:56.961210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.404 [2024-07-13 08:20:56.961235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.404 qpair failed and we were unable to recover it. 00:34:05.404 [2024-07-13 08:20:56.961393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.404 [2024-07-13 08:20:56.961435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.404 qpair failed and we were unable to recover it. 00:34:05.404 [2024-07-13 08:20:56.961601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.404 [2024-07-13 08:20:56.961630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.404 qpair failed and we were unable to recover it. 00:34:05.404 [2024-07-13 08:20:56.961826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.404 [2024-07-13 08:20:56.961851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.404 qpair failed and we were unable to recover it. 00:34:05.404 [2024-07-13 08:20:56.962004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.404 [2024-07-13 08:20:56.962030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.404 qpair failed and we were unable to recover it. 00:34:05.404 [2024-07-13 08:20:56.962151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.404 [2024-07-13 08:20:56.962193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.404 qpair failed and we were unable to recover it. 00:34:05.404 [2024-07-13 08:20:56.962332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.404 [2024-07-13 08:20:56.962360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.404 qpair failed and we were unable to recover it. 00:34:05.404 [2024-07-13 08:20:56.962521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.404 [2024-07-13 08:20:56.962550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.404 qpair failed and we were unable to recover it. 00:34:05.404 [2024-07-13 08:20:56.962694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.404 [2024-07-13 08:20:56.962720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.404 qpair failed and we were unable to recover it. 00:34:05.404 [2024-07-13 08:20:56.962872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.404 [2024-07-13 08:20:56.962899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.404 qpair failed and we were unable to recover it. 
00:34:05.404 [2024-07-13 08:20:56.963087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.404 [2024-07-13 08:20:56.963116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.404 qpair failed and we were unable to recover it. 00:34:05.404 [2024-07-13 08:20:56.963314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.404 [2024-07-13 08:20:56.963342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.404 qpair failed and we were unable to recover it. 00:34:05.404 [2024-07-13 08:20:56.963543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.404 [2024-07-13 08:20:56.963569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.404 qpair failed and we were unable to recover it. 00:34:05.404 [2024-07-13 08:20:56.963737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.404 [2024-07-13 08:20:56.963764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.404 qpair failed and we were unable to recover it. 00:34:05.404 [2024-07-13 08:20:56.963903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.404 [2024-07-13 08:20:56.963932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.404 qpair failed and we were unable to recover it. 00:34:05.404 [2024-07-13 08:20:56.964084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.404 [2024-07-13 08:20:56.964112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.404 qpair failed and we were unable to recover it. 00:34:05.404 [2024-07-13 08:20:56.964238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.404 [2024-07-13 08:20:56.964263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.404 qpair failed and we were unable to recover it. 00:34:05.404 [2024-07-13 08:20:56.964411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.404 [2024-07-13 08:20:56.964436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.404 qpair failed and we were unable to recover it. 00:34:05.404 [2024-07-13 08:20:56.964604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.404 [2024-07-13 08:20:56.964632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.404 qpair failed and we were unable to recover it. 00:34:05.404 [2024-07-13 08:20:56.964781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.404 [2024-07-13 08:20:56.964806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.404 qpair failed and we were unable to recover it. 
00:34:05.404 [2024-07-13 08:20:56.964950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:05.404 [2024-07-13 08:20:56.964976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420
00:34:05.404 qpair failed and we were unable to recover it.
[The three-line failure record above appears 210 times in this span (2024-07-13 08:20:56.964950 through 08:20:57.005065, roughly 40 ms), identical except for the per-attempt timestamps: every connect() attempt for tqpair=0x7f8fec000b90 to 10.0.0.2 port 4420 fails with errno = 111 (ECONNREFUSED) and the qpair cannot recover. The duplicate records are elided; the final occurrence follows.]
00:34:05.409 [2024-07-13 08:20:57.005036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:05.409 [2024-07-13 08:20:57.005065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420
00:34:05.409 qpair failed and we were unable to recover it.
00:34:05.409 [2024-07-13 08:20:57.005196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.409 [2024-07-13 08:20:57.005223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.409 qpair failed and we were unable to recover it. 00:34:05.409 [2024-07-13 08:20:57.005394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.409 [2024-07-13 08:20:57.005419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.409 qpair failed and we were unable to recover it. 00:34:05.409 [2024-07-13 08:20:57.005594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.409 [2024-07-13 08:20:57.005619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.409 qpair failed and we were unable to recover it. 00:34:05.409 [2024-07-13 08:20:57.005763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.409 [2024-07-13 08:20:57.005792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.410 qpair failed and we were unable to recover it. 00:34:05.410 [2024-07-13 08:20:57.005931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.410 [2024-07-13 08:20:57.005960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.410 qpair failed and we were unable to recover it. 00:34:05.410 [2024-07-13 08:20:57.006154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.410 [2024-07-13 08:20:57.006182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.410 qpair failed and we were unable to recover it. 00:34:05.410 [2024-07-13 08:20:57.006354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.410 [2024-07-13 08:20:57.006379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.410 qpair failed and we were unable to recover it. 00:34:05.410 [2024-07-13 08:20:57.006509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.410 [2024-07-13 08:20:57.006535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.410 qpair failed and we were unable to recover it. 00:34:05.410 [2024-07-13 08:20:57.006676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.410 [2024-07-13 08:20:57.006702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.410 qpair failed and we were unable to recover it. 00:34:05.410 [2024-07-13 08:20:57.006877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.410 [2024-07-13 08:20:57.006906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.410 qpair failed and we were unable to recover it. 
00:34:05.410 [2024-07-13 08:20:57.007107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.410 [2024-07-13 08:20:57.007133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.410 qpair failed and we were unable to recover it. 00:34:05.410 [2024-07-13 08:20:57.007254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.410 [2024-07-13 08:20:57.007280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.410 qpair failed and we were unable to recover it. 00:34:05.410 [2024-07-13 08:20:57.007431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.410 [2024-07-13 08:20:57.007456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.410 qpair failed and we were unable to recover it. 00:34:05.410 [2024-07-13 08:20:57.007602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.410 [2024-07-13 08:20:57.007631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.410 qpair failed and we were unable to recover it. 00:34:05.410 [2024-07-13 08:20:57.007762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.410 [2024-07-13 08:20:57.007787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.410 qpair failed and we were unable to recover it. 00:34:05.410 [2024-07-13 08:20:57.007936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.410 [2024-07-13 08:20:57.007978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.410 qpair failed and we were unable to recover it. 00:34:05.410 [2024-07-13 08:20:57.008170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.410 [2024-07-13 08:20:57.008198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.410 qpair failed and we were unable to recover it. 00:34:05.410 [2024-07-13 08:20:57.008340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.410 [2024-07-13 08:20:57.008365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.410 qpair failed and we were unable to recover it. 00:34:05.410 [2024-07-13 08:20:57.008538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.410 [2024-07-13 08:20:57.008564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.410 qpair failed and we were unable to recover it. 00:34:05.410 [2024-07-13 08:20:57.008711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.410 [2024-07-13 08:20:57.008736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.410 qpair failed and we were unable to recover it. 
00:34:05.410 [2024-07-13 08:20:57.008920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.410 [2024-07-13 08:20:57.008949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.410 qpair failed and we were unable to recover it. 00:34:05.410 [2024-07-13 08:20:57.009111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.410 [2024-07-13 08:20:57.009140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.410 qpair failed and we were unable to recover it. 00:34:05.410 [2024-07-13 08:20:57.009284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.410 [2024-07-13 08:20:57.009310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.410 qpair failed and we were unable to recover it. 00:34:05.410 [2024-07-13 08:20:57.009486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.410 [2024-07-13 08:20:57.009529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.410 qpair failed and we were unable to recover it. 00:34:05.410 [2024-07-13 08:20:57.009678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.410 [2024-07-13 08:20:57.009708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.410 qpair failed and we were unable to recover it. 00:34:05.410 [2024-07-13 08:20:57.009846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.410 [2024-07-13 08:20:57.009883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.410 qpair failed and we were unable to recover it. 00:34:05.410 [2024-07-13 08:20:57.010060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.410 [2024-07-13 08:20:57.010086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.410 qpair failed and we were unable to recover it. 00:34:05.410 [2024-07-13 08:20:57.010287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.410 [2024-07-13 08:20:57.010315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.410 qpair failed and we were unable to recover it. 00:34:05.410 [2024-07-13 08:20:57.010504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.410 [2024-07-13 08:20:57.010533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.410 qpair failed and we were unable to recover it. 00:34:05.410 [2024-07-13 08:20:57.010664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.410 [2024-07-13 08:20:57.010692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.410 qpair failed and we were unable to recover it. 
00:34:05.410 [2024-07-13 08:20:57.010862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.410 [2024-07-13 08:20:57.010894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.410 qpair failed and we were unable to recover it. 00:34:05.410 [2024-07-13 08:20:57.011017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.410 [2024-07-13 08:20:57.011060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.410 qpair failed and we were unable to recover it. 00:34:05.410 [2024-07-13 08:20:57.011193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.410 [2024-07-13 08:20:57.011221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.410 qpair failed and we were unable to recover it. 00:34:05.410 [2024-07-13 08:20:57.011387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.410 [2024-07-13 08:20:57.011419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.410 qpair failed and we were unable to recover it. 00:34:05.410 [2024-07-13 08:20:57.011585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.410 [2024-07-13 08:20:57.011611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.410 qpair failed and we were unable to recover it. 00:34:05.410 [2024-07-13 08:20:57.011774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.410 [2024-07-13 08:20:57.011802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.410 qpair failed and we were unable to recover it. 00:34:05.410 [2024-07-13 08:20:57.011972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.410 [2024-07-13 08:20:57.012001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.410 qpair failed and we were unable to recover it. 00:34:05.410 [2024-07-13 08:20:57.012136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.410 [2024-07-13 08:20:57.012164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.410 qpair failed and we were unable to recover it. 00:34:05.410 [2024-07-13 08:20:57.012309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.410 [2024-07-13 08:20:57.012335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.410 qpair failed and we were unable to recover it. 00:34:05.410 [2024-07-13 08:20:57.012476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.410 [2024-07-13 08:20:57.012502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.410 qpair failed and we were unable to recover it. 
00:34:05.410 [2024-07-13 08:20:57.012702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.410 [2024-07-13 08:20:57.012730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.410 qpair failed and we were unable to recover it. 00:34:05.410 [2024-07-13 08:20:57.012888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.410 [2024-07-13 08:20:57.012918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.410 qpair failed and we were unable to recover it. 00:34:05.410 [2024-07-13 08:20:57.013055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.410 [2024-07-13 08:20:57.013081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.410 qpair failed and we were unable to recover it. 00:34:05.410 [2024-07-13 08:20:57.013232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.410 [2024-07-13 08:20:57.013257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.410 qpair failed and we were unable to recover it. 00:34:05.410 [2024-07-13 08:20:57.013446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.411 [2024-07-13 08:20:57.013474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.411 qpair failed and we were unable to recover it. 00:34:05.411 [2024-07-13 08:20:57.013640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.411 [2024-07-13 08:20:57.013669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.411 qpair failed and we were unable to recover it. 00:34:05.411 [2024-07-13 08:20:57.013813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.411 [2024-07-13 08:20:57.013838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.411 qpair failed and we were unable to recover it. 00:34:05.411 [2024-07-13 08:20:57.014021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.411 [2024-07-13 08:20:57.014047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.411 qpair failed and we were unable to recover it. 00:34:05.411 [2024-07-13 08:20:57.014195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.411 [2024-07-13 08:20:57.014224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.411 qpair failed and we were unable to recover it. 00:34:05.411 [2024-07-13 08:20:57.014353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.411 [2024-07-13 08:20:57.014382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.411 qpair failed and we were unable to recover it. 
00:34:05.411 [2024-07-13 08:20:57.014551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.411 [2024-07-13 08:20:57.014576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.411 qpair failed and we were unable to recover it. 00:34:05.411 [2024-07-13 08:20:57.014743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.411 [2024-07-13 08:20:57.014771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.411 qpair failed and we were unable to recover it. 00:34:05.411 [2024-07-13 08:20:57.014928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.411 [2024-07-13 08:20:57.014955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.411 qpair failed and we were unable to recover it. 00:34:05.411 [2024-07-13 08:20:57.015131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.411 [2024-07-13 08:20:57.015174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.411 qpair failed and we were unable to recover it. 00:34:05.411 [2024-07-13 08:20:57.015366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.411 [2024-07-13 08:20:57.015392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.411 qpair failed and we were unable to recover it. 00:34:05.411 [2024-07-13 08:20:57.015559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.411 [2024-07-13 08:20:57.015587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.411 qpair failed and we were unable to recover it. 00:34:05.411 [2024-07-13 08:20:57.015769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.411 [2024-07-13 08:20:57.015794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.411 qpair failed and we were unable to recover it. 00:34:05.411 [2024-07-13 08:20:57.015953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.411 [2024-07-13 08:20:57.015979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.411 qpair failed and we were unable to recover it. 00:34:05.411 [2024-07-13 08:20:57.016127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.411 [2024-07-13 08:20:57.016153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.411 qpair failed and we were unable to recover it. 00:34:05.411 [2024-07-13 08:20:57.016319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.411 [2024-07-13 08:20:57.016348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.411 qpair failed and we were unable to recover it. 
00:34:05.411 [2024-07-13 08:20:57.016513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.411 [2024-07-13 08:20:57.016541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.411 qpair failed and we were unable to recover it. 00:34:05.411 [2024-07-13 08:20:57.016704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.411 [2024-07-13 08:20:57.016733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.411 qpair failed and we were unable to recover it. 00:34:05.411 [2024-07-13 08:20:57.016901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.411 [2024-07-13 08:20:57.016927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.411 qpair failed and we were unable to recover it. 00:34:05.411 [2024-07-13 08:20:57.017077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.411 [2024-07-13 08:20:57.017103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.411 qpair failed and we were unable to recover it. 00:34:05.411 [2024-07-13 08:20:57.017257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.411 [2024-07-13 08:20:57.017301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.411 qpair failed and we were unable to recover it. 00:34:05.411 [2024-07-13 08:20:57.017500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.411 [2024-07-13 08:20:57.017525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.411 qpair failed and we were unable to recover it. 00:34:05.411 [2024-07-13 08:20:57.017669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.411 [2024-07-13 08:20:57.017695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.411 qpair failed and we were unable to recover it. 00:34:05.411 [2024-07-13 08:20:57.017857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.411 [2024-07-13 08:20:57.017891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.411 qpair failed and we were unable to recover it. 00:34:05.411 [2024-07-13 08:20:57.018058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.411 [2024-07-13 08:20:57.018086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.411 qpair failed and we were unable to recover it. 00:34:05.411 [2024-07-13 08:20:57.018225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.411 [2024-07-13 08:20:57.018253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.411 qpair failed and we were unable to recover it. 
00:34:05.411 [2024-07-13 08:20:57.018397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.411 [2024-07-13 08:20:57.018423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.411 qpair failed and we were unable to recover it. 00:34:05.411 [2024-07-13 08:20:57.018572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.411 [2024-07-13 08:20:57.018613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.411 qpair failed and we were unable to recover it. 00:34:05.411 [2024-07-13 08:20:57.018768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.411 [2024-07-13 08:20:57.018797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.411 qpair failed and we were unable to recover it. 00:34:05.411 [2024-07-13 08:20:57.018962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.411 [2024-07-13 08:20:57.018996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.411 qpair failed and we were unable to recover it. 00:34:05.411 [2024-07-13 08:20:57.019171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.411 [2024-07-13 08:20:57.019197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.411 qpair failed and we were unable to recover it. 00:34:05.411 [2024-07-13 08:20:57.019318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.411 [2024-07-13 08:20:57.019345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.411 qpair failed and we were unable to recover it. 00:34:05.411 [2024-07-13 08:20:57.019513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.411 [2024-07-13 08:20:57.019541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.411 qpair failed and we were unable to recover it. 00:34:05.411 [2024-07-13 08:20:57.019714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.411 [2024-07-13 08:20:57.019740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.411 qpair failed and we were unable to recover it. 00:34:05.411 [2024-07-13 08:20:57.019895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.411 [2024-07-13 08:20:57.019922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.411 qpair failed and we were unable to recover it. 00:34:05.411 [2024-07-13 08:20:57.020088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.412 [2024-07-13 08:20:57.020117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.412 qpair failed and we were unable to recover it. 
00:34:05.412 [2024-07-13 08:20:57.020276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.412 [2024-07-13 08:20:57.020305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.412 qpair failed and we were unable to recover it. 00:34:05.412 [2024-07-13 08:20:57.020476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.412 [2024-07-13 08:20:57.020501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.412 qpair failed and we were unable to recover it. 00:34:05.412 [2024-07-13 08:20:57.020645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.412 [2024-07-13 08:20:57.020670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.412 qpair failed and we were unable to recover it. 00:34:05.412 [2024-07-13 08:20:57.020793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.412 [2024-07-13 08:20:57.020834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.412 qpair failed and we were unable to recover it. 00:34:05.412 [2024-07-13 08:20:57.020995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.412 [2024-07-13 08:20:57.021023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.412 qpair failed and we were unable to recover it. 00:34:05.412 [2024-07-13 08:20:57.021195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.412 [2024-07-13 08:20:57.021224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.412 qpair failed and we were unable to recover it. 00:34:05.412 [2024-07-13 08:20:57.021377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.412 [2024-07-13 08:20:57.021402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.412 qpair failed and we were unable to recover it. 00:34:05.412 [2024-07-13 08:20:57.021589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.412 [2024-07-13 08:20:57.021615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.412 qpair failed and we were unable to recover it. 00:34:05.412 [2024-07-13 08:20:57.021791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.412 [2024-07-13 08:20:57.021821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.412 qpair failed and we were unable to recover it. 00:34:05.412 [2024-07-13 08:20:57.021983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.412 [2024-07-13 08:20:57.022012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.412 qpair failed and we were unable to recover it. 
00:34:05.412 [2024-07-13 08:20:57.022212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.412 [2024-07-13 08:20:57.022238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.412 qpair failed and we were unable to recover it. 00:34:05.412 [2024-07-13 08:20:57.022410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.412 [2024-07-13 08:20:57.022437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.412 qpair failed and we were unable to recover it. 00:34:05.412 [2024-07-13 08:20:57.022593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.412 [2024-07-13 08:20:57.022621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.412 qpair failed and we were unable to recover it. 00:34:05.412 [2024-07-13 08:20:57.022782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.412 [2024-07-13 08:20:57.022810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.412 qpair failed and we were unable to recover it. 00:34:05.412 [2024-07-13 08:20:57.022981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.412 [2024-07-13 08:20:57.023007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.412 qpair failed and we were unable to recover it. 00:34:05.412 [2024-07-13 08:20:57.023153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.412 [2024-07-13 08:20:57.023196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.412 qpair failed and we were unable to recover it. 00:34:05.412 [2024-07-13 08:20:57.023336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.412 [2024-07-13 08:20:57.023363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.412 qpair failed and we were unable to recover it. 00:34:05.412 [2024-07-13 08:20:57.023559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.412 [2024-07-13 08:20:57.023584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.412 qpair failed and we were unable to recover it. 00:34:05.412 [2024-07-13 08:20:57.023732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.412 [2024-07-13 08:20:57.023758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.412 qpair failed and we were unable to recover it. 00:34:05.412 [2024-07-13 08:20:57.023904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.412 [2024-07-13 08:20:57.023930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.412 qpair failed and we were unable to recover it. 
00:34:05.412 [2024-07-13 08:20:57.024074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.412 [2024-07-13 08:20:57.024102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.412 qpair failed and we were unable to recover it. 00:34:05.412 [2024-07-13 08:20:57.024268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.412 [2024-07-13 08:20:57.024298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.412 qpair failed and we were unable to recover it. 00:34:05.412 [2024-07-13 08:20:57.024472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.412 [2024-07-13 08:20:57.024498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.412 qpair failed and we were unable to recover it. 00:34:05.412 [2024-07-13 08:20:57.024652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.412 [2024-07-13 08:20:57.024677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.412 qpair failed and we were unable to recover it. 00:34:05.412 [2024-07-13 08:20:57.024828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.412 [2024-07-13 08:20:57.024853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.412 qpair failed and we were unable to recover it. 00:34:05.412 [2024-07-13 08:20:57.025037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.412 [2024-07-13 08:20:57.025065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.412 qpair failed and we were unable to recover it. 00:34:05.412 [2024-07-13 08:20:57.025202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.412 [2024-07-13 08:20:57.025227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.412 qpair failed and we were unable to recover it. 00:34:05.412 [2024-07-13 08:20:57.025380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.412 [2024-07-13 08:20:57.025405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.412 qpair failed and we were unable to recover it. 00:34:05.412 [2024-07-13 08:20:57.025577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.412 [2024-07-13 08:20:57.025602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.412 qpair failed and we were unable to recover it. 00:34:05.412 [2024-07-13 08:20:57.025783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.412 [2024-07-13 08:20:57.025808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.412 qpair failed and we were unable to recover it. 
00:34:05.412 [2024-07-13 08:20:57.025959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.412 [2024-07-13 08:20:57.025986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.412 qpair failed and we were unable to recover it. 00:34:05.412 [2024-07-13 08:20:57.026181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.412 [2024-07-13 08:20:57.026209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.412 qpair failed and we were unable to recover it. 00:34:05.412 [2024-07-13 08:20:57.026374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.412 [2024-07-13 08:20:57.026403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.412 qpair failed and we were unable to recover it. 00:34:05.412 [2024-07-13 08:20:57.026566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.412 [2024-07-13 08:20:57.026599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.412 qpair failed and we were unable to recover it. 00:34:05.412 [2024-07-13 08:20:57.026770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.412 [2024-07-13 08:20:57.026795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.412 qpair failed and we were unable to recover it. 00:34:05.412 [2024-07-13 08:20:57.027000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.412 [2024-07-13 08:20:57.027029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.412 qpair failed and we were unable to recover it. 00:34:05.412 [2024-07-13 08:20:57.027192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.412 [2024-07-13 08:20:57.027221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.412 qpair failed and we were unable to recover it. 00:34:05.412 [2024-07-13 08:20:57.027350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.412 [2024-07-13 08:20:57.027378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.412 qpair failed and we were unable to recover it. 00:34:05.412 [2024-07-13 08:20:57.027526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.412 [2024-07-13 08:20:57.027553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.412 qpair failed and we were unable to recover it. 00:34:05.412 [2024-07-13 08:20:57.027708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.412 [2024-07-13 08:20:57.027733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.412 qpair failed and we were unable to recover it. 
00:34:05.413 [2024-07-13 08:20:57.027884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.413 [2024-07-13 08:20:57.027910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.413 qpair failed and we were unable to recover it. 00:34:05.413 [2024-07-13 08:20:57.028055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.413 [2024-07-13 08:20:57.028080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.413 qpair failed and we were unable to recover it. 00:34:05.413 [2024-07-13 08:20:57.028231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.413 [2024-07-13 08:20:57.028257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.413 qpair failed and we were unable to recover it. 00:34:05.413 [2024-07-13 08:20:57.028399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.413 [2024-07-13 08:20:57.028431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.413 qpair failed and we were unable to recover it. 00:34:05.413 [2024-07-13 08:20:57.028603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.413 [2024-07-13 08:20:57.028631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.413 qpair failed and we were unable to recover it. 00:34:05.413 [2024-07-13 08:20:57.028796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.413 [2024-07-13 08:20:57.028824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.413 qpair failed and we were unable to recover it. 00:34:05.413 [2024-07-13 08:20:57.029031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.413 [2024-07-13 08:20:57.029057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.413 qpair failed and we were unable to recover it. 00:34:05.413 [2024-07-13 08:20:57.029205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.413 [2024-07-13 08:20:57.029234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.413 qpair failed and we were unable to recover it. 00:34:05.413 [2024-07-13 08:20:57.029390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.413 [2024-07-13 08:20:57.029416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.413 qpair failed and we were unable to recover it. 00:34:05.413 [2024-07-13 08:20:57.029594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.413 [2024-07-13 08:20:57.029636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.413 qpair failed and we were unable to recover it. 
00:34:05.413 [2024-07-13 08:20:57.029812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:05.413 [2024-07-13 08:20:57.029838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420
00:34:05.413 qpair failed and we were unable to recover it.
00:34:05.413 [... the same three-line sequence — connect() failed, errno = 111 / sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it — repeats continuously from 08:20:57.029 through 08:20:57.069; duplicate entries elided ...]
00:34:05.418 [2024-07-13 08:20:57.069815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:05.418 [2024-07-13 08:20:57.069840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420
00:34:05.418 qpair failed and we were unable to recover it.
00:34:05.418 [2024-07-13 08:20:57.069970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.418 [2024-07-13 08:20:57.070001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.418 qpair failed and we were unable to recover it. 00:34:05.418 [2024-07-13 08:20:57.070132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.418 [2024-07-13 08:20:57.070159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.418 qpair failed and we were unable to recover it. 00:34:05.418 [2024-07-13 08:20:57.070309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.418 [2024-07-13 08:20:57.070334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.418 qpair failed and we were unable to recover it. 00:34:05.418 [2024-07-13 08:20:57.070499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.418 [2024-07-13 08:20:57.070528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.418 qpair failed and we were unable to recover it. 00:34:05.418 [2024-07-13 08:20:57.070655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.418 [2024-07-13 08:20:57.070683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.418 qpair failed and we were unable to recover it. 00:34:05.418 [2024-07-13 08:20:57.070847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.418 [2024-07-13 08:20:57.070880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.418 qpair failed and we were unable to recover it. 00:34:05.418 [2024-07-13 08:20:57.071055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.418 [2024-07-13 08:20:57.071080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.418 qpair failed and we were unable to recover it. 00:34:05.418 [2024-07-13 08:20:57.071233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.418 [2024-07-13 08:20:57.071274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.418 qpair failed and we were unable to recover it. 00:34:05.418 [2024-07-13 08:20:57.071440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.418 [2024-07-13 08:20:57.071468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.418 qpair failed and we were unable to recover it. 00:34:05.418 [2024-07-13 08:20:57.071661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.418 [2024-07-13 08:20:57.071689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.418 qpair failed and we were unable to recover it. 
00:34:05.418 [2024-07-13 08:20:57.071863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.418 [2024-07-13 08:20:57.071895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.418 qpair failed and we were unable to recover it. 00:34:05.418 [2024-07-13 08:20:57.072065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.418 [2024-07-13 08:20:57.072094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.418 qpair failed and we were unable to recover it. 00:34:05.418 [2024-07-13 08:20:57.072225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.418 [2024-07-13 08:20:57.072254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.418 qpair failed and we were unable to recover it. 00:34:05.418 [2024-07-13 08:20:57.072397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.418 [2024-07-13 08:20:57.072425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.418 qpair failed and we were unable to recover it. 00:34:05.418 [2024-07-13 08:20:57.072596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.418 [2024-07-13 08:20:57.072622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.418 qpair failed and we were unable to recover it. 00:34:05.418 [2024-07-13 08:20:57.072775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.418 [2024-07-13 08:20:57.072801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.418 qpair failed and we were unable to recover it. 00:34:05.418 [2024-07-13 08:20:57.072980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.419 [2024-07-13 08:20:57.073010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.419 qpair failed and we were unable to recover it. 00:34:05.419 [2024-07-13 08:20:57.073169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.419 [2024-07-13 08:20:57.073198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.419 qpair failed and we were unable to recover it. 00:34:05.419 [2024-07-13 08:20:57.073344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.419 [2024-07-13 08:20:57.073369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.419 qpair failed and we were unable to recover it. 00:34:05.419 [2024-07-13 08:20:57.073513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.419 [2024-07-13 08:20:57.073538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.419 qpair failed and we were unable to recover it. 
00:34:05.419 [2024-07-13 08:20:57.073692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.419 [2024-07-13 08:20:57.073718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.419 qpair failed and we were unable to recover it. 00:34:05.419 [2024-07-13 08:20:57.073891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.419 [2024-07-13 08:20:57.073920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.419 qpair failed and we were unable to recover it. 00:34:05.419 [2024-07-13 08:20:57.074096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.419 [2024-07-13 08:20:57.074122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.419 qpair failed and we were unable to recover it. 00:34:05.419 [2024-07-13 08:20:57.074273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.419 [2024-07-13 08:20:57.074299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.419 qpair failed and we were unable to recover it. 00:34:05.419 [2024-07-13 08:20:57.074472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.419 [2024-07-13 08:20:57.074501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.419 qpair failed and we were unable to recover it. 00:34:05.419 [2024-07-13 08:20:57.074695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.419 [2024-07-13 08:20:57.074724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.419 qpair failed and we were unable to recover it. 00:34:05.419 [2024-07-13 08:20:57.074894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.419 [2024-07-13 08:20:57.074920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.419 qpair failed and we were unable to recover it. 00:34:05.419 [2024-07-13 08:20:57.075074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.419 [2024-07-13 08:20:57.075101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.419 qpair failed and we were unable to recover it. 00:34:05.419 [2024-07-13 08:20:57.075256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.419 [2024-07-13 08:20:57.075300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.419 qpair failed and we were unable to recover it. 00:34:05.419 [2024-07-13 08:20:57.075438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.419 [2024-07-13 08:20:57.075466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.419 qpair failed and we were unable to recover it. 
00:34:05.419 [2024-07-13 08:20:57.075643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.419 [2024-07-13 08:20:57.075668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.419 qpair failed and we were unable to recover it. 00:34:05.419 [2024-07-13 08:20:57.075811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.419 [2024-07-13 08:20:57.075837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.419 qpair failed and we were unable to recover it. 00:34:05.419 [2024-07-13 08:20:57.076003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.419 [2024-07-13 08:20:57.076031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.419 qpair failed and we were unable to recover it. 00:34:05.419 [2024-07-13 08:20:57.076207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.419 [2024-07-13 08:20:57.076232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.419 qpair failed and we were unable to recover it. 00:34:05.419 [2024-07-13 08:20:57.076383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.419 [2024-07-13 08:20:57.076409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.419 qpair failed and we were unable to recover it. 00:34:05.419 [2024-07-13 08:20:57.076559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.419 [2024-07-13 08:20:57.076584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.419 qpair failed and we were unable to recover it. 00:34:05.419 [2024-07-13 08:20:57.076734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.419 [2024-07-13 08:20:57.076759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.419 qpair failed and we were unable to recover it. 00:34:05.419 [2024-07-13 08:20:57.076915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.419 [2024-07-13 08:20:57.076942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.419 qpair failed and we were unable to recover it. 00:34:05.419 [2024-07-13 08:20:57.077094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.419 [2024-07-13 08:20:57.077120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.419 qpair failed and we were unable to recover it. 00:34:05.419 [2024-07-13 08:20:57.077286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.419 [2024-07-13 08:20:57.077315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.419 qpair failed and we were unable to recover it. 
00:34:05.419 [2024-07-13 08:20:57.077496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.419 [2024-07-13 08:20:57.077526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.419 qpair failed and we were unable to recover it. 00:34:05.419 [2024-07-13 08:20:57.077646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.419 [2024-07-13 08:20:57.077671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.419 qpair failed and we were unable to recover it. 00:34:05.419 [2024-07-13 08:20:57.077818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.419 [2024-07-13 08:20:57.077844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.419 qpair failed and we were unable to recover it. 00:34:05.419 [2024-07-13 08:20:57.077978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.419 [2024-07-13 08:20:57.078004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.419 qpair failed and we were unable to recover it. 00:34:05.419 [2024-07-13 08:20:57.078155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.419 [2024-07-13 08:20:57.078196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.419 qpair failed and we were unable to recover it. 00:34:05.419 [2024-07-13 08:20:57.078332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.419 [2024-07-13 08:20:57.078361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.419 qpair failed and we were unable to recover it. 00:34:05.419 [2024-07-13 08:20:57.078530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.419 [2024-07-13 08:20:57.078555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.419 qpair failed and we were unable to recover it. 00:34:05.419 [2024-07-13 08:20:57.078750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.419 [2024-07-13 08:20:57.078779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.419 qpair failed and we were unable to recover it. 00:34:05.419 [2024-07-13 08:20:57.078938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.419 [2024-07-13 08:20:57.078967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.419 qpair failed and we were unable to recover it. 00:34:05.419 [2024-07-13 08:20:57.079125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.419 [2024-07-13 08:20:57.079154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.419 qpair failed and we were unable to recover it. 
00:34:05.419 [2024-07-13 08:20:57.079359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.419 [2024-07-13 08:20:57.079385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.419 qpair failed and we were unable to recover it. 00:34:05.419 [2024-07-13 08:20:57.079520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.419 [2024-07-13 08:20:57.079548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.419 qpair failed and we were unable to recover it. 00:34:05.419 [2024-07-13 08:20:57.079739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.419 [2024-07-13 08:20:57.079767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.419 qpair failed and we were unable to recover it. 00:34:05.419 [2024-07-13 08:20:57.079937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.419 [2024-07-13 08:20:57.079966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.419 qpair failed and we were unable to recover it. 00:34:05.419 [2024-07-13 08:20:57.080119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.419 [2024-07-13 08:20:57.080145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.419 qpair failed and we were unable to recover it. 00:34:05.419 [2024-07-13 08:20:57.080291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.419 [2024-07-13 08:20:57.080333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.419 qpair failed and we were unable to recover it. 00:34:05.419 [2024-07-13 08:20:57.080523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.419 [2024-07-13 08:20:57.080551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.419 qpair failed and we were unable to recover it. 00:34:05.420 [2024-07-13 08:20:57.080744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.420 [2024-07-13 08:20:57.080773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.420 qpair failed and we were unable to recover it. 00:34:05.420 [2024-07-13 08:20:57.080954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.420 [2024-07-13 08:20:57.080980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.420 qpair failed and we were unable to recover it. 00:34:05.420 [2024-07-13 08:20:57.081112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.420 [2024-07-13 08:20:57.081138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.420 qpair failed and we were unable to recover it. 
00:34:05.420 [2024-07-13 08:20:57.081285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.420 [2024-07-13 08:20:57.081311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.420 qpair failed and we were unable to recover it. 00:34:05.420 [2024-07-13 08:20:57.081461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.420 [2024-07-13 08:20:57.081486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.420 qpair failed and we were unable to recover it. 00:34:05.420 [2024-07-13 08:20:57.081678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.420 [2024-07-13 08:20:57.081703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.420 qpair failed and we were unable to recover it. 00:34:05.420 [2024-07-13 08:20:57.081849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.420 [2024-07-13 08:20:57.081896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.420 qpair failed and we were unable to recover it. 00:34:05.420 [2024-07-13 08:20:57.082098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.420 [2024-07-13 08:20:57.082124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.420 qpair failed and we were unable to recover it. 00:34:05.420 [2024-07-13 08:20:57.082275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.420 [2024-07-13 08:20:57.082300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.420 qpair failed and we were unable to recover it. 00:34:05.420 [2024-07-13 08:20:57.082425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.420 [2024-07-13 08:20:57.082450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.420 qpair failed and we were unable to recover it. 00:34:05.420 [2024-07-13 08:20:57.082651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.420 [2024-07-13 08:20:57.082680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.420 qpair failed and we were unable to recover it. 00:34:05.420 [2024-07-13 08:20:57.082817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.420 [2024-07-13 08:20:57.082845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.420 qpair failed and we were unable to recover it. 00:34:05.420 [2024-07-13 08:20:57.082984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.420 [2024-07-13 08:20:57.083013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.420 qpair failed and we were unable to recover it. 
00:34:05.420 [2024-07-13 08:20:57.083154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.420 [2024-07-13 08:20:57.083179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.420 qpair failed and we were unable to recover it. 00:34:05.420 [2024-07-13 08:20:57.083329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.420 [2024-07-13 08:20:57.083371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.420 qpair failed and we were unable to recover it. 00:34:05.420 [2024-07-13 08:20:57.083568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.420 [2024-07-13 08:20:57.083593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.420 qpair failed and we were unable to recover it. 00:34:05.420 [2024-07-13 08:20:57.083742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.420 [2024-07-13 08:20:57.083767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.420 qpair failed and we were unable to recover it. 00:34:05.420 [2024-07-13 08:20:57.083920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.420 [2024-07-13 08:20:57.083946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.420 qpair failed and we were unable to recover it. 00:34:05.420 [2024-07-13 08:20:57.084119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.420 [2024-07-13 08:20:57.084147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.420 qpair failed and we were unable to recover it. 00:34:05.420 [2024-07-13 08:20:57.084313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.420 [2024-07-13 08:20:57.084339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.420 qpair failed and we were unable to recover it. 00:34:05.420 [2024-07-13 08:20:57.084517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.420 [2024-07-13 08:20:57.084558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.420 qpair failed and we were unable to recover it. 00:34:05.420 [2024-07-13 08:20:57.084697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.420 [2024-07-13 08:20:57.084722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.420 qpair failed and we were unable to recover it. 00:34:05.420 [2024-07-13 08:20:57.084899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.420 [2024-07-13 08:20:57.084925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.420 qpair failed and we were unable to recover it. 
00:34:05.420 [2024-07-13 08:20:57.085049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.420 [2024-07-13 08:20:57.085080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.420 qpair failed and we were unable to recover it. 00:34:05.420 [2024-07-13 08:20:57.085257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.420 [2024-07-13 08:20:57.085283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.420 qpair failed and we were unable to recover it. 00:34:05.420 [2024-07-13 08:20:57.085440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.420 [2024-07-13 08:20:57.085466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.420 qpair failed and we were unable to recover it. 00:34:05.420 [2024-07-13 08:20:57.085583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.420 [2024-07-13 08:20:57.085609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.420 qpair failed and we were unable to recover it. 00:34:05.420 [2024-07-13 08:20:57.085755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.420 [2024-07-13 08:20:57.085781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.420 qpair failed and we were unable to recover it. 00:34:05.420 [2024-07-13 08:20:57.085950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.420 [2024-07-13 08:20:57.085979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.420 qpair failed and we were unable to recover it. 00:34:05.420 [2024-07-13 08:20:57.086130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.420 [2024-07-13 08:20:57.086156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.420 qpair failed and we were unable to recover it. 00:34:05.420 [2024-07-13 08:20:57.086281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.420 [2024-07-13 08:20:57.086307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.420 qpair failed and we were unable to recover it. 00:34:05.420 [2024-07-13 08:20:57.086514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.420 [2024-07-13 08:20:57.086543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.420 qpair failed and we were unable to recover it. 00:34:05.420 [2024-07-13 08:20:57.086702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.420 [2024-07-13 08:20:57.086730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.420 qpair failed and we were unable to recover it. 
00:34:05.420 [2024-07-13 08:20:57.086880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.420 [2024-07-13 08:20:57.086906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.420 qpair failed and we were unable to recover it. 00:34:05.420 [2024-07-13 08:20:57.087096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.420 [2024-07-13 08:20:57.087124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.420 qpair failed and we were unable to recover it. 00:34:05.420 [2024-07-13 08:20:57.087264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.421 [2024-07-13 08:20:57.087292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.421 qpair failed and we were unable to recover it. 00:34:05.421 [2024-07-13 08:20:57.087434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.421 [2024-07-13 08:20:57.087461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.421 qpair failed and we were unable to recover it. 00:34:05.421 [2024-07-13 08:20:57.087615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.421 [2024-07-13 08:20:57.087640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.421 qpair failed and we were unable to recover it. 00:34:05.421 [2024-07-13 08:20:57.087823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.421 [2024-07-13 08:20:57.087870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.421 qpair failed and we were unable to recover it. 00:34:05.701 [2024-07-13 08:20:57.088065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.701 [2024-07-13 08:20:57.088091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.701 qpair failed and we were unable to recover it. 00:34:05.701 [2024-07-13 08:20:57.088263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.701 [2024-07-13 08:20:57.088291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.701 qpair failed and we were unable to recover it. 00:34:05.701 [2024-07-13 08:20:57.088464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.701 [2024-07-13 08:20:57.088490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.701 qpair failed and we were unable to recover it. 00:34:05.701 [2024-07-13 08:20:57.088655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.701 [2024-07-13 08:20:57.088683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.701 qpair failed and we were unable to recover it. 
00:34:05.701 [2024-07-13 08:20:57.088843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.701 [2024-07-13 08:20:57.088879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.701 qpair failed and we were unable to recover it. 00:34:05.701 [2024-07-13 08:20:57.089026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.701 [2024-07-13 08:20:57.089054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.701 qpair failed and we were unable to recover it. 00:34:05.701 [2024-07-13 08:20:57.089243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.701 [2024-07-13 08:20:57.089268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.701 qpair failed and we were unable to recover it. 00:34:05.701 [2024-07-13 08:20:57.089443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.701 [2024-07-13 08:20:57.089468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.701 qpair failed and we were unable to recover it. 00:34:05.701 [2024-07-13 08:20:57.089660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.701 [2024-07-13 08:20:57.089688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.701 qpair failed and we were unable to recover it. 00:34:05.701 [2024-07-13 08:20:57.089818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.701 [2024-07-13 08:20:57.089846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.701 qpair failed and we were unable to recover it. 00:34:05.701 [2024-07-13 08:20:57.090026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.701 [2024-07-13 08:20:57.090052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.701 qpair failed and we were unable to recover it. 00:34:05.701 [2024-07-13 08:20:57.090207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.701 [2024-07-13 08:20:57.090249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.701 qpair failed and we were unable to recover it. 00:34:05.701 [2024-07-13 08:20:57.090441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.701 [2024-07-13 08:20:57.090469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.701 qpair failed and we were unable to recover it. 00:34:05.701 [2024-07-13 08:20:57.090628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.701 [2024-07-13 08:20:57.090655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.701 qpair failed and we were unable to recover it. 
00:34:05.701 [2024-07-13 08:20:57.090789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.701 [2024-07-13 08:20:57.090814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.701 qpair failed and we were unable to recover it. 00:34:05.701 [2024-07-13 08:20:57.090977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.701 [2024-07-13 08:20:57.091020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.701 qpair failed and we were unable to recover it. 00:34:05.701 [2024-07-13 08:20:57.091186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.701 [2024-07-13 08:20:57.091214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.701 qpair failed and we were unable to recover it. 00:34:05.701 [2024-07-13 08:20:57.091403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.701 [2024-07-13 08:20:57.091431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.701 qpair failed and we were unable to recover it. 00:34:05.701 [2024-07-13 08:20:57.091599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.701 [2024-07-13 08:20:57.091624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.701 qpair failed and we were unable to recover it. 00:34:05.701 [2024-07-13 08:20:57.091794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.701 [2024-07-13 08:20:57.091822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.701 qpair failed and we were unable to recover it. 00:34:05.701 [2024-07-13 08:20:57.092011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.701 [2024-07-13 08:20:57.092037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.701 qpair failed and we were unable to recover it. 00:34:05.701 [2024-07-13 08:20:57.092177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.701 [2024-07-13 08:20:57.092202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.701 qpair failed and we were unable to recover it. 00:34:05.701 [2024-07-13 08:20:57.092354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.701 [2024-07-13 08:20:57.092380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.701 qpair failed and we were unable to recover it. 00:34:05.701 [2024-07-13 08:20:57.092557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.701 [2024-07-13 08:20:57.092585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.701 qpair failed and we were unable to recover it. 
00:34:05.701 [2024-07-13 08:20:57.092744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.701 [2024-07-13 08:20:57.092777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.701 qpair failed and we were unable to recover it. 00:34:05.701 [2024-07-13 08:20:57.092949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.702 [2024-07-13 08:20:57.092978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.702 qpair failed and we were unable to recover it. 00:34:05.702 [2024-07-13 08:20:57.093117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.702 [2024-07-13 08:20:57.093142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.702 qpair failed and we were unable to recover it. 00:34:05.702 [2024-07-13 08:20:57.093260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.702 [2024-07-13 08:20:57.093287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.702 qpair failed and we were unable to recover it. 00:34:05.702 [2024-07-13 08:20:57.093463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.702 [2024-07-13 08:20:57.093505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.702 qpair failed and we were unable to recover it. 00:34:05.702 [2024-07-13 08:20:57.093665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.702 [2024-07-13 08:20:57.093693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.702 qpair failed and we were unable to recover it. 00:34:05.702 [2024-07-13 08:20:57.093832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.702 [2024-07-13 08:20:57.093858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.702 qpair failed and we were unable to recover it. 00:34:05.702 [2024-07-13 08:20:57.093989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.702 [2024-07-13 08:20:57.094015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.702 qpair failed and we were unable to recover it. 00:34:05.702 [2024-07-13 08:20:57.094192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.702 [2024-07-13 08:20:57.094221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.702 qpair failed and we were unable to recover it. 00:34:05.702 [2024-07-13 08:20:57.094424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.702 [2024-07-13 08:20:57.094449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.702 qpair failed and we were unable to recover it. 
00:34:05.702 [2024-07-13 08:20:57.094626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.702 [2024-07-13 08:20:57.094651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.702 qpair failed and we were unable to recover it.
00:34:05.703 [the same three-line sequence repeats for every subsequent connection retry on tqpair=0x7f8fec000b90 from 08:20:57.094766 through 08:20:57.107819, each attempt ending with "qpair failed and we were unable to recover it."]
00:34:05.703 [2024-07-13 08:20:57.107882] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xadb5b0 (9): Bad file descriptor
00:34:05.703 [2024-07-13 08:20:57.108079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.703 [2024-07-13 08:20:57.108115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:05.703 qpair failed and we were unable to recover it.
00:34:05.704 [the same sequence on tqpair=0x7f8fdc000b90 repeated at 08:20:57.108330 and 08:20:57.108555]
00:34:05.704 [2024-07-13 08:20:57.108767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.704 [2024-07-13 08:20:57.108797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.704 qpair failed and we were unable to recover it.
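The "(9): Bad file descriptor" flush failure above is errno 9 (EBADF); it indicates the socket descriptor backing tqpair=0xadb5b0 was no longer valid (typically already closed) by the time nvme_tcp_qpair_process_completions tried to flush it. A minimal POSIX C sketch (not SPDK code) that reproduces the same errno by operating on a closed descriptor:

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    int fd = dup(STDOUT_FILENO); /* obtain a valid descriptor ... */
    close(fd);                   /* ... then invalidate it */

    /* Any further I/O on fd now fails with errno 9 (EBADF),
     * the same code shown in the flush error above. */
    if (write(fd, "x", 1) < 0)
        printf("write failed, errno = %d (%s)\n", errno, strerror(errno));
    return 0;
}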
00:34:05.704 [the same three-line sequence -- connect() failed, errno = 111 / sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it. -- continues for every retry from 08:20:57.108977 through 08:20:57.133996]
00:34:05.707 [2024-07-13 08:20:57.134160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.707 [2024-07-13 08:20:57.134185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.707 qpair failed and we were unable to recover it. 00:34:05.707 [2024-07-13 08:20:57.134316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.707 [2024-07-13 08:20:57.134342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.707 qpair failed and we were unable to recover it. 00:34:05.707 [2024-07-13 08:20:57.134517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.707 [2024-07-13 08:20:57.134542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.707 qpair failed and we were unable to recover it. 00:34:05.707 [2024-07-13 08:20:57.134684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.707 [2024-07-13 08:20:57.134709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.707 qpair failed and we were unable to recover it. 00:34:05.707 [2024-07-13 08:20:57.134893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.707 [2024-07-13 08:20:57.134935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.707 qpair failed and we were unable to recover it. 00:34:05.707 [2024-07-13 08:20:57.135128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.707 [2024-07-13 08:20:57.135157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.707 qpair failed and we were unable to recover it. 00:34:05.707 [2024-07-13 08:20:57.135322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.707 [2024-07-13 08:20:57.135348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.707 qpair failed and we were unable to recover it. 00:34:05.707 [2024-07-13 08:20:57.135476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.707 [2024-07-13 08:20:57.135501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.707 qpair failed and we were unable to recover it. 00:34:05.707 [2024-07-13 08:20:57.135651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.707 [2024-07-13 08:20:57.135676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.707 qpair failed and we were unable to recover it. 00:34:05.707 [2024-07-13 08:20:57.135801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.707 [2024-07-13 08:20:57.135827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.707 qpair failed and we were unable to recover it. 
00:34:05.707 [2024-07-13 08:20:57.135991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.707 [2024-07-13 08:20:57.136018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.707 qpair failed and we were unable to recover it. 00:34:05.707 [2024-07-13 08:20:57.136230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.707 [2024-07-13 08:20:57.136258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.707 qpair failed and we were unable to recover it. 00:34:05.707 [2024-07-13 08:20:57.136423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.707 [2024-07-13 08:20:57.136449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.707 qpair failed and we were unable to recover it. 00:34:05.707 [2024-07-13 08:20:57.136618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.707 [2024-07-13 08:20:57.136647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.707 qpair failed and we were unable to recover it. 00:34:05.707 [2024-07-13 08:20:57.136791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.707 [2024-07-13 08:20:57.136820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.707 qpair failed and we were unable to recover it. 00:34:05.707 [2024-07-13 08:20:57.136999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.707 [2024-07-13 08:20:57.137026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.707 qpair failed and we were unable to recover it. 00:34:05.707 [2024-07-13 08:20:57.137176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.707 [2024-07-13 08:20:57.137203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.707 qpair failed and we were unable to recover it. 00:34:05.707 [2024-07-13 08:20:57.137352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.707 [2024-07-13 08:20:57.137377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.708 qpair failed and we were unable to recover it. 00:34:05.708 [2024-07-13 08:20:57.137493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.708 [2024-07-13 08:20:57.137518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.708 qpair failed and we were unable to recover it. 00:34:05.708 [2024-07-13 08:20:57.137666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.708 [2024-07-13 08:20:57.137691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.708 qpair failed and we were unable to recover it. 
00:34:05.708 [2024-07-13 08:20:57.137818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.708 [2024-07-13 08:20:57.137844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.708 qpair failed and we were unable to recover it. 00:34:05.708 [2024-07-13 08:20:57.138048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.708 [2024-07-13 08:20:57.138075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.708 qpair failed and we were unable to recover it. 00:34:05.708 [2024-07-13 08:20:57.138277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.708 [2024-07-13 08:20:57.138305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.708 qpair failed and we were unable to recover it. 00:34:05.708 [2024-07-13 08:20:57.138464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.708 [2024-07-13 08:20:57.138493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.708 qpair failed and we were unable to recover it. 00:34:05.708 [2024-07-13 08:20:57.138663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.708 [2024-07-13 08:20:57.138688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.708 qpair failed and we were unable to recover it. 00:34:05.708 [2024-07-13 08:20:57.138854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.708 [2024-07-13 08:20:57.138891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.708 qpair failed and we were unable to recover it. 00:34:05.708 [2024-07-13 08:20:57.139083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.708 [2024-07-13 08:20:57.139112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.708 qpair failed and we were unable to recover it. 00:34:05.708 [2024-07-13 08:20:57.139278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.708 [2024-07-13 08:20:57.139303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.708 qpair failed and we were unable to recover it. 00:34:05.708 [2024-07-13 08:20:57.139495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.708 [2024-07-13 08:20:57.139523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.708 qpair failed and we were unable to recover it. 00:34:05.708 [2024-07-13 08:20:57.139653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.708 [2024-07-13 08:20:57.139682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.708 qpair failed and we were unable to recover it. 
00:34:05.708 [2024-07-13 08:20:57.139885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.708 [2024-07-13 08:20:57.139912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.708 qpair failed and we were unable to recover it. 00:34:05.708 [2024-07-13 08:20:57.140053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.708 [2024-07-13 08:20:57.140082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.708 qpair failed and we were unable to recover it. 00:34:05.708 [2024-07-13 08:20:57.140235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.708 [2024-07-13 08:20:57.140262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.708 qpair failed and we were unable to recover it. 00:34:05.708 [2024-07-13 08:20:57.140416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.708 [2024-07-13 08:20:57.140441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.708 qpair failed and we were unable to recover it. 00:34:05.708 [2024-07-13 08:20:57.140589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.708 [2024-07-13 08:20:57.140638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.708 qpair failed and we were unable to recover it. 00:34:05.708 [2024-07-13 08:20:57.140830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.708 [2024-07-13 08:20:57.140855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.708 qpair failed and we were unable to recover it. 00:34:05.708 [2024-07-13 08:20:57.140983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.708 [2024-07-13 08:20:57.141008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.708 qpair failed and we were unable to recover it. 00:34:05.708 [2024-07-13 08:20:57.141183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.708 [2024-07-13 08:20:57.141212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.708 qpair failed and we were unable to recover it. 00:34:05.708 [2024-07-13 08:20:57.141339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.708 [2024-07-13 08:20:57.141367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.708 qpair failed and we were unable to recover it. 00:34:05.708 [2024-07-13 08:20:57.141565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.708 [2024-07-13 08:20:57.141590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.708 qpair failed and we were unable to recover it. 
00:34:05.708 [2024-07-13 08:20:57.141757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.708 [2024-07-13 08:20:57.141785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.708 qpair failed and we were unable to recover it. 00:34:05.708 [2024-07-13 08:20:57.141920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.708 [2024-07-13 08:20:57.141948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.708 qpair failed and we were unable to recover it. 00:34:05.708 [2024-07-13 08:20:57.142123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.708 [2024-07-13 08:20:57.142148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.708 qpair failed and we were unable to recover it. 00:34:05.708 [2024-07-13 08:20:57.142341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.708 [2024-07-13 08:20:57.142370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.708 qpair failed and we were unable to recover it. 00:34:05.708 [2024-07-13 08:20:57.142505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.708 [2024-07-13 08:20:57.142533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.708 qpair failed and we were unable to recover it. 00:34:05.708 [2024-07-13 08:20:57.142705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.708 [2024-07-13 08:20:57.142730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.708 qpair failed and we were unable to recover it. 00:34:05.708 [2024-07-13 08:20:57.142896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.708 [2024-07-13 08:20:57.142925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.708 qpair failed and we were unable to recover it. 00:34:05.708 [2024-07-13 08:20:57.143057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.708 [2024-07-13 08:20:57.143085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.708 qpair failed and we were unable to recover it. 00:34:05.708 [2024-07-13 08:20:57.143239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.708 [2024-07-13 08:20:57.143265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.708 qpair failed and we were unable to recover it. 00:34:05.708 [2024-07-13 08:20:57.143457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.708 [2024-07-13 08:20:57.143486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.708 qpair failed and we were unable to recover it. 
00:34:05.708 [2024-07-13 08:20:57.143647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.708 [2024-07-13 08:20:57.143676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.708 qpair failed and we were unable to recover it. 00:34:05.708 [2024-07-13 08:20:57.143844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.708 [2024-07-13 08:20:57.143885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.708 qpair failed and we were unable to recover it. 00:34:05.708 [2024-07-13 08:20:57.144063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.708 [2024-07-13 08:20:57.144091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.708 qpair failed and we were unable to recover it. 00:34:05.708 [2024-07-13 08:20:57.144236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.708 [2024-07-13 08:20:57.144264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.708 qpair failed and we were unable to recover it. 00:34:05.708 [2024-07-13 08:20:57.144392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.708 [2024-07-13 08:20:57.144417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.708 qpair failed and we were unable to recover it. 00:34:05.708 [2024-07-13 08:20:57.144535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.708 [2024-07-13 08:20:57.144560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.708 qpair failed and we were unable to recover it. 00:34:05.708 [2024-07-13 08:20:57.144764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.708 [2024-07-13 08:20:57.144792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.708 qpair failed and we were unable to recover it. 00:34:05.708 [2024-07-13 08:20:57.144941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.708 [2024-07-13 08:20:57.144968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.708 qpair failed and we were unable to recover it. 00:34:05.709 [2024-07-13 08:20:57.145093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.709 [2024-07-13 08:20:57.145135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.709 qpair failed and we were unable to recover it. 00:34:05.709 [2024-07-13 08:20:57.145333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.709 [2024-07-13 08:20:57.145358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.709 qpair failed and we were unable to recover it. 
00:34:05.709 [2024-07-13 08:20:57.145505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.709 [2024-07-13 08:20:57.145532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.709 qpair failed and we were unable to recover it. 00:34:05.709 [2024-07-13 08:20:57.145680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.709 [2024-07-13 08:20:57.145705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.709 qpair failed and we were unable to recover it. 00:34:05.709 [2024-07-13 08:20:57.145845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.709 [2024-07-13 08:20:57.145893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.709 qpair failed and we were unable to recover it. 00:34:05.709 [2024-07-13 08:20:57.146087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.709 [2024-07-13 08:20:57.146112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.709 qpair failed and we were unable to recover it. 00:34:05.709 [2024-07-13 08:20:57.146285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.709 [2024-07-13 08:20:57.146313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.709 qpair failed and we were unable to recover it. 00:34:05.709 [2024-07-13 08:20:57.146473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.709 [2024-07-13 08:20:57.146502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.709 qpair failed and we were unable to recover it. 00:34:05.709 [2024-07-13 08:20:57.146640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.709 [2024-07-13 08:20:57.146666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.709 qpair failed and we were unable to recover it. 00:34:05.709 [2024-07-13 08:20:57.146854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.709 [2024-07-13 08:20:57.146887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.709 qpair failed and we were unable to recover it. 00:34:05.709 [2024-07-13 08:20:57.147054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.709 [2024-07-13 08:20:57.147082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.709 qpair failed and we were unable to recover it. 00:34:05.709 [2024-07-13 08:20:57.147229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.709 [2024-07-13 08:20:57.147255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.709 qpair failed and we were unable to recover it. 
00:34:05.709 [2024-07-13 08:20:57.147372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.709 [2024-07-13 08:20:57.147397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.709 qpair failed and we were unable to recover it. 00:34:05.709 [2024-07-13 08:20:57.147565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.709 [2024-07-13 08:20:57.147594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.709 qpair failed and we were unable to recover it. 00:34:05.709 [2024-07-13 08:20:57.147762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.709 [2024-07-13 08:20:57.147787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.709 qpair failed and we were unable to recover it. 00:34:05.709 [2024-07-13 08:20:57.147979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.709 [2024-07-13 08:20:57.148008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.709 qpair failed and we were unable to recover it. 00:34:05.709 [2024-07-13 08:20:57.148139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.709 [2024-07-13 08:20:57.148171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.709 qpair failed and we were unable to recover it. 00:34:05.709 [2024-07-13 08:20:57.148345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.709 [2024-07-13 08:20:57.148370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.709 qpair failed and we were unable to recover it. 00:34:05.709 [2024-07-13 08:20:57.148561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.709 [2024-07-13 08:20:57.148589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.709 qpair failed and we were unable to recover it. 00:34:05.709 [2024-07-13 08:20:57.148781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.709 [2024-07-13 08:20:57.148810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.709 qpair failed and we were unable to recover it. 00:34:05.709 [2024-07-13 08:20:57.148981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.709 [2024-07-13 08:20:57.149007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.709 qpair failed and we were unable to recover it. 00:34:05.709 [2024-07-13 08:20:57.149198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.709 [2024-07-13 08:20:57.149226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.709 qpair failed and we were unable to recover it. 
00:34:05.709 [2024-07-13 08:20:57.149388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.709 [2024-07-13 08:20:57.149416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.709 qpair failed and we were unable to recover it. 00:34:05.709 [2024-07-13 08:20:57.149563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.709 [2024-07-13 08:20:57.149589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.709 qpair failed and we were unable to recover it. 00:34:05.709 [2024-07-13 08:20:57.149795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.709 [2024-07-13 08:20:57.149824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.709 qpair failed and we were unable to recover it. 00:34:05.709 [2024-07-13 08:20:57.149998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.709 [2024-07-13 08:20:57.150027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.709 qpair failed and we were unable to recover it. 00:34:05.709 [2024-07-13 08:20:57.150194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.709 [2024-07-13 08:20:57.150219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.709 qpair failed and we were unable to recover it. 00:34:05.709 [2024-07-13 08:20:57.150411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.709 [2024-07-13 08:20:57.150440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.709 qpair failed and we were unable to recover it. 00:34:05.709 [2024-07-13 08:20:57.150629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.709 [2024-07-13 08:20:57.150658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.709 qpair failed and we were unable to recover it. 00:34:05.709 [2024-07-13 08:20:57.150829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.709 [2024-07-13 08:20:57.150854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.709 qpair failed and we were unable to recover it. 00:34:05.709 [2024-07-13 08:20:57.150995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.709 [2024-07-13 08:20:57.151020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.709 qpair failed and we were unable to recover it. 00:34:05.709 [2024-07-13 08:20:57.151140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.709 [2024-07-13 08:20:57.151165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.709 qpair failed and we were unable to recover it. 
00:34:05.709 [2024-07-13 08:20:57.151281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.709 [2024-07-13 08:20:57.151307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.709 qpair failed and we were unable to recover it. 00:34:05.709 [2024-07-13 08:20:57.151459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.709 [2024-07-13 08:20:57.151502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.709 qpair failed and we were unable to recover it. 00:34:05.709 [2024-07-13 08:20:57.151678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.709 [2024-07-13 08:20:57.151703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.709 qpair failed and we were unable to recover it. 00:34:05.709 [2024-07-13 08:20:57.151851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.709 [2024-07-13 08:20:57.151889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.709 qpair failed and we were unable to recover it. 00:34:05.709 [2024-07-13 08:20:57.152088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.710 [2024-07-13 08:20:57.152117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.710 qpair failed and we were unable to recover it. 00:34:05.710 [2024-07-13 08:20:57.152255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.710 [2024-07-13 08:20:57.152284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.710 qpair failed and we were unable to recover it. 00:34:05.710 [2024-07-13 08:20:57.152453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.710 [2024-07-13 08:20:57.152478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.710 qpair failed and we were unable to recover it. 00:34:05.710 [2024-07-13 08:20:57.152651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.710 [2024-07-13 08:20:57.152694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.710 qpair failed and we were unable to recover it. 00:34:05.710 [2024-07-13 08:20:57.152872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.710 [2024-07-13 08:20:57.152899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.710 qpair failed and we were unable to recover it. 00:34:05.710 [2024-07-13 08:20:57.153074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.710 [2024-07-13 08:20:57.153099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.710 qpair failed and we were unable to recover it. 
00:34:05.710 [2024-07-13 08:20:57.153236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.710 [2024-07-13 08:20:57.153266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.710 qpair failed and we were unable to recover it. 00:34:05.710 [2024-07-13 08:20:57.153460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.710 [2024-07-13 08:20:57.153486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.710 qpair failed and we were unable to recover it. 00:34:05.710 [2024-07-13 08:20:57.153661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.710 [2024-07-13 08:20:57.153686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.710 qpair failed and we were unable to recover it. 00:34:05.710 [2024-07-13 08:20:57.153882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.710 [2024-07-13 08:20:57.153911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.710 qpair failed and we were unable to recover it. 00:34:05.710 [2024-07-13 08:20:57.154083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.710 [2024-07-13 08:20:57.154109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.710 qpair failed and we were unable to recover it. 00:34:05.710 [2024-07-13 08:20:57.154261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.710 [2024-07-13 08:20:57.154287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.710 qpair failed and we were unable to recover it. 00:34:05.710 [2024-07-13 08:20:57.154412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.710 [2024-07-13 08:20:57.154438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.710 qpair failed and we were unable to recover it. 00:34:05.710 [2024-07-13 08:20:57.154623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.710 [2024-07-13 08:20:57.154648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.710 qpair failed and we were unable to recover it. 00:34:05.710 [2024-07-13 08:20:57.154802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.710 [2024-07-13 08:20:57.154827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.710 qpair failed and we were unable to recover it. 00:34:05.710 [2024-07-13 08:20:57.154984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.710 [2024-07-13 08:20:57.155029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.710 qpair failed and we were unable to recover it. 
00:34:05.710 [2024-07-13 08:20:57.155190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.710 [2024-07-13 08:20:57.155219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.710 qpair failed and we were unable to recover it. 00:34:05.710 [2024-07-13 08:20:57.155363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.710 [2024-07-13 08:20:57.155389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.710 qpair failed and we were unable to recover it. 00:34:05.710 [2024-07-13 08:20:57.155567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.710 [2024-07-13 08:20:57.155609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.710 qpair failed and we were unable to recover it. 00:34:05.710 [2024-07-13 08:20:57.155771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.710 [2024-07-13 08:20:57.155799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.710 qpair failed and we were unable to recover it. 00:34:05.710 [2024-07-13 08:20:57.155968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.710 [2024-07-13 08:20:57.155998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.710 qpair failed and we were unable to recover it. 00:34:05.710 [2024-07-13 08:20:57.156119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.710 [2024-07-13 08:20:57.156146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.710 qpair failed and we were unable to recover it. 00:34:05.710 [2024-07-13 08:20:57.156293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.710 [2024-07-13 08:20:57.156318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.710 qpair failed and we were unable to recover it. 00:34:05.710 [2024-07-13 08:20:57.156511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.710 [2024-07-13 08:20:57.156536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.710 qpair failed and we were unable to recover it. 00:34:05.710 [2024-07-13 08:20:57.156707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.710 [2024-07-13 08:20:57.156735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.710 qpair failed and we were unable to recover it. 00:34:05.710 [2024-07-13 08:20:57.156899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.710 [2024-07-13 08:20:57.156928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.710 qpair failed and we were unable to recover it. 
00:34:05.710 [2024-07-13 08:20:57.157094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.710 [2024-07-13 08:20:57.157120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.710 qpair failed and we were unable to recover it. 00:34:05.710 [2024-07-13 08:20:57.157285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.710 [2024-07-13 08:20:57.157313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.710 qpair failed and we were unable to recover it. 00:34:05.710 [2024-07-13 08:20:57.157474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.710 [2024-07-13 08:20:57.157503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.710 qpair failed and we were unable to recover it. 00:34:05.710 [2024-07-13 08:20:57.157703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.710 [2024-07-13 08:20:57.157728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.710 qpair failed and we were unable to recover it. 00:34:05.710 [2024-07-13 08:20:57.157899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.710 [2024-07-13 08:20:57.157928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.710 qpair failed and we were unable to recover it. 00:34:05.710 [2024-07-13 08:20:57.158097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.710 [2024-07-13 08:20:57.158125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.710 qpair failed and we were unable to recover it. 00:34:05.710 [2024-07-13 08:20:57.158297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.710 [2024-07-13 08:20:57.158322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.710 qpair failed and we were unable to recover it. 00:34:05.710 [2024-07-13 08:20:57.158445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.710 [2024-07-13 08:20:57.158471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.710 qpair failed and we were unable to recover it. 00:34:05.710 [2024-07-13 08:20:57.158678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.710 [2024-07-13 08:20:57.158704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.710 qpair failed and we were unable to recover it. 00:34:05.710 [2024-07-13 08:20:57.158851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.710 [2024-07-13 08:20:57.158882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.710 qpair failed and we were unable to recover it. 
00:34:05.710 [2024-07-13 08:20:57.159057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.710 [2024-07-13 08:20:57.159085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.710 qpair failed and we were unable to recover it.
[... the same three-line failure sequence (posix.c:1038:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: sock connection error; "qpair failed and we were unable to recover it.") repeats for every reconnect attempt from 08:20:57.159 through 08:20:57.199, roughly 210 attempts in total. All attempts target addr=10.0.0.2, port=4420; nearly all are against tqpair=0x7f8fec000b90, with three attempts around 08:20:57.176 against tqpair=0x7f8fdc000b90 before reverting to 0x7f8fec000b90 ...]
00:34:05.716 [2024-07-13 08:20:57.199631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.716 [2024-07-13 08:20:57.199675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.716 qpair failed and we were unable to recover it. 00:34:05.716 [2024-07-13 08:20:57.199824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.716 [2024-07-13 08:20:57.199855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.716 qpair failed and we were unable to recover it. 00:34:05.716 [2024-07-13 08:20:57.200019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.716 [2024-07-13 08:20:57.200045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.716 qpair failed and we were unable to recover it. 00:34:05.716 [2024-07-13 08:20:57.200213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.716 [2024-07-13 08:20:57.200243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.716 qpair failed and we were unable to recover it. 00:34:05.716 [2024-07-13 08:20:57.200436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.716 [2024-07-13 08:20:57.200464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.716 qpair failed and we were unable to recover it. 00:34:05.716 [2024-07-13 08:20:57.200630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.716 [2024-07-13 08:20:57.200655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.716 qpair failed and we were unable to recover it. 00:34:05.716 [2024-07-13 08:20:57.200848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.716 [2024-07-13 08:20:57.200883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.716 qpair failed and we were unable to recover it. 00:34:05.716 [2024-07-13 08:20:57.201063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.716 [2024-07-13 08:20:57.201088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.716 qpair failed and we were unable to recover it. 00:34:05.716 [2024-07-13 08:20:57.201247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.716 [2024-07-13 08:20:57.201272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.716 qpair failed and we were unable to recover it. 00:34:05.716 [2024-07-13 08:20:57.201467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.716 [2024-07-13 08:20:57.201495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.716 qpair failed and we were unable to recover it. 
00:34:05.716 [2024-07-13 08:20:57.201655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.716 [2024-07-13 08:20:57.201683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.716 qpair failed and we were unable to recover it. 00:34:05.716 [2024-07-13 08:20:57.201848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.716 [2024-07-13 08:20:57.201878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.716 qpair failed and we were unable to recover it. 00:34:05.716 [2024-07-13 08:20:57.202003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.716 [2024-07-13 08:20:57.202029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.716 qpair failed and we were unable to recover it. 00:34:05.716 [2024-07-13 08:20:57.202178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.716 [2024-07-13 08:20:57.202203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.716 qpair failed and we were unable to recover it. 00:34:05.716 [2024-07-13 08:20:57.202416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.716 [2024-07-13 08:20:57.202441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.716 qpair failed and we were unable to recover it. 00:34:05.716 [2024-07-13 08:20:57.202625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.716 [2024-07-13 08:20:57.202653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.716 qpair failed and we were unable to recover it. 00:34:05.716 [2024-07-13 08:20:57.202819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.716 [2024-07-13 08:20:57.202847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.716 qpair failed and we were unable to recover it. 00:34:05.716 [2024-07-13 08:20:57.203055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.716 [2024-07-13 08:20:57.203080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.716 qpair failed and we were unable to recover it. 00:34:05.716 [2024-07-13 08:20:57.203245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.716 [2024-07-13 08:20:57.203272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.716 qpair failed and we were unable to recover it. 00:34:05.716 [2024-07-13 08:20:57.203435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.716 [2024-07-13 08:20:57.203464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.716 qpair failed and we were unable to recover it. 
00:34:05.716 [2024-07-13 08:20:57.203615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.716 [2024-07-13 08:20:57.203641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.716 qpair failed and we were unable to recover it. 00:34:05.716 [2024-07-13 08:20:57.203789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.716 [2024-07-13 08:20:57.203814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.716 qpair failed and we were unable to recover it. 00:34:05.716 [2024-07-13 08:20:57.204026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.716 [2024-07-13 08:20:57.204055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.716 qpair failed and we were unable to recover it. 00:34:05.716 [2024-07-13 08:20:57.204258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.716 [2024-07-13 08:20:57.204284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.716 qpair failed and we were unable to recover it. 00:34:05.716 [2024-07-13 08:20:57.204408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.716 [2024-07-13 08:20:57.204433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.716 qpair failed and we were unable to recover it. 00:34:05.716 [2024-07-13 08:20:57.204611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.716 [2024-07-13 08:20:57.204652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.716 qpair failed and we were unable to recover it. 00:34:05.716 [2024-07-13 08:20:57.204815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.716 [2024-07-13 08:20:57.204839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.716 qpair failed and we were unable to recover it. 00:34:05.716 [2024-07-13 08:20:57.205000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.716 [2024-07-13 08:20:57.205026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.716 qpair failed and we were unable to recover it. 00:34:05.716 [2024-07-13 08:20:57.205210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.716 [2024-07-13 08:20:57.205236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.716 qpair failed and we were unable to recover it. 00:34:05.716 [2024-07-13 08:20:57.205355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.716 [2024-07-13 08:20:57.205380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.716 qpair failed and we were unable to recover it. 
00:34:05.716 [2024-07-13 08:20:57.205550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.717 [2024-07-13 08:20:57.205592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.717 qpair failed and we were unable to recover it. 00:34:05.717 [2024-07-13 08:20:57.205757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.717 [2024-07-13 08:20:57.205787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.717 qpair failed and we were unable to recover it. 00:34:05.717 [2024-07-13 08:20:57.205939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.717 [2024-07-13 08:20:57.205966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.717 qpair failed and we were unable to recover it. 00:34:05.717 [2024-07-13 08:20:57.206095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.717 [2024-07-13 08:20:57.206120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.717 qpair failed and we were unable to recover it. 00:34:05.717 [2024-07-13 08:20:57.206326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.717 [2024-07-13 08:20:57.206354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.717 qpair failed and we were unable to recover it. 00:34:05.717 [2024-07-13 08:20:57.206499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.717 [2024-07-13 08:20:57.206524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.717 qpair failed and we were unable to recover it. 00:34:05.717 [2024-07-13 08:20:57.206718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.717 [2024-07-13 08:20:57.206746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.717 qpair failed and we were unable to recover it. 00:34:05.717 [2024-07-13 08:20:57.206878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.717 [2024-07-13 08:20:57.206907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.717 qpair failed and we were unable to recover it. 00:34:05.717 [2024-07-13 08:20:57.207051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.717 [2024-07-13 08:20:57.207077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.717 qpair failed and we were unable to recover it. 00:34:05.717 [2024-07-13 08:20:57.207270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.717 [2024-07-13 08:20:57.207299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.717 qpair failed and we were unable to recover it. 
00:34:05.717 [2024-07-13 08:20:57.207490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.717 [2024-07-13 08:20:57.207518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.717 qpair failed and we were unable to recover it. 00:34:05.717 [2024-07-13 08:20:57.207687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.717 [2024-07-13 08:20:57.207718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.717 qpair failed and we were unable to recover it. 00:34:05.717 [2024-07-13 08:20:57.207862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.717 [2024-07-13 08:20:57.207897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.717 qpair failed and we were unable to recover it. 00:34:05.717 [2024-07-13 08:20:57.208059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.717 [2024-07-13 08:20:57.208087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.717 qpair failed and we were unable to recover it. 00:34:05.717 [2024-07-13 08:20:57.208277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.717 [2024-07-13 08:20:57.208302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.717 qpair failed and we were unable to recover it. 00:34:05.717 [2024-07-13 08:20:57.208457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.717 [2024-07-13 08:20:57.208481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.717 qpair failed and we were unable to recover it. 00:34:05.717 [2024-07-13 08:20:57.208609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.717 [2024-07-13 08:20:57.208635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.717 qpair failed and we were unable to recover it. 00:34:05.717 [2024-07-13 08:20:57.208826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.717 [2024-07-13 08:20:57.208851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.717 qpair failed and we were unable to recover it. 00:34:05.717 [2024-07-13 08:20:57.209016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.717 [2024-07-13 08:20:57.209044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.717 qpair failed and we were unable to recover it. 00:34:05.717 [2024-07-13 08:20:57.209206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.717 [2024-07-13 08:20:57.209234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.717 qpair failed and we were unable to recover it. 
00:34:05.717 [2024-07-13 08:20:57.209399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.717 [2024-07-13 08:20:57.209424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.717 qpair failed and we were unable to recover it. 00:34:05.717 [2024-07-13 08:20:57.209597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.717 [2024-07-13 08:20:57.209625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.717 qpair failed and we were unable to recover it. 00:34:05.717 [2024-07-13 08:20:57.209785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.717 [2024-07-13 08:20:57.209813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.717 qpair failed and we were unable to recover it. 00:34:05.717 [2024-07-13 08:20:57.209985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.717 [2024-07-13 08:20:57.210011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.717 qpair failed and we were unable to recover it. 00:34:05.717 [2024-07-13 08:20:57.210178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.717 [2024-07-13 08:20:57.210206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.717 qpair failed and we were unable to recover it. 00:34:05.717 [2024-07-13 08:20:57.210401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.717 [2024-07-13 08:20:57.210430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.717 qpair failed and we were unable to recover it. 00:34:05.717 [2024-07-13 08:20:57.210602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.717 [2024-07-13 08:20:57.210627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.717 qpair failed and we were unable to recover it. 00:34:05.717 [2024-07-13 08:20:57.210795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.717 [2024-07-13 08:20:57.210823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.717 qpair failed and we were unable to recover it. 00:34:05.717 [2024-07-13 08:20:57.210958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.717 [2024-07-13 08:20:57.210987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.717 qpair failed and we were unable to recover it. 00:34:05.717 [2024-07-13 08:20:57.211152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.717 [2024-07-13 08:20:57.211177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.717 qpair failed and we were unable to recover it. 
00:34:05.717 [2024-07-13 08:20:57.211350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.717 [2024-07-13 08:20:57.211393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.717 qpair failed and we were unable to recover it. 00:34:05.717 [2024-07-13 08:20:57.211561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.717 [2024-07-13 08:20:57.211589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.717 qpair failed and we were unable to recover it. 00:34:05.717 [2024-07-13 08:20:57.211758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.717 [2024-07-13 08:20:57.211783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.717 qpair failed and we were unable to recover it. 00:34:05.717 [2024-07-13 08:20:57.211926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.717 [2024-07-13 08:20:57.211960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.717 qpair failed and we were unable to recover it. 00:34:05.717 [2024-07-13 08:20:57.212091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.717 [2024-07-13 08:20:57.212117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.717 qpair failed and we were unable to recover it. 00:34:05.717 [2024-07-13 08:20:57.212263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.717 [2024-07-13 08:20:57.212290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.717 qpair failed and we were unable to recover it. 00:34:05.717 [2024-07-13 08:20:57.212432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.717 [2024-07-13 08:20:57.212461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.717 qpair failed and we were unable to recover it. 00:34:05.717 [2024-07-13 08:20:57.212620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.717 [2024-07-13 08:20:57.212648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.717 qpair failed and we were unable to recover it. 00:34:05.717 [2024-07-13 08:20:57.212849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.717 [2024-07-13 08:20:57.212880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.717 qpair failed and we were unable to recover it. 00:34:05.717 [2024-07-13 08:20:57.213078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.717 [2024-07-13 08:20:57.213106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.717 qpair failed and we were unable to recover it. 
00:34:05.717 [2024-07-13 08:20:57.213280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.718 [2024-07-13 08:20:57.213305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.718 qpair failed and we were unable to recover it. 00:34:05.718 [2024-07-13 08:20:57.213453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.718 [2024-07-13 08:20:57.213478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.718 qpair failed and we were unable to recover it. 00:34:05.718 [2024-07-13 08:20:57.213624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.718 [2024-07-13 08:20:57.213652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.718 qpair failed and we were unable to recover it. 00:34:05.718 [2024-07-13 08:20:57.213814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.718 [2024-07-13 08:20:57.213842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.718 qpair failed and we were unable to recover it. 00:34:05.718 [2024-07-13 08:20:57.214020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.718 [2024-07-13 08:20:57.214046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.718 qpair failed and we were unable to recover it. 00:34:05.718 [2024-07-13 08:20:57.214213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.718 [2024-07-13 08:20:57.214241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.718 qpair failed and we were unable to recover it. 00:34:05.718 [2024-07-13 08:20:57.214381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.718 [2024-07-13 08:20:57.214406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.718 qpair failed and we were unable to recover it. 00:34:05.718 [2024-07-13 08:20:57.214558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.718 [2024-07-13 08:20:57.214583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.718 qpair failed and we were unable to recover it. 00:34:05.718 [2024-07-13 08:20:57.214732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.718 [2024-07-13 08:20:57.214776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.718 qpair failed and we were unable to recover it. 00:34:05.718 [2024-07-13 08:20:57.214967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.718 [2024-07-13 08:20:57.214997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.718 qpair failed and we were unable to recover it. 
00:34:05.718 [2024-07-13 08:20:57.215136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.718 [2024-07-13 08:20:57.215161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.718 qpair failed and we were unable to recover it. 00:34:05.718 [2024-07-13 08:20:57.215278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.718 [2024-07-13 08:20:57.215308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.718 qpair failed and we were unable to recover it. 00:34:05.718 [2024-07-13 08:20:57.215468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.718 [2024-07-13 08:20:57.215496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.718 qpair failed and we were unable to recover it. 00:34:05.718 [2024-07-13 08:20:57.215662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.718 [2024-07-13 08:20:57.215688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.718 qpair failed and we were unable to recover it. 00:34:05.718 [2024-07-13 08:20:57.215857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.718 [2024-07-13 08:20:57.215893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.718 qpair failed and we were unable to recover it. 00:34:05.718 [2024-07-13 08:20:57.216072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.718 [2024-07-13 08:20:57.216098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.718 qpair failed and we were unable to recover it. 00:34:05.718 [2024-07-13 08:20:57.216275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.718 [2024-07-13 08:20:57.216301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.718 qpair failed and we were unable to recover it. 00:34:05.718 [2024-07-13 08:20:57.216509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.718 [2024-07-13 08:20:57.216537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.718 qpair failed and we were unable to recover it. 00:34:05.718 [2024-07-13 08:20:57.216677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.718 [2024-07-13 08:20:57.216705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.718 qpair failed and we were unable to recover it. 00:34:05.718 [2024-07-13 08:20:57.216846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.718 [2024-07-13 08:20:57.216877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.718 qpair failed and we were unable to recover it. 
00:34:05.718 [2024-07-13 08:20:57.217033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.718 [2024-07-13 08:20:57.217059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.718 qpair failed and we were unable to recover it. 00:34:05.718 [2024-07-13 08:20:57.217256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.718 [2024-07-13 08:20:57.217285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.718 qpair failed and we were unable to recover it. 00:34:05.718 [2024-07-13 08:20:57.217457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.718 [2024-07-13 08:20:57.217482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.718 qpair failed and we were unable to recover it. 00:34:05.718 [2024-07-13 08:20:57.217644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.718 [2024-07-13 08:20:57.217673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.718 qpair failed and we were unable to recover it. 00:34:05.718 [2024-07-13 08:20:57.217871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.718 [2024-07-13 08:20:57.217899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.718 qpair failed and we were unable to recover it. 00:34:05.718 [2024-07-13 08:20:57.218098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.718 [2024-07-13 08:20:57.218124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.718 qpair failed and we were unable to recover it. 00:34:05.718 [2024-07-13 08:20:57.218324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.718 [2024-07-13 08:20:57.218352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.718 qpair failed and we were unable to recover it. 00:34:05.718 [2024-07-13 08:20:57.218510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.718 [2024-07-13 08:20:57.218538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.718 qpair failed and we were unable to recover it. 00:34:05.718 [2024-07-13 08:20:57.218739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.718 [2024-07-13 08:20:57.218764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.718 qpair failed and we were unable to recover it. 00:34:05.718 [2024-07-13 08:20:57.218900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.718 [2024-07-13 08:20:57.218929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.718 qpair failed and we were unable to recover it. 
00:34:05.718 [2024-07-13 08:20:57.219090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.718 [2024-07-13 08:20:57.219118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.718 qpair failed and we were unable to recover it. 00:34:05.718 [2024-07-13 08:20:57.219308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.718 [2024-07-13 08:20:57.219334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.718 qpair failed and we were unable to recover it. 00:34:05.718 [2024-07-13 08:20:57.219499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.718 [2024-07-13 08:20:57.219527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.718 qpair failed and we were unable to recover it. 00:34:05.718 [2024-07-13 08:20:57.219659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.718 [2024-07-13 08:20:57.219687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.718 qpair failed and we were unable to recover it. 00:34:05.718 [2024-07-13 08:20:57.219833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.718 [2024-07-13 08:20:57.219859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.718 qpair failed and we were unable to recover it. 00:34:05.718 [2024-07-13 08:20:57.220014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.718 [2024-07-13 08:20:57.220057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.719 qpair failed and we were unable to recover it. 00:34:05.719 [2024-07-13 08:20:57.220204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.719 [2024-07-13 08:20:57.220231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.719 qpair failed and we were unable to recover it. 00:34:05.719 [2024-07-13 08:20:57.220385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.719 [2024-07-13 08:20:57.220410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.719 qpair failed and we were unable to recover it. 00:34:05.719 [2024-07-13 08:20:57.220585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.719 [2024-07-13 08:20:57.220611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.719 qpair failed and we were unable to recover it. 00:34:05.719 [2024-07-13 08:20:57.220756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.719 [2024-07-13 08:20:57.220784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.719 qpair failed and we were unable to recover it. 
00:34:05.719 [2024-07-13 08:20:57.220968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.719 [2024-07-13 08:20:57.220994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.719 qpair failed and we were unable to recover it. 00:34:05.719 [2024-07-13 08:20:57.221105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.719 [2024-07-13 08:20:57.221149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.719 qpair failed and we were unable to recover it. 00:34:05.719 [2024-07-13 08:20:57.221314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.719 [2024-07-13 08:20:57.221342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.719 qpair failed and we were unable to recover it. 00:34:05.719 [2024-07-13 08:20:57.221504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.719 [2024-07-13 08:20:57.221529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.719 qpair failed and we were unable to recover it. 00:34:05.719 [2024-07-13 08:20:57.221657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.719 [2024-07-13 08:20:57.221683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.719 qpair failed and we were unable to recover it. 00:34:05.719 [2024-07-13 08:20:57.221806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.719 [2024-07-13 08:20:57.221832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.719 qpair failed and we were unable to recover it. 00:34:05.719 [2024-07-13 08:20:57.221993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.719 [2024-07-13 08:20:57.222018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.719 qpair failed and we were unable to recover it. 00:34:05.719 [2024-07-13 08:20:57.222181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.719 [2024-07-13 08:20:57.222209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.719 qpair failed and we were unable to recover it. 00:34:05.719 [2024-07-13 08:20:57.222342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.719 [2024-07-13 08:20:57.222370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.719 qpair failed and we were unable to recover it. 00:34:05.719 [2024-07-13 08:20:57.222543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.719 [2024-07-13 08:20:57.222569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.719 qpair failed and we were unable to recover it. 
00:34:05.719 [2024-07-13 08:20:57.222724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.719 [2024-07-13 08:20:57.222749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.719 qpair failed and we were unable to recover it. 00:34:05.719 [2024-07-13 08:20:57.222901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.719 [2024-07-13 08:20:57.222931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.719 qpair failed and we were unable to recover it. 00:34:05.719 [2024-07-13 08:20:57.223053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.719 [2024-07-13 08:20:57.223079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.719 qpair failed and we were unable to recover it. 00:34:05.719 [2024-07-13 08:20:57.223243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.719 [2024-07-13 08:20:57.223272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.719 qpair failed and we were unable to recover it. 00:34:05.719 [2024-07-13 08:20:57.223464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.719 [2024-07-13 08:20:57.223492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.719 qpair failed and we were unable to recover it. 00:34:05.719 [2024-07-13 08:20:57.223661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.719 [2024-07-13 08:20:57.223687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.719 qpair failed and we were unable to recover it. 00:34:05.719 [2024-07-13 08:20:57.223840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.719 [2024-07-13 08:20:57.223870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.719 qpair failed and we were unable to recover it. 00:34:05.719 [2024-07-13 08:20:57.223999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.719 [2024-07-13 08:20:57.224039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.719 qpair failed and we were unable to recover it. 00:34:05.719 [2024-07-13 08:20:57.224210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.719 [2024-07-13 08:20:57.224235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.719 qpair failed and we were unable to recover it. 00:34:05.719 [2024-07-13 08:20:57.224358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.719 [2024-07-13 08:20:57.224385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.719 qpair failed and we were unable to recover it. 
00:34:05.719 [2024-07-13 08:20:57.224517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.719 [2024-07-13 08:20:57.224543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.719 qpair failed and we were unable to recover it. 00:34:05.719 [2024-07-13 08:20:57.224695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.719 [2024-07-13 08:20:57.224722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.719 qpair failed and we were unable to recover it. 00:34:05.719 [2024-07-13 08:20:57.224915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.719 [2024-07-13 08:20:57.224943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.719 qpair failed and we were unable to recover it. 00:34:05.719 [2024-07-13 08:20:57.225103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.719 [2024-07-13 08:20:57.225132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.719 qpair failed and we were unable to recover it. 00:34:05.719 [2024-07-13 08:20:57.225303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.719 [2024-07-13 08:20:57.225329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.719 qpair failed and we were unable to recover it. 00:34:05.719 [2024-07-13 08:20:57.225493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.719 [2024-07-13 08:20:57.225522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.719 qpair failed and we were unable to recover it. 00:34:05.719 [2024-07-13 08:20:57.225707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.719 [2024-07-13 08:20:57.225733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.719 qpair failed and we were unable to recover it. 00:34:05.719 [2024-07-13 08:20:57.225906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.719 [2024-07-13 08:20:57.225932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.719 qpair failed and we were unable to recover it. 00:34:05.719 [2024-07-13 08:20:57.226068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.719 [2024-07-13 08:20:57.226096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.719 qpair failed and we were unable to recover it. 00:34:05.719 [2024-07-13 08:20:57.226265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.719 [2024-07-13 08:20:57.226290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.719 qpair failed and we were unable to recover it. 
00:34:05.724 [2024-07-13 08:20:57.264762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.724 [2024-07-13 08:20:57.264788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.724 qpair failed and we were unable to recover it. 00:34:05.724 [2024-07-13 08:20:57.264961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.724 [2024-07-13 08:20:57.264987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.724 qpair failed and we were unable to recover it. 00:34:05.724 [2024-07-13 08:20:57.265150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.724 [2024-07-13 08:20:57.265178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.724 qpair failed and we were unable to recover it. 00:34:05.724 [2024-07-13 08:20:57.265347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.724 [2024-07-13 08:20:57.265372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.724 qpair failed and we were unable to recover it. 00:34:05.724 [2024-07-13 08:20:57.265517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.724 [2024-07-13 08:20:57.265544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.724 qpair failed and we were unable to recover it. 00:34:05.725 [2024-07-13 08:20:57.265663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.725 [2024-07-13 08:20:57.265705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.725 qpair failed and we were unable to recover it. 00:34:05.725 [2024-07-13 08:20:57.265883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.725 [2024-07-13 08:20:57.265912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.725 qpair failed and we were unable to recover it. 00:34:05.725 [2024-07-13 08:20:57.266086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.725 [2024-07-13 08:20:57.266111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.725 qpair failed and we were unable to recover it. 00:34:05.725 [2024-07-13 08:20:57.266275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.725 [2024-07-13 08:20:57.266303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.725 qpair failed and we were unable to recover it. 00:34:05.725 [2024-07-13 08:20:57.266470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.725 [2024-07-13 08:20:57.266499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.725 qpair failed and we were unable to recover it. 
00:34:05.725 [2024-07-13 08:20:57.266692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.725 [2024-07-13 08:20:57.266717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.725 qpair failed and we were unable to recover it. 00:34:05.725 [2024-07-13 08:20:57.266887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.725 [2024-07-13 08:20:57.266916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.725 qpair failed and we were unable to recover it. 00:34:05.725 [2024-07-13 08:20:57.267092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.725 [2024-07-13 08:20:57.267118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.725 qpair failed and we were unable to recover it. 00:34:05.725 [2024-07-13 08:20:57.267270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.725 [2024-07-13 08:20:57.267296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.725 qpair failed and we were unable to recover it. 00:34:05.725 [2024-07-13 08:20:57.267462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.725 [2024-07-13 08:20:57.267490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.725 qpair failed and we were unable to recover it. 00:34:05.725 [2024-07-13 08:20:57.267682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.725 [2024-07-13 08:20:57.267707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.725 qpair failed and we were unable to recover it. 00:34:05.725 [2024-07-13 08:20:57.267882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.725 [2024-07-13 08:20:57.267908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.725 qpair failed and we were unable to recover it. 00:34:05.725 [2024-07-13 08:20:57.268053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.725 [2024-07-13 08:20:57.268078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.725 qpair failed and we were unable to recover it. 00:34:05.725 [2024-07-13 08:20:57.268271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.725 [2024-07-13 08:20:57.268300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.725 qpair failed and we were unable to recover it. 00:34:05.725 [2024-07-13 08:20:57.268440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.725 [2024-07-13 08:20:57.268466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.725 qpair failed and we were unable to recover it. 
00:34:05.725 [2024-07-13 08:20:57.268583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.725 [2024-07-13 08:20:57.268608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.725 qpair failed and we were unable to recover it. 00:34:05.725 [2024-07-13 08:20:57.268780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.725 [2024-07-13 08:20:57.268813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.725 qpair failed and we were unable to recover it. 00:34:05.725 [2024-07-13 08:20:57.268982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.725 [2024-07-13 08:20:57.269008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.725 qpair failed and we were unable to recover it. 00:34:05.725 [2024-07-13 08:20:57.269127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.725 [2024-07-13 08:20:57.269153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.725 qpair failed and we were unable to recover it. 00:34:05.725 [2024-07-13 08:20:57.269341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.725 [2024-07-13 08:20:57.269367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.725 qpair failed and we were unable to recover it. 00:34:05.725 [2024-07-13 08:20:57.269545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.725 [2024-07-13 08:20:57.269571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.725 qpair failed and we were unable to recover it. 00:34:05.725 [2024-07-13 08:20:57.269767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.725 [2024-07-13 08:20:57.269795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.725 qpair failed and we were unable to recover it. 00:34:05.725 [2024-07-13 08:20:57.269957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.725 [2024-07-13 08:20:57.269987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.725 qpair failed and we were unable to recover it. 00:34:05.725 [2024-07-13 08:20:57.270157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.725 [2024-07-13 08:20:57.270183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.725 qpair failed and we were unable to recover it. 00:34:05.725 [2024-07-13 08:20:57.270350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.725 [2024-07-13 08:20:57.270379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.725 qpair failed and we were unable to recover it. 
00:34:05.725 [2024-07-13 08:20:57.270552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.725 [2024-07-13 08:20:57.270579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.725 qpair failed and we were unable to recover it. 00:34:05.725 [2024-07-13 08:20:57.270756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.725 [2024-07-13 08:20:57.270781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.725 qpair failed and we were unable to recover it. 00:34:05.725 [2024-07-13 08:20:57.270925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.725 [2024-07-13 08:20:57.270954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.725 qpair failed and we were unable to recover it. 00:34:05.725 [2024-07-13 08:20:57.271148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.725 [2024-07-13 08:20:57.271177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.725 qpair failed and we were unable to recover it. 00:34:05.725 [2024-07-13 08:20:57.271368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.725 [2024-07-13 08:20:57.271394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.725 qpair failed and we were unable to recover it. 00:34:05.725 [2024-07-13 08:20:57.271528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.725 [2024-07-13 08:20:57.271557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.725 qpair failed and we were unable to recover it. 00:34:05.725 [2024-07-13 08:20:57.271718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.725 [2024-07-13 08:20:57.271747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.725 qpair failed and we were unable to recover it. 00:34:05.725 [2024-07-13 08:20:57.271889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.725 [2024-07-13 08:20:57.271916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.725 qpair failed and we were unable to recover it. 00:34:05.725 [2024-07-13 08:20:57.272067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.725 [2024-07-13 08:20:57.272093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.725 qpair failed and we were unable to recover it. 00:34:05.725 [2024-07-13 08:20:57.272304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.725 [2024-07-13 08:20:57.272333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.725 qpair failed and we were unable to recover it. 
00:34:05.725 [2024-07-13 08:20:57.272495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.725 [2024-07-13 08:20:57.272521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.725 qpair failed and we were unable to recover it. 00:34:05.725 [2024-07-13 08:20:57.272687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.725 [2024-07-13 08:20:57.272716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.725 qpair failed and we were unable to recover it. 00:34:05.725 [2024-07-13 08:20:57.272879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.725 [2024-07-13 08:20:57.272908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.725 qpair failed and we were unable to recover it. 00:34:05.725 [2024-07-13 08:20:57.273078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.725 [2024-07-13 08:20:57.273103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.725 qpair failed and we were unable to recover it. 00:34:05.725 [2024-07-13 08:20:57.273262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.725 [2024-07-13 08:20:57.273291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.725 qpair failed and we were unable to recover it. 00:34:05.725 [2024-07-13 08:20:57.273441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.725 [2024-07-13 08:20:57.273467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.726 qpair failed and we were unable to recover it. 00:34:05.726 [2024-07-13 08:20:57.273587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.726 [2024-07-13 08:20:57.273613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.726 qpair failed and we were unable to recover it. 00:34:05.726 [2024-07-13 08:20:57.273725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.726 [2024-07-13 08:20:57.273751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.726 qpair failed and we were unable to recover it. 00:34:05.726 [2024-07-13 08:20:57.273913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.726 [2024-07-13 08:20:57.273966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:05.726 qpair failed and we were unable to recover it. 00:34:05.726 [2024-07-13 08:20:57.274200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.726 [2024-07-13 08:20:57.274231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:05.726 qpair failed and we were unable to recover it. 
00:34:05.726 [2024-07-13 08:20:57.274426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.726 [2024-07-13 08:20:57.274460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:05.726 qpair failed and we were unable to recover it. 00:34:05.726 [2024-07-13 08:20:57.274667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.726 [2024-07-13 08:20:57.274696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.726 qpair failed and we were unable to recover it. 00:34:05.726 [2024-07-13 08:20:57.274893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.726 [2024-07-13 08:20:57.274919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.726 qpair failed and we were unable to recover it. 00:34:05.726 [2024-07-13 08:20:57.275122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.726 [2024-07-13 08:20:57.275150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.726 qpair failed and we were unable to recover it. 00:34:05.726 [2024-07-13 08:20:57.275406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.726 [2024-07-13 08:20:57.275455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.726 qpair failed and we were unable to recover it. 00:34:05.726 [2024-07-13 08:20:57.275648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.726 [2024-07-13 08:20:57.275674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.726 qpair failed and we were unable to recover it. 00:34:05.726 [2024-07-13 08:20:57.275844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.726 [2024-07-13 08:20:57.275879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.726 qpair failed and we were unable to recover it. 00:34:05.726 [2024-07-13 08:20:57.276070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.726 [2024-07-13 08:20:57.276098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.726 qpair failed and we were unable to recover it. 00:34:05.726 [2024-07-13 08:20:57.276241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.726 [2024-07-13 08:20:57.276267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.726 qpair failed and we were unable to recover it. 00:34:05.726 [2024-07-13 08:20:57.276419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.726 [2024-07-13 08:20:57.276445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.726 qpair failed and we were unable to recover it. 
00:34:05.726 [2024-07-13 08:20:57.276665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.726 [2024-07-13 08:20:57.276720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.726 qpair failed and we were unable to recover it. 00:34:05.726 [2024-07-13 08:20:57.276923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.726 [2024-07-13 08:20:57.276954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.726 qpair failed and we were unable to recover it. 00:34:05.726 [2024-07-13 08:20:57.277126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.726 [2024-07-13 08:20:57.277154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.726 qpair failed and we were unable to recover it. 00:34:05.726 [2024-07-13 08:20:57.277306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.726 [2024-07-13 08:20:57.277332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.726 qpair failed and we were unable to recover it. 00:34:05.726 [2024-07-13 08:20:57.277485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.726 [2024-07-13 08:20:57.277511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.726 qpair failed and we were unable to recover it. 00:34:05.726 [2024-07-13 08:20:57.277710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.726 [2024-07-13 08:20:57.277738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.726 qpair failed and we were unable to recover it. 00:34:05.726 [2024-07-13 08:20:57.277892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.726 [2024-07-13 08:20:57.277922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.726 qpair failed and we were unable to recover it. 00:34:05.726 [2024-07-13 08:20:57.278121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.726 [2024-07-13 08:20:57.278147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.726 qpair failed and we were unable to recover it. 00:34:05.726 [2024-07-13 08:20:57.278316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.726 [2024-07-13 08:20:57.278345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.726 qpair failed and we were unable to recover it. 00:34:05.726 [2024-07-13 08:20:57.278564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.726 [2024-07-13 08:20:57.278614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.726 qpair failed and we were unable to recover it. 
00:34:05.726 [2024-07-13 08:20:57.278757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.726 [2024-07-13 08:20:57.278783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.726 qpair failed and we were unable to recover it. 00:34:05.726 [2024-07-13 08:20:57.278944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.726 [2024-07-13 08:20:57.278988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.726 qpair failed and we were unable to recover it. 00:34:05.726 [2024-07-13 08:20:57.279144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.726 [2024-07-13 08:20:57.279173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.726 qpair failed and we were unable to recover it. 00:34:05.726 [2024-07-13 08:20:57.279348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.726 [2024-07-13 08:20:57.279374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.726 qpair failed and we were unable to recover it. 00:34:05.726 [2024-07-13 08:20:57.279523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.726 [2024-07-13 08:20:57.279564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.726 qpair failed and we were unable to recover it. 00:34:05.726 [2024-07-13 08:20:57.279703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.726 [2024-07-13 08:20:57.279746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.726 qpair failed and we were unable to recover it. 00:34:05.726 [2024-07-13 08:20:57.279888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.726 [2024-07-13 08:20:57.279914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.726 qpair failed and we were unable to recover it. 00:34:05.726 [2024-07-13 08:20:57.280117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.726 [2024-07-13 08:20:57.280145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.726 qpair failed and we were unable to recover it. 00:34:05.726 [2024-07-13 08:20:57.280344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.726 [2024-07-13 08:20:57.280369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.726 qpair failed and we were unable to recover it. 00:34:05.726 [2024-07-13 08:20:57.280514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.726 [2024-07-13 08:20:57.280539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.726 qpair failed and we were unable to recover it. 
00:34:05.726 [2024-07-13 08:20:57.280708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.726 [2024-07-13 08:20:57.280737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.726 qpair failed and we were unable to recover it. 00:34:05.726 [2024-07-13 08:20:57.280875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.726 [2024-07-13 08:20:57.280905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.726 qpair failed and we were unable to recover it. 00:34:05.726 [2024-07-13 08:20:57.281073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.726 [2024-07-13 08:20:57.281099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.726 qpair failed and we were unable to recover it. 00:34:05.726 [2024-07-13 08:20:57.281300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.726 [2024-07-13 08:20:57.281329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.726 qpair failed and we were unable to recover it. 00:34:05.726 [2024-07-13 08:20:57.281556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.726 [2024-07-13 08:20:57.281607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.726 qpair failed and we were unable to recover it. 00:34:05.726 [2024-07-13 08:20:57.281809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.726 [2024-07-13 08:20:57.281834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.726 qpair failed and we were unable to recover it. 00:34:05.726 [2024-07-13 08:20:57.281988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.727 [2024-07-13 08:20:57.282017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.727 qpair failed and we were unable to recover it. 00:34:05.727 [2024-07-13 08:20:57.282226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.727 [2024-07-13 08:20:57.282251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.727 qpair failed and we were unable to recover it. 00:34:05.727 [2024-07-13 08:20:57.282380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.727 [2024-07-13 08:20:57.282406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.727 qpair failed and we were unable to recover it. 00:34:05.727 [2024-07-13 08:20:57.282526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.727 [2024-07-13 08:20:57.282552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.727 qpair failed and we were unable to recover it. 
00:34:05.727 [2024-07-13 08:20:57.282715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.727 [2024-07-13 08:20:57.282743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.727 qpair failed and we were unable to recover it. 00:34:05.727 [2024-07-13 08:20:57.282889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.727 [2024-07-13 08:20:57.282916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.727 qpair failed and we were unable to recover it. 00:34:05.727 [2024-07-13 08:20:57.283046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.727 [2024-07-13 08:20:57.283087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.727 qpair failed and we were unable to recover it. 00:34:05.727 [2024-07-13 08:20:57.283212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.727 [2024-07-13 08:20:57.283240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.727 qpair failed and we were unable to recover it. 00:34:05.727 [2024-07-13 08:20:57.283408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.727 [2024-07-13 08:20:57.283433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.727 qpair failed and we were unable to recover it. 00:34:05.727 [2024-07-13 08:20:57.283624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.727 [2024-07-13 08:20:57.283652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.727 qpair failed and we were unable to recover it. 00:34:05.727 [2024-07-13 08:20:57.283814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.727 [2024-07-13 08:20:57.283843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.727 qpair failed and we were unable to recover it. 00:34:05.727 [2024-07-13 08:20:57.284022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.727 [2024-07-13 08:20:57.284048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.727 qpair failed and we were unable to recover it. 00:34:05.727 [2024-07-13 08:20:57.284242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.727 [2024-07-13 08:20:57.284270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.727 qpair failed and we were unable to recover it. 00:34:05.727 [2024-07-13 08:20:57.284511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.727 [2024-07-13 08:20:57.284563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.727 qpair failed and we were unable to recover it. 
00:34:05.727 [2024-07-13 08:20:57.284731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.727 [2024-07-13 08:20:57.284756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.727 qpair failed and we were unable to recover it. 00:34:05.727 [2024-07-13 08:20:57.284907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.727 [2024-07-13 08:20:57.284954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.727 qpair failed and we were unable to recover it. 00:34:05.727 [2024-07-13 08:20:57.285119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.727 [2024-07-13 08:20:57.285148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.727 qpair failed and we were unable to recover it. 00:34:05.727 [2024-07-13 08:20:57.285341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.727 [2024-07-13 08:20:57.285366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.727 qpair failed and we were unable to recover it. 00:34:05.727 [2024-07-13 08:20:57.285536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.727 [2024-07-13 08:20:57.285565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.727 qpair failed and we were unable to recover it. 00:34:05.727 [2024-07-13 08:20:57.285737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.727 [2024-07-13 08:20:57.285762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.727 qpair failed and we were unable to recover it. 00:34:05.727 [2024-07-13 08:20:57.285914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.727 [2024-07-13 08:20:57.285939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.727 qpair failed and we were unable to recover it. 00:34:05.727 [2024-07-13 08:20:57.286108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.727 [2024-07-13 08:20:57.286136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.727 qpair failed and we were unable to recover it. 00:34:05.727 [2024-07-13 08:20:57.286300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.727 [2024-07-13 08:20:57.286329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.727 qpair failed and we were unable to recover it. 00:34:05.727 [2024-07-13 08:20:57.286498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.727 [2024-07-13 08:20:57.286523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.727 qpair failed and we were unable to recover it. 
00:34:05.727 [2024-07-13 08:20:57.286691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.727 [2024-07-13 08:20:57.286719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.727 qpair failed and we were unable to recover it. 00:34:05.727 [2024-07-13 08:20:57.286846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.727 [2024-07-13 08:20:57.286883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.727 qpair failed and we were unable to recover it. 00:34:05.727 [2024-07-13 08:20:57.287063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.727 [2024-07-13 08:20:57.287088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.727 qpair failed and we were unable to recover it. 00:34:05.727 [2024-07-13 08:20:57.287284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.727 [2024-07-13 08:20:57.287313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.727 qpair failed and we were unable to recover it. 00:34:05.727 [2024-07-13 08:20:57.287629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.727 [2024-07-13 08:20:57.287683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.727 qpair failed and we were unable to recover it. 00:34:05.727 [2024-07-13 08:20:57.287840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.727 [2024-07-13 08:20:57.287872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.727 qpair failed and we were unable to recover it. 00:34:05.727 [2024-07-13 08:20:57.287998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.727 [2024-07-13 08:20:57.288024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.727 qpair failed and we were unable to recover it. 00:34:05.727 [2024-07-13 08:20:57.288214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.727 [2024-07-13 08:20:57.288242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.727 qpair failed and we were unable to recover it. 00:34:05.727 [2024-07-13 08:20:57.288415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.727 [2024-07-13 08:20:57.288440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.727 qpair failed and we were unable to recover it. 00:34:05.727 [2024-07-13 08:20:57.288635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.727 [2024-07-13 08:20:57.288664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.727 qpair failed and we were unable to recover it. 
00:34:05.727 [2024-07-13 08:20:57.288825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.727 [2024-07-13 08:20:57.288853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.727 qpair failed and we were unable to recover it. 00:34:05.727 [2024-07-13 08:20:57.289046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.727 [2024-07-13 08:20:57.289072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.727 qpair failed and we were unable to recover it. 00:34:05.728 [2024-07-13 08:20:57.289236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.728 [2024-07-13 08:20:57.289264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.728 qpair failed and we were unable to recover it. 00:34:05.728 [2024-07-13 08:20:57.289423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.728 [2024-07-13 08:20:57.289451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.728 qpair failed and we were unable to recover it. 00:34:05.728 [2024-07-13 08:20:57.289615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.728 [2024-07-13 08:20:57.289641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.728 qpair failed and we were unable to recover it. 00:34:05.728 [2024-07-13 08:20:57.289805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.728 [2024-07-13 08:20:57.289834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.728 qpair failed and we were unable to recover it. 00:34:05.728 [2024-07-13 08:20:57.290007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.728 [2024-07-13 08:20:57.290036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.728 qpair failed and we were unable to recover it. 00:34:05.728 [2024-07-13 08:20:57.290206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.728 [2024-07-13 08:20:57.290231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.728 qpair failed and we were unable to recover it. 00:34:05.728 [2024-07-13 08:20:57.290357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.728 [2024-07-13 08:20:57.290383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.728 qpair failed and we were unable to recover it. 00:34:05.728 [2024-07-13 08:20:57.290500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.728 [2024-07-13 08:20:57.290525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.728 qpair failed and we were unable to recover it. 
00:34:05.728 [2024-07-13 08:20:57.290651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:05.728 [2024-07-13 08:20:57.290676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420
00:34:05.728 qpair failed and we were unable to recover it.
00:34:05.728 [... the identical triplet — posix.c:1038:posix_sock_create connect() failed with errno = 111, nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it." — repeats continuously with timestamps from 08:20:57.290651 through 08:20:57.330411; only the timestamps differ ...]
00:34:05.733 [2024-07-13 08:20:57.330368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:05.733 [2024-07-13 08:20:57.330411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420
00:34:05.733 qpair failed and we were unable to recover it.
00:34:05.733 [2024-07-13 08:20:57.330603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.733 [2024-07-13 08:20:57.330628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.733 qpair failed and we were unable to recover it. 00:34:05.733 [2024-07-13 08:20:57.330743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.733 [2024-07-13 08:20:57.330786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.733 qpair failed and we were unable to recover it. 00:34:05.733 [2024-07-13 08:20:57.330980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.733 [2024-07-13 08:20:57.331009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.733 qpair failed and we were unable to recover it. 00:34:05.733 [2024-07-13 08:20:57.331182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.733 [2024-07-13 08:20:57.331208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.733 qpair failed and we were unable to recover it. 00:34:05.733 [2024-07-13 08:20:57.331357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.733 [2024-07-13 08:20:57.331382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.733 qpair failed and we were unable to recover it. 00:34:05.733 [2024-07-13 08:20:57.331587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.733 [2024-07-13 08:20:57.331614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.733 qpair failed and we were unable to recover it. 00:34:05.733 [2024-07-13 08:20:57.331809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.733 [2024-07-13 08:20:57.331834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.733 qpair failed and we were unable to recover it. 00:34:05.733 [2024-07-13 08:20:57.331984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.733 [2024-07-13 08:20:57.332011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.733 qpair failed and we were unable to recover it. 00:34:05.733 [2024-07-13 08:20:57.332190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.733 [2024-07-13 08:20:57.332218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.733 qpair failed and we were unable to recover it. 00:34:05.733 [2024-07-13 08:20:57.332354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.733 [2024-07-13 08:20:57.332379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.733 qpair failed and we were unable to recover it. 
00:34:05.733 [2024-07-13 08:20:57.332526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.733 [2024-07-13 08:20:57.332553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.733 qpair failed and we were unable to recover it. 00:34:05.733 [2024-07-13 08:20:57.332699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.733 [2024-07-13 08:20:57.332741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.733 qpair failed and we were unable to recover it. 00:34:05.733 [2024-07-13 08:20:57.332914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.733 [2024-07-13 08:20:57.332940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.733 qpair failed and we were unable to recover it. 00:34:05.733 [2024-07-13 08:20:57.333105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.733 [2024-07-13 08:20:57.333134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.733 qpair failed and we were unable to recover it. 00:34:05.733 [2024-07-13 08:20:57.333307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.733 [2024-07-13 08:20:57.333335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.733 qpair failed and we were unable to recover it. 00:34:05.733 [2024-07-13 08:20:57.333511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.733 [2024-07-13 08:20:57.333536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.733 qpair failed and we were unable to recover it. 00:34:05.733 [2024-07-13 08:20:57.333656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.733 [2024-07-13 08:20:57.333698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.733 qpair failed and we were unable to recover it. 00:34:05.733 [2024-07-13 08:20:57.333858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.733 [2024-07-13 08:20:57.333891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.733 qpair failed and we were unable to recover it. 00:34:05.733 [2024-07-13 08:20:57.334091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.733 [2024-07-13 08:20:57.334116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.733 qpair failed and we were unable to recover it. 00:34:05.733 [2024-07-13 08:20:57.334244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.733 [2024-07-13 08:20:57.334270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.733 qpair failed and we were unable to recover it. 
00:34:05.733 [2024-07-13 08:20:57.334440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.733 [2024-07-13 08:20:57.334482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.734 qpair failed and we were unable to recover it. 00:34:05.734 [2024-07-13 08:20:57.334647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.734 [2024-07-13 08:20:57.334672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.734 qpair failed and we were unable to recover it. 00:34:05.734 [2024-07-13 08:20:57.334835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.734 [2024-07-13 08:20:57.334863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.734 qpair failed and we were unable to recover it. 00:34:05.734 [2024-07-13 08:20:57.335034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.734 [2024-07-13 08:20:57.335063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.734 qpair failed and we were unable to recover it. 00:34:05.734 [2024-07-13 08:20:57.335256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.734 [2024-07-13 08:20:57.335282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.734 qpair failed and we were unable to recover it. 00:34:05.734 [2024-07-13 08:20:57.335444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.734 [2024-07-13 08:20:57.335472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.734 qpair failed and we were unable to recover it. 00:34:05.734 [2024-07-13 08:20:57.335639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.734 [2024-07-13 08:20:57.335667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.734 qpair failed and we were unable to recover it. 00:34:05.734 [2024-07-13 08:20:57.335833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.734 [2024-07-13 08:20:57.335859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.734 qpair failed and we were unable to recover it. 00:34:05.734 [2024-07-13 08:20:57.336045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.734 [2024-07-13 08:20:57.336078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.734 qpair failed and we were unable to recover it. 00:34:05.734 [2024-07-13 08:20:57.336252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.734 [2024-07-13 08:20:57.336280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.734 qpair failed and we were unable to recover it. 
00:34:05.734 [2024-07-13 08:20:57.336448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.734 [2024-07-13 08:20:57.336474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.734 qpair failed and we were unable to recover it. 00:34:05.734 [2024-07-13 08:20:57.336629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.734 [2024-07-13 08:20:57.336654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.734 qpair failed and we were unable to recover it. 00:34:05.734 [2024-07-13 08:20:57.336844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.734 [2024-07-13 08:20:57.336881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.734 qpair failed and we were unable to recover it. 00:34:05.734 [2024-07-13 08:20:57.337025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.734 [2024-07-13 08:20:57.337051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.734 qpair failed and we were unable to recover it. 00:34:05.734 [2024-07-13 08:20:57.337206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.734 [2024-07-13 08:20:57.337231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.734 qpair failed and we were unable to recover it. 00:34:05.734 [2024-07-13 08:20:57.337378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.734 [2024-07-13 08:20:57.337404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.734 qpair failed and we were unable to recover it. 00:34:05.734 [2024-07-13 08:20:57.337550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.734 [2024-07-13 08:20:57.337576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.734 qpair failed and we were unable to recover it. 00:34:05.734 [2024-07-13 08:20:57.337742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.734 [2024-07-13 08:20:57.337770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.734 qpair failed and we were unable to recover it. 00:34:05.734 [2024-07-13 08:20:57.337937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.734 [2024-07-13 08:20:57.337966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.734 qpair failed and we were unable to recover it. 00:34:05.734 [2024-07-13 08:20:57.338110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.734 [2024-07-13 08:20:57.338136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.734 qpair failed and we were unable to recover it. 
00:34:05.734 [2024-07-13 08:20:57.338335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.734 [2024-07-13 08:20:57.338364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.734 qpair failed and we were unable to recover it. 00:34:05.734 [2024-07-13 08:20:57.338502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.734 [2024-07-13 08:20:57.338532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.734 qpair failed and we were unable to recover it. 00:34:05.734 [2024-07-13 08:20:57.338680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.734 [2024-07-13 08:20:57.338705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.734 qpair failed and we were unable to recover it. 00:34:05.734 [2024-07-13 08:20:57.338857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.734 [2024-07-13 08:20:57.338887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.734 qpair failed and we were unable to recover it. 00:34:05.734 [2024-07-13 08:20:57.339038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.734 [2024-07-13 08:20:57.339082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.734 qpair failed and we were unable to recover it. 00:34:05.734 [2024-07-13 08:20:57.339252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.734 [2024-07-13 08:20:57.339278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.734 qpair failed and we were unable to recover it. 00:34:05.734 [2024-07-13 08:20:57.339442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.734 [2024-07-13 08:20:57.339471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.734 qpair failed and we were unable to recover it. 00:34:05.734 [2024-07-13 08:20:57.339612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.734 [2024-07-13 08:20:57.339641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.734 qpair failed and we were unable to recover it. 00:34:05.734 [2024-07-13 08:20:57.339804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.734 [2024-07-13 08:20:57.339829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.734 qpair failed and we were unable to recover it. 00:34:05.734 [2024-07-13 08:20:57.340000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.734 [2024-07-13 08:20:57.340030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.734 qpair failed and we were unable to recover it. 
00:34:05.734 [2024-07-13 08:20:57.340186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.734 [2024-07-13 08:20:57.340214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.734 qpair failed and we were unable to recover it. 00:34:05.734 [2024-07-13 08:20:57.340383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.734 [2024-07-13 08:20:57.340409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.734 qpair failed and we were unable to recover it. 00:34:05.734 [2024-07-13 08:20:57.340587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.734 [2024-07-13 08:20:57.340616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.734 qpair failed and we were unable to recover it. 00:34:05.734 [2024-07-13 08:20:57.340756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.734 [2024-07-13 08:20:57.340785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.734 qpair failed and we were unable to recover it. 00:34:05.734 [2024-07-13 08:20:57.340954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.734 [2024-07-13 08:20:57.340980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.734 qpair failed and we were unable to recover it. 00:34:05.734 [2024-07-13 08:20:57.341135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.734 [2024-07-13 08:20:57.341160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.734 qpair failed and we were unable to recover it. 00:34:05.734 [2024-07-13 08:20:57.341313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.734 [2024-07-13 08:20:57.341356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.734 qpair failed and we were unable to recover it. 00:34:05.734 [2024-07-13 08:20:57.341561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.734 [2024-07-13 08:20:57.341587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.734 qpair failed and we were unable to recover it. 00:34:05.734 [2024-07-13 08:20:57.341756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.734 [2024-07-13 08:20:57.341785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.734 qpair failed and we were unable to recover it. 00:34:05.734 [2024-07-13 08:20:57.341946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.734 [2024-07-13 08:20:57.341976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.734 qpair failed and we were unable to recover it. 
00:34:05.734 [2024-07-13 08:20:57.342148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.734 [2024-07-13 08:20:57.342173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.734 qpair failed and we were unable to recover it. 00:34:05.734 [2024-07-13 08:20:57.342294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.734 [2024-07-13 08:20:57.342336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.735 qpair failed and we were unable to recover it. 00:34:05.735 [2024-07-13 08:20:57.342528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.735 [2024-07-13 08:20:57.342556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.735 qpair failed and we were unable to recover it. 00:34:05.735 [2024-07-13 08:20:57.342723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.735 [2024-07-13 08:20:57.342748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.735 qpair failed and we were unable to recover it. 00:34:05.735 [2024-07-13 08:20:57.342859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.735 [2024-07-13 08:20:57.342889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.735 qpair failed and we were unable to recover it. 00:34:05.735 [2024-07-13 08:20:57.343072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.735 [2024-07-13 08:20:57.343100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.735 qpair failed and we were unable to recover it. 00:34:05.735 [2024-07-13 08:20:57.343263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.735 [2024-07-13 08:20:57.343288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.735 qpair failed and we were unable to recover it. 00:34:05.735 [2024-07-13 08:20:57.343405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.735 [2024-07-13 08:20:57.343445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.735 qpair failed and we were unable to recover it. 00:34:05.735 [2024-07-13 08:20:57.343605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.735 [2024-07-13 08:20:57.343637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.735 qpair failed and we were unable to recover it. 00:34:05.735 [2024-07-13 08:20:57.343779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.735 [2024-07-13 08:20:57.343805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.735 qpair failed and we were unable to recover it. 
00:34:05.735 [2024-07-13 08:20:57.343953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.735 [2024-07-13 08:20:57.343995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.735 qpair failed and we were unable to recover it. 00:34:05.735 [2024-07-13 08:20:57.344165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.735 [2024-07-13 08:20:57.344193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.735 qpair failed and we were unable to recover it. 00:34:05.735 [2024-07-13 08:20:57.344339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.735 [2024-07-13 08:20:57.344366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.735 qpair failed and we were unable to recover it. 00:34:05.735 [2024-07-13 08:20:57.344489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.735 [2024-07-13 08:20:57.344515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.735 qpair failed and we were unable to recover it. 00:34:05.735 [2024-07-13 08:20:57.344701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.735 [2024-07-13 08:20:57.344730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.735 qpair failed and we were unable to recover it. 00:34:05.735 [2024-07-13 08:20:57.344907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.735 [2024-07-13 08:20:57.344933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.735 qpair failed and we were unable to recover it. 00:34:05.735 [2024-07-13 08:20:57.345104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.735 [2024-07-13 08:20:57.345134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.735 qpair failed and we were unable to recover it. 00:34:05.735 [2024-07-13 08:20:57.345311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.735 [2024-07-13 08:20:57.345337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.735 qpair failed and we were unable to recover it. 00:34:05.735 [2024-07-13 08:20:57.345511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.735 [2024-07-13 08:20:57.345536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.735 qpair failed and we were unable to recover it. 00:34:05.735 [2024-07-13 08:20:57.345737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.735 [2024-07-13 08:20:57.345765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.735 qpair failed and we were unable to recover it. 
00:34:05.735 [2024-07-13 08:20:57.345925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.735 [2024-07-13 08:20:57.345955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.735 qpair failed and we were unable to recover it. 00:34:05.735 [2024-07-13 08:20:57.346126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.735 [2024-07-13 08:20:57.346151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.735 qpair failed and we were unable to recover it. 00:34:05.735 [2024-07-13 08:20:57.346308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.735 [2024-07-13 08:20:57.346334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.735 qpair failed and we were unable to recover it. 00:34:05.735 [2024-07-13 08:20:57.346484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.735 [2024-07-13 08:20:57.346509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.735 qpair failed and we were unable to recover it. 00:34:05.735 [2024-07-13 08:20:57.346656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.735 [2024-07-13 08:20:57.346681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.735 qpair failed and we were unable to recover it. 00:34:05.735 [2024-07-13 08:20:57.346807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.735 [2024-07-13 08:20:57.346834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.735 qpair failed and we were unable to recover it. 00:34:05.735 [2024-07-13 08:20:57.346989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.735 [2024-07-13 08:20:57.347016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.735 qpair failed and we were unable to recover it. 00:34:05.735 [2024-07-13 08:20:57.347162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.735 [2024-07-13 08:20:57.347187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.735 qpair failed and we were unable to recover it. 00:34:05.735 [2024-07-13 08:20:57.347356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.735 [2024-07-13 08:20:57.347385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.735 qpair failed and we were unable to recover it. 00:34:05.735 [2024-07-13 08:20:57.347552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.735 [2024-07-13 08:20:57.347581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.735 qpair failed and we were unable to recover it. 
00:34:05.735 [2024-07-13 08:20:57.347717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.735 [2024-07-13 08:20:57.347743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.735 qpair failed and we were unable to recover it. 00:34:05.735 [2024-07-13 08:20:57.347885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.735 [2024-07-13 08:20:57.347927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.735 qpair failed and we were unable to recover it. 00:34:05.735 [2024-07-13 08:20:57.348086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.735 [2024-07-13 08:20:57.348115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.735 qpair failed and we were unable to recover it. 00:34:05.735 [2024-07-13 08:20:57.348308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.735 [2024-07-13 08:20:57.348334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.735 qpair failed and we were unable to recover it. 00:34:05.735 [2024-07-13 08:20:57.348485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.735 [2024-07-13 08:20:57.348511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.735 qpair failed and we were unable to recover it. 00:34:05.735 [2024-07-13 08:20:57.348672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.735 [2024-07-13 08:20:57.348706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:05.735 qpair failed and we were unable to recover it. 00:34:05.735 [2024-07-13 08:20:57.348887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.735 [2024-07-13 08:20:57.348919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:05.735 qpair failed and we were unable to recover it. 00:34:05.735 [2024-07-13 08:20:57.349113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.735 [2024-07-13 08:20:57.349146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:05.736 qpair failed and we were unable to recover it. 00:34:05.736 [2024-07-13 08:20:57.349426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.736 [2024-07-13 08:20:57.349477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.736 qpair failed and we were unable to recover it. 00:34:05.736 [2024-07-13 08:20:57.349653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.736 [2024-07-13 08:20:57.349680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.736 qpair failed and we were unable to recover it. 
00:34:05.736 [2024-07-13 08:20:57.349810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.736 [2024-07-13 08:20:57.349837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.736 qpair failed and we were unable to recover it. 00:34:05.736 [2024-07-13 08:20:57.349973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.736 [2024-07-13 08:20:57.350000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.736 qpair failed and we were unable to recover it. 00:34:05.736 [2024-07-13 08:20:57.350191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.736 [2024-07-13 08:20:57.350218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.736 qpair failed and we were unable to recover it. 00:34:05.736 [2024-07-13 08:20:57.350374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.736 [2024-07-13 08:20:57.350399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.736 qpair failed and we were unable to recover it. 00:34:05.736 [2024-07-13 08:20:57.350576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.736 [2024-07-13 08:20:57.350601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.736 qpair failed and we were unable to recover it. 00:34:05.736 [2024-07-13 08:20:57.350725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.736 [2024-07-13 08:20:57.350750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.736 qpair failed and we were unable to recover it. 00:34:05.736 [2024-07-13 08:20:57.350944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.736 [2024-07-13 08:20:57.350974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.736 qpair failed and we were unable to recover it. 00:34:05.736 [2024-07-13 08:20:57.351141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.736 [2024-07-13 08:20:57.351170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.736 qpair failed and we were unable to recover it. 00:34:05.736 [2024-07-13 08:20:57.351337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.736 [2024-07-13 08:20:57.351367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.736 qpair failed and we were unable to recover it. 00:34:05.736 [2024-07-13 08:20:57.351562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.736 [2024-07-13 08:20:57.351590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.736 qpair failed and we were unable to recover it. 
00:34:05.736 [2024-07-13 08:20:57.351721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.736 [2024-07-13 08:20:57.351749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.736 qpair failed and we were unable to recover it. 00:34:05.736 [2024-07-13 08:20:57.351889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.736 [2024-07-13 08:20:57.351915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.736 qpair failed and we were unable to recover it. 00:34:05.736 [2024-07-13 08:20:57.352060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.736 [2024-07-13 08:20:57.352104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.736 qpair failed and we were unable to recover it. 00:34:05.736 [2024-07-13 08:20:57.352271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.736 [2024-07-13 08:20:57.352300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.736 qpair failed and we were unable to recover it. 00:34:05.736 [2024-07-13 08:20:57.352481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.736 [2024-07-13 08:20:57.352506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.736 qpair failed and we were unable to recover it. 00:34:05.736 [2024-07-13 08:20:57.352701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.736 [2024-07-13 08:20:57.352729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.736 qpair failed and we were unable to recover it. 00:34:05.736 [2024-07-13 08:20:57.352890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.736 [2024-07-13 08:20:57.352919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.736 qpair failed and we were unable to recover it. 00:34:05.736 [2024-07-13 08:20:57.353067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.736 [2024-07-13 08:20:57.353092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.736 qpair failed and we were unable to recover it. 00:34:05.736 [2024-07-13 08:20:57.353262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.736 [2024-07-13 08:20:57.353304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.736 qpair failed and we were unable to recover it. 00:34:05.736 [2024-07-13 08:20:57.353561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.736 [2024-07-13 08:20:57.353612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.736 qpair failed and we were unable to recover it. 
00:34:05.736 [2024-07-13 08:20:57.353781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.736 [2024-07-13 08:20:57.353807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.736 qpair failed and we were unable to recover it. 00:34:05.736 [2024-07-13 08:20:57.353961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.736 [2024-07-13 08:20:57.353987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.736 qpair failed and we were unable to recover it. 00:34:05.736 [2024-07-13 08:20:57.354166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.736 [2024-07-13 08:20:57.354191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.736 qpair failed and we were unable to recover it. 00:34:05.736 [2024-07-13 08:20:57.354379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.736 [2024-07-13 08:20:57.354404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.736 qpair failed and we were unable to recover it. 00:34:05.736 [2024-07-13 08:20:57.354553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.736 [2024-07-13 08:20:57.354594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.736 qpair failed and we were unable to recover it. 00:34:05.736 [2024-07-13 08:20:57.354731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.736 [2024-07-13 08:20:57.354760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.736 qpair failed and we were unable to recover it. 00:34:05.736 [2024-07-13 08:20:57.354901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.736 [2024-07-13 08:20:57.354928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.736 qpair failed and we were unable to recover it. 00:34:05.736 [2024-07-13 08:20:57.355051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.736 [2024-07-13 08:20:57.355076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.736 qpair failed and we were unable to recover it. 00:34:05.736 [2024-07-13 08:20:57.355198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.736 [2024-07-13 08:20:57.355224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.736 qpair failed and we were unable to recover it. 00:34:05.736 [2024-07-13 08:20:57.355378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.736 [2024-07-13 08:20:57.355403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.736 qpair failed and we were unable to recover it. 
00:34:05.736 [2024-07-13 08:20:57.355526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:05.736 [2024-07-13 08:20:57.355553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420
00:34:05.736 qpair failed and we were unable to recover it.
[... the same three-line failure repeats for roughly 200 further connect() attempts between 08:20:57.355 and 08:20:57.396 (console time 00:34:05.736-00:34:05.742); every attempt to 10.0.0.2:4420 is refused with errno = 111 on tqpair 0x7f8fec000b90, apart from three attempts around 08:20:57.372 on tqpair 0x7f8fdc000b90 ...]
00:34:05.742 [2024-07-13 08:20:57.396992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.742 [2024-07-13 08:20:57.397022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.742 qpair failed and we were unable to recover it. 00:34:05.742 [2024-07-13 08:20:57.397185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.742 [2024-07-13 08:20:57.397214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.742 qpair failed and we were unable to recover it. 00:34:05.742 [2024-07-13 08:20:57.397409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.742 [2024-07-13 08:20:57.397434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.742 qpair failed and we were unable to recover it. 00:34:05.742 [2024-07-13 08:20:57.397629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.742 [2024-07-13 08:20:57.397657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.742 qpair failed and we were unable to recover it. 00:34:05.742 [2024-07-13 08:20:57.397833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.742 [2024-07-13 08:20:57.397859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.742 qpair failed and we were unable to recover it. 00:34:05.742 [2024-07-13 08:20:57.398042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.742 [2024-07-13 08:20:57.398068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.742 qpair failed and we were unable to recover it. 00:34:05.742 [2024-07-13 08:20:57.398256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.742 [2024-07-13 08:20:57.398284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.742 qpair failed and we were unable to recover it. 00:34:05.742 [2024-07-13 08:20:57.398419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.742 [2024-07-13 08:20:57.398447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.742 qpair failed and we were unable to recover it. 00:34:05.742 [2024-07-13 08:20:57.398593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.742 [2024-07-13 08:20:57.398618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.742 qpair failed and we were unable to recover it. 00:34:05.742 [2024-07-13 08:20:57.398816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.742 [2024-07-13 08:20:57.398845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.742 qpair failed and we were unable to recover it. 
00:34:05.742 [2024-07-13 08:20:57.398984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.742 [2024-07-13 08:20:57.399014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.742 qpair failed and we were unable to recover it. 00:34:05.742 [2024-07-13 08:20:57.399163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.742 [2024-07-13 08:20:57.399188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.742 qpair failed and we were unable to recover it. 00:34:05.742 [2024-07-13 08:20:57.399339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.742 [2024-07-13 08:20:57.399365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.742 qpair failed and we were unable to recover it. 00:34:05.742 [2024-07-13 08:20:57.399483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.742 [2024-07-13 08:20:57.399509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.742 qpair failed and we were unable to recover it. 00:34:05.742 [2024-07-13 08:20:57.399684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.742 [2024-07-13 08:20:57.399709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.742 qpair failed and we were unable to recover it. 00:34:05.742 [2024-07-13 08:20:57.399879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.742 [2024-07-13 08:20:57.399908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.742 qpair failed and we were unable to recover it. 00:34:05.742 [2024-07-13 08:20:57.400073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.742 [2024-07-13 08:20:57.400102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.742 qpair failed and we were unable to recover it. 00:34:05.742 [2024-07-13 08:20:57.400270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.742 [2024-07-13 08:20:57.400297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.742 qpair failed and we were unable to recover it. 00:34:05.742 [2024-07-13 08:20:57.400448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.742 [2024-07-13 08:20:57.400474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.742 qpair failed and we were unable to recover it. 00:34:05.742 [2024-07-13 08:20:57.400588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.742 [2024-07-13 08:20:57.400614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.742 qpair failed and we were unable to recover it. 
00:34:05.742 [2024-07-13 08:20:57.400789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.742 [2024-07-13 08:20:57.400815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.742 qpair failed and we were unable to recover it. 00:34:05.742 [2024-07-13 08:20:57.400940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.742 [2024-07-13 08:20:57.400967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.742 qpair failed and we were unable to recover it. 00:34:05.742 [2024-07-13 08:20:57.401126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.742 [2024-07-13 08:20:57.401171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.742 qpair failed and we were unable to recover it. 00:34:05.742 [2024-07-13 08:20:57.401338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.742 [2024-07-13 08:20:57.401364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.742 qpair failed and we were unable to recover it. 00:34:05.742 [2024-07-13 08:20:57.401529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.742 [2024-07-13 08:20:57.401559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.742 qpair failed and we were unable to recover it. 00:34:05.742 [2024-07-13 08:20:57.401725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.742 [2024-07-13 08:20:57.401753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.742 qpair failed and we were unable to recover it. 00:34:05.742 [2024-07-13 08:20:57.401898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.742 [2024-07-13 08:20:57.401924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.742 qpair failed and we were unable to recover it. 00:34:05.742 [2024-07-13 08:20:57.402076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.742 [2024-07-13 08:20:57.402120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.742 qpair failed and we were unable to recover it. 00:34:05.742 [2024-07-13 08:20:57.402255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.742 [2024-07-13 08:20:57.402285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.742 qpair failed and we were unable to recover it. 00:34:05.742 [2024-07-13 08:20:57.402459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.742 [2024-07-13 08:20:57.402485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.742 qpair failed and we were unable to recover it. 
00:34:05.742 [2024-07-13 08:20:57.402635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.742 [2024-07-13 08:20:57.402660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.742 qpair failed and we were unable to recover it. 00:34:05.742 [2024-07-13 08:20:57.402850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.742 [2024-07-13 08:20:57.402884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.742 qpair failed and we were unable to recover it. 00:34:05.742 [2024-07-13 08:20:57.403058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.742 [2024-07-13 08:20:57.403084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.742 qpair failed and we were unable to recover it. 00:34:05.742 [2024-07-13 08:20:57.403239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.742 [2024-07-13 08:20:57.403281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.742 qpair failed and we were unable to recover it. 00:34:05.742 [2024-07-13 08:20:57.403410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.742 [2024-07-13 08:20:57.403438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.742 qpair failed and we were unable to recover it. 00:34:05.742 [2024-07-13 08:20:57.403615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.742 [2024-07-13 08:20:57.403645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.742 qpair failed and we were unable to recover it. 00:34:05.742 [2024-07-13 08:20:57.403809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.742 [2024-07-13 08:20:57.403835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.743 qpair failed and we were unable to recover it. 00:34:05.743 [2024-07-13 08:20:57.404034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.743 [2024-07-13 08:20:57.404060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.743 qpair failed and we were unable to recover it. 00:34:05.743 [2024-07-13 08:20:57.404207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.743 [2024-07-13 08:20:57.404233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.743 qpair failed and we were unable to recover it. 00:34:05.743 [2024-07-13 08:20:57.404381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.743 [2024-07-13 08:20:57.404406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.743 qpair failed and we were unable to recover it. 
00:34:05.743 [2024-07-13 08:20:57.404583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.743 [2024-07-13 08:20:57.404608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.743 qpair failed and we were unable to recover it. 00:34:05.743 [2024-07-13 08:20:57.404730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.743 [2024-07-13 08:20:57.404756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.743 qpair failed and we were unable to recover it. 00:34:05.743 [2024-07-13 08:20:57.404902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.743 [2024-07-13 08:20:57.404945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.743 qpair failed and we were unable to recover it. 00:34:05.743 [2024-07-13 08:20:57.405156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.743 [2024-07-13 08:20:57.405181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.743 qpair failed and we were unable to recover it. 00:34:05.743 [2024-07-13 08:20:57.405326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.743 [2024-07-13 08:20:57.405352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.743 qpair failed and we were unable to recover it. 00:34:05.743 [2024-07-13 08:20:57.405519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.743 [2024-07-13 08:20:57.405548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.743 qpair failed and we were unable to recover it. 00:34:05.743 [2024-07-13 08:20:57.405710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.743 [2024-07-13 08:20:57.405738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.743 qpair failed and we were unable to recover it. 00:34:05.743 [2024-07-13 08:20:57.405930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.743 [2024-07-13 08:20:57.405956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.743 qpair failed and we were unable to recover it. 00:34:05.743 [2024-07-13 08:20:57.406128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.743 [2024-07-13 08:20:57.406156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.743 qpair failed and we were unable to recover it. 00:34:05.743 [2024-07-13 08:20:57.406295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.743 [2024-07-13 08:20:57.406323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.743 qpair failed and we were unable to recover it. 
00:34:05.743 [2024-07-13 08:20:57.406495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.743 [2024-07-13 08:20:57.406520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.743 qpair failed and we were unable to recover it. 00:34:05.743 [2024-07-13 08:20:57.406637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.743 [2024-07-13 08:20:57.406678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.743 qpair failed and we were unable to recover it. 00:34:05.743 [2024-07-13 08:20:57.406872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.743 [2024-07-13 08:20:57.406901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.743 qpair failed and we were unable to recover it. 00:34:05.743 [2024-07-13 08:20:57.407069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.743 [2024-07-13 08:20:57.407095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.743 qpair failed and we were unable to recover it. 00:34:05.743 [2024-07-13 08:20:57.407249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.743 [2024-07-13 08:20:57.407275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.743 qpair failed and we were unable to recover it. 00:34:05.743 [2024-07-13 08:20:57.407426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.743 [2024-07-13 08:20:57.407467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.743 qpair failed and we were unable to recover it. 00:34:05.743 [2024-07-13 08:20:57.407664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.743 [2024-07-13 08:20:57.407689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.743 qpair failed and we were unable to recover it. 00:34:05.743 [2024-07-13 08:20:57.407850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.743 [2024-07-13 08:20:57.407884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.743 qpair failed and we were unable to recover it. 00:34:05.743 [2024-07-13 08:20:57.408045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.743 [2024-07-13 08:20:57.408075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.743 qpair failed and we were unable to recover it. 00:34:05.743 [2024-07-13 08:20:57.408269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.743 [2024-07-13 08:20:57.408295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.743 qpair failed and we were unable to recover it. 
00:34:05.743 [2024-07-13 08:20:57.408418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.743 [2024-07-13 08:20:57.408444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.743 qpair failed and we were unable to recover it. 00:34:05.743 [2024-07-13 08:20:57.408596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.743 [2024-07-13 08:20:57.408622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.743 qpair failed and we were unable to recover it. 00:34:05.743 [2024-07-13 08:20:57.408794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.743 [2024-07-13 08:20:57.408819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.743 qpair failed and we were unable to recover it. 00:34:05.743 [2024-07-13 08:20:57.408937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.743 [2024-07-13 08:20:57.408964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.743 qpair failed and we were unable to recover it. 00:34:05.743 [2024-07-13 08:20:57.409141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.743 [2024-07-13 08:20:57.409169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.743 qpair failed and we were unable to recover it. 00:34:05.743 [2024-07-13 08:20:57.409341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.743 [2024-07-13 08:20:57.409366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.743 qpair failed and we were unable to recover it. 00:34:05.743 [2024-07-13 08:20:57.409508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.743 [2024-07-13 08:20:57.409534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.743 qpair failed and we were unable to recover it. 00:34:05.743 [2024-07-13 08:20:57.409705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.743 [2024-07-13 08:20:57.409733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.743 qpair failed and we were unable to recover it. 00:34:05.743 [2024-07-13 08:20:57.409870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.743 [2024-07-13 08:20:57.409896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.743 qpair failed and we were unable to recover it. 00:34:05.743 [2024-07-13 08:20:57.410089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.743 [2024-07-13 08:20:57.410117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.743 qpair failed and we were unable to recover it. 
00:34:05.743 [2024-07-13 08:20:57.410318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.743 [2024-07-13 08:20:57.410343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.743 qpair failed and we were unable to recover it. 00:34:05.743 [2024-07-13 08:20:57.410516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:05.743 [2024-07-13 08:20:57.410541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:05.743 qpair failed and we were unable to recover it. 00:34:06.029 [2024-07-13 08:20:57.410664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.029 [2024-07-13 08:20:57.410690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.029 qpair failed and we were unable to recover it. 00:34:06.029 [2024-07-13 08:20:57.410887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.029 [2024-07-13 08:20:57.410916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.029 qpair failed and we were unable to recover it. 00:34:06.029 [2024-07-13 08:20:57.411087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.029 [2024-07-13 08:20:57.411113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.029 qpair failed and we were unable to recover it. 00:34:06.029 [2024-07-13 08:20:57.411241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.029 [2024-07-13 08:20:57.411290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.029 qpair failed and we were unable to recover it. 00:34:06.029 [2024-07-13 08:20:57.411466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.029 [2024-07-13 08:20:57.411495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.029 qpair failed and we were unable to recover it. 00:34:06.029 [2024-07-13 08:20:57.411662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.029 [2024-07-13 08:20:57.411688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.029 qpair failed and we were unable to recover it. 00:34:06.029 [2024-07-13 08:20:57.411814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.029 [2024-07-13 08:20:57.411840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.029 qpair failed and we were unable to recover it. 00:34:06.029 [2024-07-13 08:20:57.412006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.029 [2024-07-13 08:20:57.412032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.029 qpair failed and we were unable to recover it. 
00:34:06.029 [2024-07-13 08:20:57.412152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.029 [2024-07-13 08:20:57.412177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.029 qpair failed and we were unable to recover it. 00:34:06.029 [2024-07-13 08:20:57.412351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.029 [2024-07-13 08:20:57.412393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.029 qpair failed and we were unable to recover it. 00:34:06.029 [2024-07-13 08:20:57.412518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.029 [2024-07-13 08:20:57.412546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.029 qpair failed and we were unable to recover it. 00:34:06.029 [2024-07-13 08:20:57.412716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.029 [2024-07-13 08:20:57.412742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.029 qpair failed and we were unable to recover it. 00:34:06.029 [2024-07-13 08:20:57.412908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.029 [2024-07-13 08:20:57.412938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.029 qpair failed and we were unable to recover it. 00:34:06.029 [2024-07-13 08:20:57.413129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.029 [2024-07-13 08:20:57.413157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.029 qpair failed and we were unable to recover it. 00:34:06.029 [2024-07-13 08:20:57.413325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.029 [2024-07-13 08:20:57.413350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.029 qpair failed and we were unable to recover it. 00:34:06.029 [2024-07-13 08:20:57.413500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.029 [2024-07-13 08:20:57.413541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.029 qpair failed and we were unable to recover it. 00:34:06.029 [2024-07-13 08:20:57.413745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.029 [2024-07-13 08:20:57.413770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.029 qpair failed and we were unable to recover it. 00:34:06.029 [2024-07-13 08:20:57.413921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.029 [2024-07-13 08:20:57.413947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.029 qpair failed and we were unable to recover it. 
00:34:06.029 [2024-07-13 08:20:57.414067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.029 [2024-07-13 08:20:57.414092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.029 qpair failed and we were unable to recover it. 00:34:06.029 [2024-07-13 08:20:57.414268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.029 [2024-07-13 08:20:57.414310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.029 qpair failed and we were unable to recover it. 00:34:06.029 [2024-07-13 08:20:57.414481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.029 [2024-07-13 08:20:57.414508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.029 qpair failed and we were unable to recover it. 00:34:06.029 [2024-07-13 08:20:57.414676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.029 [2024-07-13 08:20:57.414704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.029 qpair failed and we were unable to recover it. 00:34:06.029 [2024-07-13 08:20:57.414906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.029 [2024-07-13 08:20:57.414935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.029 qpair failed and we were unable to recover it. 00:34:06.029 [2024-07-13 08:20:57.415084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.029 [2024-07-13 08:20:57.415110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.029 qpair failed and we were unable to recover it. 00:34:06.029 [2024-07-13 08:20:57.415261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.029 [2024-07-13 08:20:57.415286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.029 qpair failed and we were unable to recover it. 00:34:06.029 [2024-07-13 08:20:57.415503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.029 [2024-07-13 08:20:57.415528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.029 qpair failed and we were unable to recover it. 00:34:06.029 [2024-07-13 08:20:57.415648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.029 [2024-07-13 08:20:57.415673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.029 qpair failed and we were unable to recover it. 00:34:06.029 [2024-07-13 08:20:57.415870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.029 [2024-07-13 08:20:57.415900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.029 qpair failed and we were unable to recover it. 
00:34:06.029 [2024-07-13 08:20:57.416067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.029 [2024-07-13 08:20:57.416092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.029 qpair failed and we were unable to recover it. 00:34:06.029 [2024-07-13 08:20:57.416267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.029 [2024-07-13 08:20:57.416292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.029 qpair failed and we were unable to recover it. 00:34:06.029 [2024-07-13 08:20:57.416439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.029 [2024-07-13 08:20:57.416464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.029 qpair failed and we were unable to recover it. 00:34:06.029 [2024-07-13 08:20:57.416614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.029 [2024-07-13 08:20:57.416658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.029 qpair failed and we were unable to recover it. 00:34:06.029 [2024-07-13 08:20:57.416844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.029 [2024-07-13 08:20:57.416875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.029 qpair failed and we were unable to recover it. 00:34:06.029 [2024-07-13 08:20:57.417051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.029 [2024-07-13 08:20:57.417080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.029 qpair failed and we were unable to recover it. 00:34:06.029 [2024-07-13 08:20:57.417240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.029 [2024-07-13 08:20:57.417269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.029 qpair failed and we were unable to recover it. 00:34:06.029 [2024-07-13 08:20:57.417435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.029 [2024-07-13 08:20:57.417460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.029 qpair failed and we were unable to recover it. 00:34:06.029 [2024-07-13 08:20:57.417652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.029 [2024-07-13 08:20:57.417681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.029 qpair failed and we were unable to recover it. 00:34:06.029 [2024-07-13 08:20:57.417840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.029 [2024-07-13 08:20:57.417875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.029 qpair failed and we were unable to recover it. 
00:34:06.029 [2024-07-13 08:20:57.418074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.029 [2024-07-13 08:20:57.418100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.030 qpair failed and we were unable to recover it. 00:34:06.030 [2024-07-13 08:20:57.418245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.030 [2024-07-13 08:20:57.418273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.030 qpair failed and we were unable to recover it. 00:34:06.030 [2024-07-13 08:20:57.418443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.030 [2024-07-13 08:20:57.418468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.030 qpair failed and we were unable to recover it. 00:34:06.030 [2024-07-13 08:20:57.418589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.030 [2024-07-13 08:20:57.418615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.030 qpair failed and we were unable to recover it. 00:34:06.030 [2024-07-13 08:20:57.418791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.030 [2024-07-13 08:20:57.418834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.030 qpair failed and we were unable to recover it. 00:34:06.030 [2024-07-13 08:20:57.418994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.030 [2024-07-13 08:20:57.419027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.030 qpair failed and we were unable to recover it. 00:34:06.030 [2024-07-13 08:20:57.419200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.030 [2024-07-13 08:20:57.419225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.030 qpair failed and we were unable to recover it. 00:34:06.030 [2024-07-13 08:20:57.419347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.030 [2024-07-13 08:20:57.419389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.030 qpair failed and we were unable to recover it. 00:34:06.030 [2024-07-13 08:20:57.419583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.030 [2024-07-13 08:20:57.419608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.030 qpair failed and we were unable to recover it. 00:34:06.030 [2024-07-13 08:20:57.419754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.030 [2024-07-13 08:20:57.419780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.030 qpair failed and we were unable to recover it. 
00:34:06.030 [2024-07-13 08:20:57.419918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.030 [2024-07-13 08:20:57.419971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.030 qpair failed and we were unable to recover it. 00:34:06.030 [2024-07-13 08:20:57.420131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.030 [2024-07-13 08:20:57.420160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.030 qpair failed and we were unable to recover it. 00:34:06.030 [2024-07-13 08:20:57.420326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.030 [2024-07-13 08:20:57.420351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.030 qpair failed and we were unable to recover it. 00:34:06.030 [2024-07-13 08:20:57.420518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.030 [2024-07-13 08:20:57.420546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.030 qpair failed and we were unable to recover it. 00:34:06.030 [2024-07-13 08:20:57.420709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.030 [2024-07-13 08:20:57.420737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.030 qpair failed and we were unable to recover it. 00:34:06.030 [2024-07-13 08:20:57.420906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.030 [2024-07-13 08:20:57.420932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.030 qpair failed and we were unable to recover it. 00:34:06.030 [2024-07-13 08:20:57.421089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.030 [2024-07-13 08:20:57.421117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.030 qpair failed and we were unable to recover it. 00:34:06.030 [2024-07-13 08:20:57.421314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.030 [2024-07-13 08:20:57.421343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.030 qpair failed and we were unable to recover it. 00:34:06.030 [2024-07-13 08:20:57.421505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.030 [2024-07-13 08:20:57.421531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.030 qpair failed and we were unable to recover it. 00:34:06.030 [2024-07-13 08:20:57.421691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.030 [2024-07-13 08:20:57.421719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.030 qpair failed and we were unable to recover it. 
00:34:06.030 [2024-07-13 08:20:57.421852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:06.030 [2024-07-13 08:20:57.421888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420
00:34:06.030 qpair failed and we were unable to recover it.
00:34:06.035 [... the same three-line record — connect() failed, errno = 111 / sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it. — repeats with successive timestamps from 2024-07-13 08:20:57.421852 through 08:20:57.461751 ...]
00:34:06.035 [2024-07-13 08:20:57.461928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.035 [2024-07-13 08:20:57.461954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.035 qpair failed and we were unable to recover it. 00:34:06.035 [2024-07-13 08:20:57.462116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.035 [2024-07-13 08:20:57.462143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.035 qpair failed and we were unable to recover it. 00:34:06.035 [2024-07-13 08:20:57.462293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.035 [2024-07-13 08:20:57.462319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.035 qpair failed and we were unable to recover it. 00:34:06.035 [2024-07-13 08:20:57.462432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.035 [2024-07-13 08:20:57.462458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.035 qpair failed and we were unable to recover it. 00:34:06.035 [2024-07-13 08:20:57.462605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.035 [2024-07-13 08:20:57.462630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.035 qpair failed and we were unable to recover it. 00:34:06.035 [2024-07-13 08:20:57.462792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.035 [2024-07-13 08:20:57.462820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.035 qpair failed and we were unable to recover it. 00:34:06.035 [2024-07-13 08:20:57.462990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.036 [2024-07-13 08:20:57.463019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.036 qpair failed and we were unable to recover it. 00:34:06.036 [2024-07-13 08:20:57.463193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.036 [2024-07-13 08:20:57.463219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.036 qpair failed and we were unable to recover it. 00:34:06.036 [2024-07-13 08:20:57.463377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.036 [2024-07-13 08:20:57.463406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.036 qpair failed and we were unable to recover it. 00:34:06.036 [2024-07-13 08:20:57.463591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.036 [2024-07-13 08:20:57.463619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.036 qpair failed and we were unable to recover it. 
00:34:06.036 [2024-07-13 08:20:57.463762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.036 [2024-07-13 08:20:57.463787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.036 qpair failed and we were unable to recover it. 00:34:06.036 [2024-07-13 08:20:57.463934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.036 [2024-07-13 08:20:57.463960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.036 qpair failed and we were unable to recover it. 00:34:06.036 [2024-07-13 08:20:57.464091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.036 [2024-07-13 08:20:57.464117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.036 qpair failed and we were unable to recover it. 00:34:06.036 [2024-07-13 08:20:57.464262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.036 [2024-07-13 08:20:57.464288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.036 qpair failed and we were unable to recover it. 00:34:06.036 [2024-07-13 08:20:57.464410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.036 [2024-07-13 08:20:57.464436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.036 qpair failed and we were unable to recover it. 00:34:06.036 [2024-07-13 08:20:57.464590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.036 [2024-07-13 08:20:57.464621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.036 qpair failed and we were unable to recover it. 00:34:06.036 [2024-07-13 08:20:57.464784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.036 [2024-07-13 08:20:57.464810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.036 qpair failed and we were unable to recover it. 00:34:06.036 [2024-07-13 08:20:57.464963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.036 [2024-07-13 08:20:57.464989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.036 qpair failed and we were unable to recover it. 00:34:06.036 [2024-07-13 08:20:57.465144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.036 [2024-07-13 08:20:57.465170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.036 qpair failed and we were unable to recover it. 00:34:06.036 [2024-07-13 08:20:57.465321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.036 [2024-07-13 08:20:57.465347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.036 qpair failed and we were unable to recover it. 
00:34:06.036 [2024-07-13 08:20:57.465485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.036 [2024-07-13 08:20:57.465513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.036 qpair failed and we were unable to recover it. 00:34:06.036 [2024-07-13 08:20:57.465647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.036 [2024-07-13 08:20:57.465677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.036 qpair failed and we were unable to recover it. 00:34:06.036 [2024-07-13 08:20:57.465873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.036 [2024-07-13 08:20:57.465899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.036 qpair failed and we were unable to recover it. 00:34:06.036 [2024-07-13 08:20:57.466025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.036 [2024-07-13 08:20:57.466051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.036 qpair failed and we were unable to recover it. 00:34:06.036 [2024-07-13 08:20:57.466197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.036 [2024-07-13 08:20:57.466222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.036 qpair failed and we were unable to recover it. 00:34:06.036 [2024-07-13 08:20:57.466371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.036 [2024-07-13 08:20:57.466397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.036 qpair failed and we were unable to recover it. 00:34:06.036 [2024-07-13 08:20:57.466561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.036 [2024-07-13 08:20:57.466590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.036 qpair failed and we were unable to recover it. 00:34:06.036 [2024-07-13 08:20:57.466756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.036 [2024-07-13 08:20:57.466784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.036 qpair failed and we were unable to recover it. 00:34:06.036 [2024-07-13 08:20:57.466926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.036 [2024-07-13 08:20:57.466952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.036 qpair failed and we were unable to recover it. 00:34:06.036 [2024-07-13 08:20:57.467079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.036 [2024-07-13 08:20:57.467106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.036 qpair failed and we were unable to recover it. 
00:34:06.036 [2024-07-13 08:20:57.467295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.036 [2024-07-13 08:20:57.467322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.036 qpair failed and we were unable to recover it. 00:34:06.036 [2024-07-13 08:20:57.467497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.036 [2024-07-13 08:20:57.467523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.036 qpair failed and we were unable to recover it. 00:34:06.036 [2024-07-13 08:20:57.467735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.036 [2024-07-13 08:20:57.467763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.036 qpair failed and we were unable to recover it. 00:34:06.036 [2024-07-13 08:20:57.467900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.036 [2024-07-13 08:20:57.467929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.036 qpair failed and we were unable to recover it. 00:34:06.036 [2024-07-13 08:20:57.468088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.036 [2024-07-13 08:20:57.468114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.036 qpair failed and we were unable to recover it. 00:34:06.036 [2024-07-13 08:20:57.468233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.036 [2024-07-13 08:20:57.468275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.036 qpair failed and we were unable to recover it. 00:34:06.036 [2024-07-13 08:20:57.468437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.036 [2024-07-13 08:20:57.468465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.036 qpair failed and we were unable to recover it. 00:34:06.036 [2024-07-13 08:20:57.468655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.036 [2024-07-13 08:20:57.468681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.036 qpair failed and we were unable to recover it. 00:34:06.036 [2024-07-13 08:20:57.468840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.036 [2024-07-13 08:20:57.468873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.036 qpair failed and we were unable to recover it. 00:34:06.036 [2024-07-13 08:20:57.469043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.036 [2024-07-13 08:20:57.469072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.036 qpair failed and we were unable to recover it. 
00:34:06.036 [2024-07-13 08:20:57.469248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.036 [2024-07-13 08:20:57.469273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.036 qpair failed and we were unable to recover it. 00:34:06.036 [2024-07-13 08:20:57.469463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.036 [2024-07-13 08:20:57.469492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.036 qpair failed and we were unable to recover it. 00:34:06.036 [2024-07-13 08:20:57.469700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.036 [2024-07-13 08:20:57.469739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:06.036 qpair failed and we were unable to recover it. 00:34:06.036 [2024-07-13 08:20:57.469941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.036 [2024-07-13 08:20:57.469973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:06.036 qpair failed and we were unable to recover it. 00:34:06.036 [2024-07-13 08:20:57.470168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.036 [2024-07-13 08:20:57.470202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:06.036 qpair failed and we were unable to recover it. 00:34:06.036 [2024-07-13 08:20:57.470469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.036 [2024-07-13 08:20:57.470525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:06.036 qpair failed and we were unable to recover it. 00:34:06.036 [2024-07-13 08:20:57.470715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.036 [2024-07-13 08:20:57.470744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:06.036 qpair failed and we were unable to recover it. 00:34:06.037 [2024-07-13 08:20:57.470933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.037 [2024-07-13 08:20:57.470967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:06.037 qpair failed and we were unable to recover it. 00:34:06.037 [2024-07-13 08:20:57.471125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.037 [2024-07-13 08:20:57.471156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.037 qpair failed and we were unable to recover it. 00:34:06.037 [2024-07-13 08:20:57.471305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.037 [2024-07-13 08:20:57.471330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.037 qpair failed and we were unable to recover it. 
00:34:06.037 [2024-07-13 08:20:57.471473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.037 [2024-07-13 08:20:57.471514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.037 qpair failed and we were unable to recover it. 00:34:06.037 [2024-07-13 08:20:57.471701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.037 [2024-07-13 08:20:57.471757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.037 qpair failed and we were unable to recover it. 00:34:06.037 [2024-07-13 08:20:57.471968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.037 [2024-07-13 08:20:57.471995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.037 qpair failed and we were unable to recover it. 00:34:06.037 [2024-07-13 08:20:57.472131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.037 [2024-07-13 08:20:57.472160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.037 qpair failed and we were unable to recover it. 00:34:06.037 [2024-07-13 08:20:57.472353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.037 [2024-07-13 08:20:57.472378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.037 qpair failed and we were unable to recover it. 00:34:06.037 [2024-07-13 08:20:57.472523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.037 [2024-07-13 08:20:57.472554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.037 qpair failed and we were unable to recover it. 00:34:06.037 [2024-07-13 08:20:57.472724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.037 [2024-07-13 08:20:57.472752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.037 qpair failed and we were unable to recover it. 00:34:06.037 [2024-07-13 08:20:57.472924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.037 [2024-07-13 08:20:57.472950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.037 qpair failed and we were unable to recover it. 00:34:06.037 [2024-07-13 08:20:57.473098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.037 [2024-07-13 08:20:57.473123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.037 qpair failed and we were unable to recover it. 00:34:06.037 [2024-07-13 08:20:57.473298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.037 [2024-07-13 08:20:57.473326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.037 qpair failed and we were unable to recover it. 
00:34:06.037 [2024-07-13 08:20:57.473605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.037 [2024-07-13 08:20:57.473656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.037 qpair failed and we were unable to recover it. 00:34:06.037 [2024-07-13 08:20:57.473852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.037 [2024-07-13 08:20:57.473882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.037 qpair failed and we were unable to recover it. 00:34:06.037 [2024-07-13 08:20:57.474052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.037 [2024-07-13 08:20:57.474080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.037 qpair failed and we were unable to recover it. 00:34:06.037 [2024-07-13 08:20:57.474301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.037 [2024-07-13 08:20:57.474351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.037 qpair failed and we were unable to recover it. 00:34:06.037 [2024-07-13 08:20:57.474523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.037 [2024-07-13 08:20:57.474549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.037 qpair failed and we were unable to recover it. 00:34:06.037 [2024-07-13 08:20:57.474710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.037 [2024-07-13 08:20:57.474739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.037 qpair failed and we were unable to recover it. 00:34:06.037 [2024-07-13 08:20:57.474934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.037 [2024-07-13 08:20:57.474960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.037 qpair failed and we were unable to recover it. 00:34:06.037 [2024-07-13 08:20:57.475103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.037 [2024-07-13 08:20:57.475129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.037 qpair failed and we were unable to recover it. 00:34:06.037 [2024-07-13 08:20:57.475277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.037 [2024-07-13 08:20:57.475321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.037 qpair failed and we were unable to recover it. 00:34:06.037 [2024-07-13 08:20:57.475595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.037 [2024-07-13 08:20:57.475646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.038 qpair failed and we were unable to recover it. 
00:34:06.038 [2024-07-13 08:20:57.475845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.038 [2024-07-13 08:20:57.475877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.038 qpair failed and we were unable to recover it. 00:34:06.038 [2024-07-13 08:20:57.476013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.038 [2024-07-13 08:20:57.476038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.038 qpair failed and we were unable to recover it. 00:34:06.038 [2024-07-13 08:20:57.476190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.038 [2024-07-13 08:20:57.476231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.038 qpair failed and we were unable to recover it. 00:34:06.038 [2024-07-13 08:20:57.476402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.038 [2024-07-13 08:20:57.476428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.038 qpair failed and we were unable to recover it. 00:34:06.038 [2024-07-13 08:20:57.476584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.038 [2024-07-13 08:20:57.476610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.038 qpair failed and we were unable to recover it. 00:34:06.038 [2024-07-13 08:20:57.476726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.038 [2024-07-13 08:20:57.476752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.038 qpair failed and we were unable to recover it. 00:34:06.038 [2024-07-13 08:20:57.476929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.038 [2024-07-13 08:20:57.476956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.038 qpair failed and we were unable to recover it. 00:34:06.038 [2024-07-13 08:20:57.477125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.038 [2024-07-13 08:20:57.477154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.038 qpair failed and we were unable to recover it. 00:34:06.038 [2024-07-13 08:20:57.477336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.038 [2024-07-13 08:20:57.477362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.038 qpair failed and we were unable to recover it. 00:34:06.038 [2024-07-13 08:20:57.477495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.038 [2024-07-13 08:20:57.477521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.038 qpair failed and we were unable to recover it. 
00:34:06.038 [2024-07-13 08:20:57.477641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.038 [2024-07-13 08:20:57.477667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.038 qpair failed and we were unable to recover it. 00:34:06.038 [2024-07-13 08:20:57.477819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.038 [2024-07-13 08:20:57.477845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.038 qpair failed and we were unable to recover it. 00:34:06.038 [2024-07-13 08:20:57.477996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.038 [2024-07-13 08:20:57.478042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.038 qpair failed and we were unable to recover it. 00:34:06.038 [2024-07-13 08:20:57.478195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.038 [2024-07-13 08:20:57.478223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.038 qpair failed and we were unable to recover it. 00:34:06.038 [2024-07-13 08:20:57.478369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.038 [2024-07-13 08:20:57.478395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.038 qpair failed and we were unable to recover it. 00:34:06.038 [2024-07-13 08:20:57.478571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.038 [2024-07-13 08:20:57.478617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.038 qpair failed and we were unable to recover it. 00:34:06.038 [2024-07-13 08:20:57.478744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.038 [2024-07-13 08:20:57.478770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.038 qpair failed and we were unable to recover it. 00:34:06.038 [2024-07-13 08:20:57.478954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.038 [2024-07-13 08:20:57.478982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.038 qpair failed and we were unable to recover it. 00:34:06.038 [2024-07-13 08:20:57.479156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.038 [2024-07-13 08:20:57.479199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.038 qpair failed and we were unable to recover it. 00:34:06.038 [2024-07-13 08:20:57.479363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.038 [2024-07-13 08:20:57.479407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.038 qpair failed and we were unable to recover it. 
00:34:06.038 [2024-07-13 08:20:57.479605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.038 [2024-07-13 08:20:57.479633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.038 qpair failed and we were unable to recover it. 00:34:06.038 [2024-07-13 08:20:57.479792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.038 [2024-07-13 08:20:57.479817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.038 qpair failed and we were unable to recover it. 00:34:06.038 [2024-07-13 08:20:57.479975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.038 [2024-07-13 08:20:57.480002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.038 qpair failed and we were unable to recover it. 00:34:06.038 [2024-07-13 08:20:57.480201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.038 [2024-07-13 08:20:57.480245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.038 qpair failed and we were unable to recover it. 00:34:06.038 [2024-07-13 08:20:57.480417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.038 [2024-07-13 08:20:57.480460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.038 qpair failed and we were unable to recover it. 00:34:06.038 [2024-07-13 08:20:57.480662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.038 [2024-07-13 08:20:57.480711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.038 qpair failed and we were unable to recover it. 00:34:06.038 [2024-07-13 08:20:57.480893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.038 [2024-07-13 08:20:57.480919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.038 qpair failed and we were unable to recover it. 00:34:06.038 [2024-07-13 08:20:57.481122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.038 [2024-07-13 08:20:57.481164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.038 qpair failed and we were unable to recover it. 00:34:06.038 [2024-07-13 08:20:57.481332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.038 [2024-07-13 08:20:57.481374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.038 qpair failed and we were unable to recover it. 00:34:06.038 [2024-07-13 08:20:57.481530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.038 [2024-07-13 08:20:57.481574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.038 qpair failed and we were unable to recover it. 
00:34:06.038 [2024-07-13 08:20:57.481723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.038 [2024-07-13 08:20:57.481749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.038 qpair failed and we were unable to recover it. 00:34:06.038 [2024-07-13 08:20:57.481914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.039 [2024-07-13 08:20:57.481944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.039 qpair failed and we were unable to recover it. 00:34:06.039 [2024-07-13 08:20:57.482106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.039 [2024-07-13 08:20:57.482147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.039 qpair failed and we were unable to recover it. 00:34:06.039 [2024-07-13 08:20:57.482323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.039 [2024-07-13 08:20:57.482368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.039 qpair failed and we were unable to recover it. 00:34:06.039 [2024-07-13 08:20:57.482538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.039 [2024-07-13 08:20:57.482581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.039 qpair failed and we were unable to recover it. 00:34:06.039 [2024-07-13 08:20:57.482704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.039 [2024-07-13 08:20:57.482729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.039 qpair failed and we were unable to recover it. 00:34:06.039 [2024-07-13 08:20:57.482879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.039 [2024-07-13 08:20:57.482906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.039 qpair failed and we were unable to recover it. 00:34:06.039 [2024-07-13 08:20:57.483107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.039 [2024-07-13 08:20:57.483135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.039 qpair failed and we were unable to recover it. 00:34:06.039 [2024-07-13 08:20:57.483324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.039 [2024-07-13 08:20:57.483366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.039 qpair failed and we were unable to recover it. 00:34:06.039 [2024-07-13 08:20:57.483543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.039 [2024-07-13 08:20:57.483587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.039 qpair failed and we were unable to recover it. 
00:34:06.039 [2024-07-13 08:20:57.483765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.039 [2024-07-13 08:20:57.483791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.039 qpair failed and we were unable to recover it. 00:34:06.039 [2024-07-13 08:20:57.483961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.039 [2024-07-13 08:20:57.484005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.039 qpair failed and we were unable to recover it. 00:34:06.039 [2024-07-13 08:20:57.484174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.039 [2024-07-13 08:20:57.484217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.039 qpair failed and we were unable to recover it. 00:34:06.039 [2024-07-13 08:20:57.484389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.039 [2024-07-13 08:20:57.484433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.039 qpair failed and we were unable to recover it. 00:34:06.039 [2024-07-13 08:20:57.484587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.039 [2024-07-13 08:20:57.484613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.039 qpair failed and we were unable to recover it. 00:34:06.039 [2024-07-13 08:20:57.484764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.039 [2024-07-13 08:20:57.484789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.039 qpair failed and we were unable to recover it. 00:34:06.039 [2024-07-13 08:20:57.484982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.039 [2024-07-13 08:20:57.485029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.039 qpair failed and we were unable to recover it. 00:34:06.039 [2024-07-13 08:20:57.485170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.039 [2024-07-13 08:20:57.485212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.039 qpair failed and we were unable to recover it. 00:34:06.039 [2024-07-13 08:20:57.485397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.039 [2024-07-13 08:20:57.485423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.039 qpair failed and we were unable to recover it. 00:34:06.039 [2024-07-13 08:20:57.485599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.039 [2024-07-13 08:20:57.485624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.039 qpair failed and we were unable to recover it. 
00:34:06.039 [2024-07-13 08:20:57.485773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.039 [2024-07-13 08:20:57.485798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.039 qpair failed and we were unable to recover it. 00:34:06.039 [2024-07-13 08:20:57.486002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.039 [2024-07-13 08:20:57.486048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.039 qpair failed and we were unable to recover it. 00:34:06.039 [2024-07-13 08:20:57.486272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.039 [2024-07-13 08:20:57.486315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.039 qpair failed and we were unable to recover it. 00:34:06.039 [2024-07-13 08:20:57.486515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.039 [2024-07-13 08:20:57.486545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.039 qpair failed and we were unable to recover it. 00:34:06.039 [2024-07-13 08:20:57.486736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.039 [2024-07-13 08:20:57.486764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.039 qpair failed and we were unable to recover it. 00:34:06.039 [2024-07-13 08:20:57.486910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.039 [2024-07-13 08:20:57.486938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.039 qpair failed and we were unable to recover it. 00:34:06.039 [2024-07-13 08:20:57.487087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.039 [2024-07-13 08:20:57.487113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.039 qpair failed and we were unable to recover it. 00:34:06.039 [2024-07-13 08:20:57.487316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.039 [2024-07-13 08:20:57.487344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.039 qpair failed and we were unable to recover it. 00:34:06.039 [2024-07-13 08:20:57.487509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.039 [2024-07-13 08:20:57.487537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.039 qpair failed and we were unable to recover it. 00:34:06.039 [2024-07-13 08:20:57.487726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.039 [2024-07-13 08:20:57.487754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.039 qpair failed and we were unable to recover it. 
00:34:06.039 [2024-07-13 08:20:57.487924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:06.039 [2024-07-13 08:20:57.487950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420
00:34:06.039 qpair failed and we were unable to recover it.
[... this three-line failure pattern -- posix_sock_create connect() failed with errno = 111, nvme_tcp_qpair_connect_sock sock connection error against 10.0.0.2:4420, then "qpair failed and we were unable to recover it." -- repeats uninterrupted through 08:20:57.515214, first for tqpair=0xacd600 and, from 08:20:57.504048 onward, for tqpair=0x7f8fe4000b90; every attempt fails the same way ...]
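For context: errno = 111 is ECONNREFUSED. During this phase of target_disconnect.sh the nvmf target process is down, so nothing is accepting TCP connections on 10.0.0.2:4420 and every connect() attempt from the host side is refused. The following standalone sketch (our illustration, not SPDK code; the address and port are copied from the log above) reproduces that failure mode at the plain-socket level:

/* Minimal sketch: attempt a TCP connect to the address/port seen in the
 * log and report errno on failure. While no listener exists on
 * 10.0.0.2:4420, connect() fails with errno 111 (ECONNREFUSED). */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                 /* NVMe/TCP well-known port */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With no listener present this prints errno = 111 (Connection refused),
         * matching the posix_sock_create messages in the log. */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}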
[... the same failure pattern continues from 08:20:57.515377 through 08:20:57.524711, now cycling among tqpair=0x7f8fe4000b90, tqpair=0xacd600, and tqpair=0x7f8fdc000b90, all against 10.0.0.2:4420; interleaved with those messages, the shell trace below (untangled here for readability) records the target being killed and restarted ...]
00:34:06.043 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 2114022 Killed "${NVMF_APP[@]}" "$@"
00:34:06.043 08:20:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:34:06.043 08:20:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:34:06.043 08:20:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:34:06.043 08:20:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable
00:34:06.043 08:20:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:06.044 08:20:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2114569
00:34:06.044 08:20:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:34:06.044 08:20:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2114569
00:34:06.044 08:20:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 2114569 ']'
00:34:06.044 08:20:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:34:06.044 08:20:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100
00:34:06.044 08:20:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:34:06.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:34:06.044 08:20:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable
00:34:06.044 08:20:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
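The waitforlisten step above blocks until the restarted nvmf_tgt (pid 2114569) is accepting connections on its RPC socket at /var/tmp/spdk.sock, retrying up to max_retries=100 times. As a rough illustration only -- this is not the SPDK autotest helper's actual implementation -- a polling loop of that shape could look like:

/* Hedged sketch of a wait-for-listener poll over a UNIX domain socket.
 * The path and retry budget mirror the trace above
 * (/var/tmp/spdk.sock, max_retries=100). */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/un.h>

static int wait_for_listen(const char *path, int max_retries)
{
    for (int i = 0; i < max_retries; i++) {
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        if (fd < 0)
            return -1;

        struct sockaddr_un addr = {0};
        addr.sun_family = AF_UNIX;
        strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);

        /* connect() succeeds only once a process is accepting on the socket */
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
            close(fd);
            return 0;
        }
        close(fd);
        usleep(100 * 1000);   /* 100 ms between attempts */
    }
    return -1;
}

int main(void)
{
    if (wait_for_listen("/var/tmp/spdk.sock", 100) == 0)
        printf("listener is up\n");
    else
        printf("timed out waiting for listener\n");
    return 0;
}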
00:34:06.044 [2024-07-13 08:20:57.524923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.044 [2024-07-13 08:20:57.524950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.044 qpair failed and we were unable to recover it.
00:34:06.049 [... the same three-line failure sequence repeats continuously from 08:20:57.524 through 08:20:57.560, alternating between tqpair=0xacd600 and tqpair=0x7f8fe4000b90, always against addr=10.0.0.2, port=4420; repeats elided ...]
00:34:06.049 [2024-07-13 08:20:57.560783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.049 [2024-07-13 08:20:57.560808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.049 qpair failed and we were unable to recover it. 00:34:06.049 [2024-07-13 08:20:57.560957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.049 [2024-07-13 08:20:57.560984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.049 qpair failed and we were unable to recover it. 00:34:06.049 [2024-07-13 08:20:57.561145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.049 [2024-07-13 08:20:57.561171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.049 qpair failed and we were unable to recover it. 00:34:06.049 [2024-07-13 08:20:57.561292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.049 [2024-07-13 08:20:57.561317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.049 qpair failed and we were unable to recover it. 00:34:06.049 [2024-07-13 08:20:57.561433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.049 [2024-07-13 08:20:57.561458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.049 qpair failed and we were unable to recover it. 00:34:06.049 [2024-07-13 08:20:57.561608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.049 [2024-07-13 08:20:57.561633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.049 qpair failed and we were unable to recover it. 00:34:06.049 [2024-07-13 08:20:57.561807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.049 [2024-07-13 08:20:57.561833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.049 qpair failed and we were unable to recover it. 00:34:06.049 [2024-07-13 08:20:57.561965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.049 [2024-07-13 08:20:57.561992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.049 qpair failed and we were unable to recover it. 00:34:06.049 [2024-07-13 08:20:57.562121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.049 [2024-07-13 08:20:57.562147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.049 qpair failed and we were unable to recover it. 00:34:06.049 [2024-07-13 08:20:57.562294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.049 [2024-07-13 08:20:57.562320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.049 qpair failed and we were unable to recover it. 
00:34:06.049 [2024-07-13 08:20:57.562499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.049 [2024-07-13 08:20:57.562525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.049 qpair failed and we were unable to recover it. 00:34:06.049 [2024-07-13 08:20:57.562699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.049 [2024-07-13 08:20:57.562725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.049 qpair failed and we were unable to recover it. 00:34:06.049 [2024-07-13 08:20:57.562853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.049 [2024-07-13 08:20:57.562888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.049 qpair failed and we were unable to recover it. 00:34:06.049 [2024-07-13 08:20:57.563038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.049 [2024-07-13 08:20:57.563063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.049 qpair failed and we were unable to recover it. 00:34:06.049 [2024-07-13 08:20:57.563209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.049 [2024-07-13 08:20:57.563235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.049 qpair failed and we were unable to recover it. 00:34:06.049 [2024-07-13 08:20:57.563355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.049 [2024-07-13 08:20:57.563381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.049 qpair failed and we were unable to recover it. 00:34:06.049 [2024-07-13 08:20:57.563512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.049 [2024-07-13 08:20:57.563538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.049 qpair failed and we were unable to recover it. 00:34:06.049 [2024-07-13 08:20:57.563688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.049 [2024-07-13 08:20:57.563715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.049 qpair failed and we were unable to recover it. 00:34:06.049 [2024-07-13 08:20:57.563902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.049 [2024-07-13 08:20:57.563928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.049 qpair failed and we were unable to recover it. 00:34:06.049 [2024-07-13 08:20:57.564057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.049 [2024-07-13 08:20:57.564084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.049 qpair failed and we were unable to recover it. 
00:34:06.049 [2024-07-13 08:20:57.564233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.049 [2024-07-13 08:20:57.564259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.049 qpair failed and we were unable to recover it. 00:34:06.049 [2024-07-13 08:20:57.564378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.049 [2024-07-13 08:20:57.564403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.049 qpair failed and we were unable to recover it. 00:34:06.049 [2024-07-13 08:20:57.564554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.049 [2024-07-13 08:20:57.564580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.049 qpair failed and we were unable to recover it. 00:34:06.049 [2024-07-13 08:20:57.564758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.049 [2024-07-13 08:20:57.564783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.049 qpair failed and we were unable to recover it. 00:34:06.049 [2024-07-13 08:20:57.564926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.049 [2024-07-13 08:20:57.564953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.049 qpair failed and we were unable to recover it. 00:34:06.049 [2024-07-13 08:20:57.565114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.049 [2024-07-13 08:20:57.565139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.049 qpair failed and we were unable to recover it. 00:34:06.049 [2024-07-13 08:20:57.565261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.049 [2024-07-13 08:20:57.565286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.049 qpair failed and we were unable to recover it. 00:34:06.049 [2024-07-13 08:20:57.565459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.049 [2024-07-13 08:20:57.565485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.049 qpair failed and we were unable to recover it. 00:34:06.049 [2024-07-13 08:20:57.565661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.049 [2024-07-13 08:20:57.565687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.049 qpair failed and we were unable to recover it. 00:34:06.049 [2024-07-13 08:20:57.565840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.049 [2024-07-13 08:20:57.565870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.049 qpair failed and we were unable to recover it. 
00:34:06.049 [2024-07-13 08:20:57.566026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.049 [2024-07-13 08:20:57.566052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.049 qpair failed and we were unable to recover it. 00:34:06.049 [2024-07-13 08:20:57.566174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.049 [2024-07-13 08:20:57.566204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.049 qpair failed and we were unable to recover it. 00:34:06.049 [2024-07-13 08:20:57.566376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.050 [2024-07-13 08:20:57.566401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.050 qpair failed and we were unable to recover it. 00:34:06.050 [2024-07-13 08:20:57.566551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.050 [2024-07-13 08:20:57.566576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.050 qpair failed and we were unable to recover it. 00:34:06.050 [2024-07-13 08:20:57.566701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.050 [2024-07-13 08:20:57.566727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.050 qpair failed and we were unable to recover it. 00:34:06.050 [2024-07-13 08:20:57.566888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.050 [2024-07-13 08:20:57.566914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.050 qpair failed and we were unable to recover it. 00:34:06.050 [2024-07-13 08:20:57.567065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.050 [2024-07-13 08:20:57.567091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.050 qpair failed and we were unable to recover it. 00:34:06.050 [2024-07-13 08:20:57.567239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.050 [2024-07-13 08:20:57.567264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.050 qpair failed and we were unable to recover it. 00:34:06.050 [2024-07-13 08:20:57.567417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.050 [2024-07-13 08:20:57.567443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.050 qpair failed and we were unable to recover it. 00:34:06.050 [2024-07-13 08:20:57.567598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.050 [2024-07-13 08:20:57.567624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.050 qpair failed and we were unable to recover it. 
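Note: errno = 111 on Linux is ECONNREFUSED, i.e. the peer at 10.0.0.2:4420 actively refused the TCP connection, which is what happens while no NVMe/TCP listener is up on that port yet. A minimal, self-contained C sketch of how a plain connect() surfaces this errno; the address and port mirror the log and are illustrative only, not part of the test itself:

    #include <arpa/inet.h>
    #include <errno.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        /* Illustrative target mirroring the log; any reachable host
         * with no listener on the port behaves the same way. */
        struct sockaddr_in addr = {
            .sin_family = AF_INET,
            .sin_port = htons(4420),
        };
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0)
            return 1;

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            /* With the port closed, this prints errno = 111
             * (ECONNREFUSED) -- the same failure the log repeats. */
            printf("connect() failed, errno = %d (%s)\n",
                   errno, strerror(errno));
        }
        close(fd);
        return 0;
    }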
00:34:06.050 [... connect() failures for tqpair=0x7f8fe4000b90 continue (08:20:57.567774-08:20:57.568876) ...]
00:34:06.050 [2024-07-13 08:20:57.568844] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization...
00:34:06.050 [2024-07-13 08:20:57.568931] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:34:06.050 [... connect() failures for tqpair=0x7f8fe4000b90 continue (08:20:57.569026-08:20:57.569232) ...]
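Note: the EAL parameter line above shows how this nvmf process was brought up: -c 0xF0 pins it to cores 4-7, --file-prefix=spdk0 keeps its hugepage files separate from other SPDK processes on the host, and --proc-type=auto lets EAL pick primary or secondary mode. A minimal sketch of how a DPDK application hands such an argument vector to rte_eal_init(); the argument list is abridged from the log for illustration, not a recommended configuration:

    #include <rte_eal.h>
    #include <rte_lcore.h>
    #include <stdio.h>

    int main(void)
    {
        /* Argument vector abridged from the logged EAL parameters
         * (illustrative; the log-level options are omitted). */
        char *eal_argv[] = {
            "nvmf", "-c", "0xF0", "--no-telemetry",
            "--base-virtaddr=0x200000000000",
            "--match-allocations", "--file-prefix=spdk0",
            "--proc-type=auto",
        };
        int eal_argc = sizeof(eal_argv) / sizeof(eal_argv[0]);

        /* rte_eal_init() parses the EAL options and sets up hugepages,
         * lcores, and (with --proc-type=auto) primary/secondary mode. */
        if (rte_eal_init(eal_argc, eal_argv) < 0) {
            fprintf(stderr, "EAL initialization failed\n");
            return 1;
        }
        printf("EAL up on %u lcores\n", rte_lcore_count());
        return 0;
    }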
00:34:06.050 [... the same three-line connect() failure repeats continuously from 08:20:57.569380 through 08:20:57.589793, alternating between tqpair=0xacd600 and tqpair=0x7f8fe4000b90, always with addr=10.0.0.2, port=4420 ...]
00:34:06.053 [2024-07-13 08:20:57.589915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:06.053 [2024-07-13 08:20:57.589940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420
00:34:06.053 qpair failed and we were unable to recover it.
00:34:06.053 [2024-07-13 08:20:57.590113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.053 [2024-07-13 08:20:57.590139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.053 qpair failed and we were unable to recover it. 00:34:06.053 [2024-07-13 08:20:57.590263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.053 [2024-07-13 08:20:57.590288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.053 qpair failed and we were unable to recover it. 00:34:06.053 [2024-07-13 08:20:57.590434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.053 [2024-07-13 08:20:57.590459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.053 qpair failed and we were unable to recover it. 00:34:06.053 [2024-07-13 08:20:57.590610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.053 [2024-07-13 08:20:57.590635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.053 qpair failed and we were unable to recover it. 00:34:06.053 [2024-07-13 08:20:57.590757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.053 [2024-07-13 08:20:57.590782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.053 qpair failed and we were unable to recover it. 00:34:06.053 [2024-07-13 08:20:57.590935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.053 [2024-07-13 08:20:57.590960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.053 qpair failed and we were unable to recover it. 00:34:06.053 [2024-07-13 08:20:57.591083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.053 [2024-07-13 08:20:57.591108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.053 qpair failed and we were unable to recover it. 00:34:06.053 [2024-07-13 08:20:57.591233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.053 [2024-07-13 08:20:57.591257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.053 qpair failed and we were unable to recover it. 00:34:06.053 [2024-07-13 08:20:57.591404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.053 [2024-07-13 08:20:57.591429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.053 qpair failed and we were unable to recover it. 00:34:06.053 [2024-07-13 08:20:57.591574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.053 [2024-07-13 08:20:57.591599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.053 qpair failed and we were unable to recover it. 
00:34:06.053 [2024-07-13 08:20:57.591751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.053 [2024-07-13 08:20:57.591777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.053 qpair failed and we were unable to recover it. 00:34:06.053 [2024-07-13 08:20:57.591899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.053 [2024-07-13 08:20:57.591926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.053 qpair failed and we were unable to recover it. 00:34:06.053 [2024-07-13 08:20:57.592080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.053 [2024-07-13 08:20:57.592105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.053 qpair failed and we were unable to recover it. 00:34:06.053 [2024-07-13 08:20:57.592224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.053 [2024-07-13 08:20:57.592249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.053 qpair failed and we were unable to recover it. 00:34:06.053 [2024-07-13 08:20:57.592400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.053 [2024-07-13 08:20:57.592425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.053 qpair failed and we were unable to recover it. 00:34:06.053 [2024-07-13 08:20:57.592603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.053 [2024-07-13 08:20:57.592628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.053 qpair failed and we were unable to recover it. 00:34:06.053 [2024-07-13 08:20:57.592758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.053 [2024-07-13 08:20:57.592783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.053 qpair failed and we were unable to recover it. 00:34:06.053 [2024-07-13 08:20:57.592930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.053 [2024-07-13 08:20:57.592956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.053 qpair failed and we were unable to recover it. 00:34:06.053 [2024-07-13 08:20:57.593082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.053 [2024-07-13 08:20:57.593107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.053 qpair failed and we were unable to recover it. 00:34:06.053 [2024-07-13 08:20:57.593257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.053 [2024-07-13 08:20:57.593282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.053 qpair failed and we were unable to recover it. 
00:34:06.053 [2024-07-13 08:20:57.593430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.053 [2024-07-13 08:20:57.593454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.054 qpair failed and we were unable to recover it. 00:34:06.054 [2024-07-13 08:20:57.593606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.054 [2024-07-13 08:20:57.593630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.054 qpair failed and we were unable to recover it. 00:34:06.054 [2024-07-13 08:20:57.593804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.054 [2024-07-13 08:20:57.593829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.054 qpair failed and we were unable to recover it. 00:34:06.054 [2024-07-13 08:20:57.593991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.054 [2024-07-13 08:20:57.594017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.054 qpair failed and we were unable to recover it. 00:34:06.054 [2024-07-13 08:20:57.594176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.054 [2024-07-13 08:20:57.594202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.054 qpair failed and we were unable to recover it. 00:34:06.054 [2024-07-13 08:20:57.594328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.054 [2024-07-13 08:20:57.594353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.054 qpair failed and we were unable to recover it. 00:34:06.054 [2024-07-13 08:20:57.594499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.054 [2024-07-13 08:20:57.594524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.054 qpair failed and we were unable to recover it. 00:34:06.054 [2024-07-13 08:20:57.594647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.054 [2024-07-13 08:20:57.594672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.054 qpair failed and we were unable to recover it. 00:34:06.054 [2024-07-13 08:20:57.594824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.054 [2024-07-13 08:20:57.594850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.054 qpair failed and we were unable to recover it. 00:34:06.054 [2024-07-13 08:20:57.595000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.054 [2024-07-13 08:20:57.595026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.054 qpair failed and we were unable to recover it. 
00:34:06.054 [2024-07-13 08:20:57.595167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.054 [2024-07-13 08:20:57.595192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.054 qpair failed and we were unable to recover it. 00:34:06.054 [2024-07-13 08:20:57.595343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.054 [2024-07-13 08:20:57.595368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.054 qpair failed and we were unable to recover it. 00:34:06.054 [2024-07-13 08:20:57.595527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.054 [2024-07-13 08:20:57.595552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.054 qpair failed and we were unable to recover it. 00:34:06.054 [2024-07-13 08:20:57.595677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.054 [2024-07-13 08:20:57.595702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.054 qpair failed and we were unable to recover it. 00:34:06.054 [2024-07-13 08:20:57.595881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.054 [2024-07-13 08:20:57.595912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.054 qpair failed and we were unable to recover it. 00:34:06.054 [2024-07-13 08:20:57.596043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.054 [2024-07-13 08:20:57.596069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.054 qpair failed and we were unable to recover it. 00:34:06.054 [2024-07-13 08:20:57.596220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.054 [2024-07-13 08:20:57.596245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.054 qpair failed and we were unable to recover it. 00:34:06.054 [2024-07-13 08:20:57.596397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.054 [2024-07-13 08:20:57.596422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.054 qpair failed and we were unable to recover it. 00:34:06.054 [2024-07-13 08:20:57.596571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.054 [2024-07-13 08:20:57.596596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.054 qpair failed and we were unable to recover it. 00:34:06.054 [2024-07-13 08:20:57.596742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.054 [2024-07-13 08:20:57.596767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.054 qpair failed and we were unable to recover it. 
00:34:06.054 [2024-07-13 08:20:57.596922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.054 [2024-07-13 08:20:57.596948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.054 qpair failed and we were unable to recover it. 00:34:06.054 [2024-07-13 08:20:57.597102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.054 [2024-07-13 08:20:57.597127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.054 qpair failed and we were unable to recover it. 00:34:06.054 [2024-07-13 08:20:57.597282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.054 [2024-07-13 08:20:57.597306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.054 qpair failed and we were unable to recover it. 00:34:06.054 [2024-07-13 08:20:57.597457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.054 [2024-07-13 08:20:57.597482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.054 qpair failed and we were unable to recover it. 00:34:06.054 [2024-07-13 08:20:57.597601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.054 [2024-07-13 08:20:57.597626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.054 qpair failed and we were unable to recover it. 00:34:06.054 [2024-07-13 08:20:57.597755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.054 [2024-07-13 08:20:57.597780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.054 qpair failed and we were unable to recover it. 00:34:06.054 [2024-07-13 08:20:57.597927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.054 [2024-07-13 08:20:57.597953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.054 qpair failed and we were unable to recover it. 00:34:06.054 [2024-07-13 08:20:57.598078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.054 [2024-07-13 08:20:57.598104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.054 qpair failed and we were unable to recover it. 00:34:06.054 [2024-07-13 08:20:57.598224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.054 [2024-07-13 08:20:57.598248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.054 qpair failed and we were unable to recover it. 00:34:06.054 [2024-07-13 08:20:57.598424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.054 [2024-07-13 08:20:57.598449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.054 qpair failed and we were unable to recover it. 
00:34:06.054 [2024-07-13 08:20:57.598591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.054 [2024-07-13 08:20:57.598615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.054 qpair failed and we were unable to recover it. 00:34:06.054 [2024-07-13 08:20:57.598772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.054 [2024-07-13 08:20:57.598796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.054 qpair failed and we were unable to recover it. 00:34:06.054 [2024-07-13 08:20:57.598919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.054 [2024-07-13 08:20:57.598944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.054 qpair failed and we were unable to recover it. 00:34:06.054 [2024-07-13 08:20:57.599124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.054 [2024-07-13 08:20:57.599149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.054 qpair failed and we were unable to recover it. 00:34:06.054 [2024-07-13 08:20:57.599301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.054 [2024-07-13 08:20:57.599326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.054 qpair failed and we were unable to recover it. 00:34:06.054 [2024-07-13 08:20:57.599481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.054 [2024-07-13 08:20:57.599506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.054 qpair failed and we were unable to recover it. 00:34:06.054 [2024-07-13 08:20:57.599654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.054 [2024-07-13 08:20:57.599679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.054 qpair failed and we were unable to recover it. 00:34:06.054 [2024-07-13 08:20:57.599808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.054 [2024-07-13 08:20:57.599832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.054 qpair failed and we were unable to recover it. 00:34:06.054 [2024-07-13 08:20:57.599991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.054 [2024-07-13 08:20:57.600017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.054 qpair failed and we were unable to recover it. 00:34:06.054 [2024-07-13 08:20:57.600167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.054 [2024-07-13 08:20:57.600192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.054 qpair failed and we were unable to recover it. 
00:34:06.054 [2024-07-13 08:20:57.600314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.054 [2024-07-13 08:20:57.600338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.054 qpair failed and we were unable to recover it. 00:34:06.054 [2024-07-13 08:20:57.600511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.054 [2024-07-13 08:20:57.600536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.054 qpair failed and we were unable to recover it. 00:34:06.055 [2024-07-13 08:20:57.600661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.055 [2024-07-13 08:20:57.600685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.055 qpair failed and we were unable to recover it. 00:34:06.055 [2024-07-13 08:20:57.600829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.055 [2024-07-13 08:20:57.600853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.055 qpair failed and we were unable to recover it. 00:34:06.055 [2024-07-13 08:20:57.601023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.055 [2024-07-13 08:20:57.601052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.055 qpair failed and we were unable to recover it. 00:34:06.055 [2024-07-13 08:20:57.601223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.055 [2024-07-13 08:20:57.601248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.055 qpair failed and we were unable to recover it. 00:34:06.055 [2024-07-13 08:20:57.601398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.055 [2024-07-13 08:20:57.601423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.055 qpair failed and we were unable to recover it. 00:34:06.055 EAL: No free 2048 kB hugepages reported on node 1 00:34:06.055 [2024-07-13 08:20:57.601550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.055 [2024-07-13 08:20:57.601575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.055 qpair failed and we were unable to recover it. 00:34:06.055 [2024-07-13 08:20:57.601726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.055 [2024-07-13 08:20:57.601751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.055 qpair failed and we were unable to recover it. 00:34:06.055 [2024-07-13 08:20:57.601872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.055 [2024-07-13 08:20:57.601897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.055 qpair failed and we were unable to recover it. 
00:34:06.055 [2024-07-13 08:20:57.602044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.055 [2024-07-13 08:20:57.602069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.055 qpair failed and we were unable to recover it. 00:34:06.055 [2024-07-13 08:20:57.602242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.055 [2024-07-13 08:20:57.602267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.055 qpair failed and we were unable to recover it. 00:34:06.055 [2024-07-13 08:20:57.602413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.055 [2024-07-13 08:20:57.602438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.055 qpair failed and we were unable to recover it. 00:34:06.055 [2024-07-13 08:20:57.602592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.055 [2024-07-13 08:20:57.602616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.055 qpair failed and we were unable to recover it. 00:34:06.055 [2024-07-13 08:20:57.602738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.055 [2024-07-13 08:20:57.602763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.055 qpair failed and we were unable to recover it. 00:34:06.055 [2024-07-13 08:20:57.602893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.055 [2024-07-13 08:20:57.602918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.055 qpair failed and we were unable to recover it. 00:34:06.055 [2024-07-13 08:20:57.603092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.055 [2024-07-13 08:20:57.603117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.055 qpair failed and we were unable to recover it. 00:34:06.055 [2024-07-13 08:20:57.603267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.055 [2024-07-13 08:20:57.603292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.055 qpair failed and we were unable to recover it. 00:34:06.055 [2024-07-13 08:20:57.603448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.055 [2024-07-13 08:20:57.603473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.055 qpair failed and we were unable to recover it. 00:34:06.055 [2024-07-13 08:20:57.603631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.055 [2024-07-13 08:20:57.603655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.055 qpair failed and we were unable to recover it. 
00:34:06.055 [2024-07-13 08:20:57.603774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.055 [2024-07-13 08:20:57.603802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.055 qpair failed and we were unable to recover it. 00:34:06.055 [2024-07-13 08:20:57.603966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.055 [2024-07-13 08:20:57.603991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.055 qpair failed and we were unable to recover it. 00:34:06.055 [2024-07-13 08:20:57.604142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.055 [2024-07-13 08:20:57.604167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.055 qpair failed and we were unable to recover it. 00:34:06.055 [2024-07-13 08:20:57.604285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.055 [2024-07-13 08:20:57.604309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.055 qpair failed and we were unable to recover it. 00:34:06.055 [2024-07-13 08:20:57.604456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.055 [2024-07-13 08:20:57.604480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.055 qpair failed and we were unable to recover it. 00:34:06.055 [2024-07-13 08:20:57.604615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.055 [2024-07-13 08:20:57.604640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.055 qpair failed and we were unable to recover it. 00:34:06.055 [2024-07-13 08:20:57.604765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.055 [2024-07-13 08:20:57.604790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.055 qpair failed and we were unable to recover it. 00:34:06.055 [2024-07-13 08:20:57.604926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.055 [2024-07-13 08:20:57.604950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.055 qpair failed and we were unable to recover it. 00:34:06.055 [2024-07-13 08:20:57.605098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.055 [2024-07-13 08:20:57.605124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.055 qpair failed and we were unable to recover it. 00:34:06.055 [2024-07-13 08:20:57.605277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.055 [2024-07-13 08:20:57.605301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.055 qpair failed and we were unable to recover it. 
00:34:06.055 [2024-07-13 08:20:57.605455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.055 [2024-07-13 08:20:57.605479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.055 qpair failed and we were unable to recover it. 00:34:06.055 [2024-07-13 08:20:57.605652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.055 [2024-07-13 08:20:57.605681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.055 qpair failed and we were unable to recover it. 00:34:06.055 [2024-07-13 08:20:57.605833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.055 [2024-07-13 08:20:57.605858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.055 qpair failed and we were unable to recover it. 00:34:06.055 [2024-07-13 08:20:57.606049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.055 [2024-07-13 08:20:57.606074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.055 qpair failed and we were unable to recover it. 00:34:06.055 [2024-07-13 08:20:57.606231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.055 [2024-07-13 08:20:57.606255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.055 qpair failed and we were unable to recover it. 00:34:06.055 [2024-07-13 08:20:57.606403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.055 [2024-07-13 08:20:57.606428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.055 qpair failed and we were unable to recover it. 00:34:06.055 [2024-07-13 08:20:57.606548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.055 [2024-07-13 08:20:57.606573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.055 qpair failed and we were unable to recover it. 00:34:06.056 [2024-07-13 08:20:57.606726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.056 [2024-07-13 08:20:57.606751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.056 qpair failed and we were unable to recover it. 00:34:06.056 [2024-07-13 08:20:57.606928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.056 [2024-07-13 08:20:57.606954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.056 qpair failed and we were unable to recover it. 00:34:06.056 [2024-07-13 08:20:57.607129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.056 [2024-07-13 08:20:57.607154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.056 qpair failed and we were unable to recover it. 
00:34:06.056 [2024-07-13 08:20:57.607325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.056 [2024-07-13 08:20:57.607351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.056 qpair failed and we were unable to recover it. 00:34:06.056 [2024-07-13 08:20:57.607503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.056 [2024-07-13 08:20:57.607528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.056 qpair failed and we were unable to recover it. 00:34:06.056 [2024-07-13 08:20:57.607675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.056 [2024-07-13 08:20:57.607699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.056 qpair failed and we were unable to recover it. 00:34:06.056 [2024-07-13 08:20:57.607830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.056 [2024-07-13 08:20:57.607854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.056 qpair failed and we were unable to recover it. 00:34:06.056 [2024-07-13 08:20:57.608004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.056 [2024-07-13 08:20:57.608030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.056 qpair failed and we were unable to recover it. 00:34:06.056 [2024-07-13 08:20:57.608149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.056 [2024-07-13 08:20:57.608173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.056 qpair failed and we were unable to recover it. 00:34:06.056 [2024-07-13 08:20:57.608308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.056 [2024-07-13 08:20:57.608334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.056 qpair failed and we were unable to recover it. 00:34:06.056 [2024-07-13 08:20:57.608458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.056 [2024-07-13 08:20:57.608482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.056 qpair failed and we were unable to recover it. 00:34:06.056 [2024-07-13 08:20:57.608652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.056 [2024-07-13 08:20:57.608677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.056 qpair failed and we were unable to recover it. 00:34:06.056 [2024-07-13 08:20:57.608796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.056 [2024-07-13 08:20:57.608821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.056 qpair failed and we were unable to recover it. 
00:34:06.056 [2024-07-13 08:20:57.608995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.056 [2024-07-13 08:20:57.609021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.056 qpair failed and we were unable to recover it. 00:34:06.056 [2024-07-13 08:20:57.609171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.056 [2024-07-13 08:20:57.609195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.056 qpair failed and we were unable to recover it. 00:34:06.056 [2024-07-13 08:20:57.609323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.056 [2024-07-13 08:20:57.609348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.056 qpair failed and we were unable to recover it. 00:34:06.056 [2024-07-13 08:20:57.609486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.056 [2024-07-13 08:20:57.609510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.056 qpair failed and we were unable to recover it. 00:34:06.056 [2024-07-13 08:20:57.609662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.056 [2024-07-13 08:20:57.609686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.056 qpair failed and we were unable to recover it. 00:34:06.056 [2024-07-13 08:20:57.609834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.056 [2024-07-13 08:20:57.609860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.056 qpair failed and we were unable to recover it. 00:34:06.056 [2024-07-13 08:20:57.610050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.056 [2024-07-13 08:20:57.610074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.056 qpair failed and we were unable to recover it. 00:34:06.056 [2024-07-13 08:20:57.610245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.056 [2024-07-13 08:20:57.610270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.056 qpair failed and we were unable to recover it. 00:34:06.056 [2024-07-13 08:20:57.610397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.056 [2024-07-13 08:20:57.610422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.056 qpair failed and we were unable to recover it. 00:34:06.056 [2024-07-13 08:20:57.610577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.056 [2024-07-13 08:20:57.610601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.056 qpair failed and we were unable to recover it. 
00:34:06.056 [2024-07-13 08:20:57.610752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.056 [2024-07-13 08:20:57.610776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.056 qpair failed and we were unable to recover it. 00:34:06.056 [2024-07-13 08:20:57.610924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.056 [2024-07-13 08:20:57.610950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.056 qpair failed and we were unable to recover it. 00:34:06.056 [2024-07-13 08:20:57.611099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.056 [2024-07-13 08:20:57.611125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.056 qpair failed and we were unable to recover it. 00:34:06.056 [2024-07-13 08:20:57.611245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.056 [2024-07-13 08:20:57.611269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.056 qpair failed and we were unable to recover it. 00:34:06.056 [2024-07-13 08:20:57.611392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.056 [2024-07-13 08:20:57.611417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.056 qpair failed and we were unable to recover it. 00:34:06.056 [2024-07-13 08:20:57.611549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.056 [2024-07-13 08:20:57.611573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.056 qpair failed and we were unable to recover it. 00:34:06.056 [2024-07-13 08:20:57.611727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.056 [2024-07-13 08:20:57.611752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.056 qpair failed and we were unable to recover it. 00:34:06.056 [2024-07-13 08:20:57.611907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.056 [2024-07-13 08:20:57.611934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.056 qpair failed and we were unable to recover it. 00:34:06.056 [2024-07-13 08:20:57.612062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.056 [2024-07-13 08:20:57.612086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.056 qpair failed and we were unable to recover it. 00:34:06.056 [2024-07-13 08:20:57.612240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.056 [2024-07-13 08:20:57.612265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.056 qpair failed and we were unable to recover it. 
00:34:06.056 [2024-07-13 08:20:57.612394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:06.056 [2024-07-13 08:20:57.612419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420
00:34:06.056 qpair failed and we were unable to recover it.
00:34:06.057 [... the same three-line connect()/qpair failure for tqpair=0xacd600 repeats back-to-back from 08:20:57.612564 through 08:20:57.619551 ...]
00:34:06.057 [2024-07-13 08:20:57.619697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:06.057 [2024-07-13 08:20:57.619738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420
00:34:06.057 qpair failed and we were unable to recover it.
00:34:06.057 [... two more failures for tqpair=0x7f8fdc000b90 at 08:20:57.619898 and 08:20:57.620087, then the tqpair=0xacd600 failures resume at 08:20:57.620286 and repeat through 08:20:57.634956 ...]
00:34:06.060 [2024-07-13 08:20:57.635147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:06.060 [2024-07-13 08:20:57.635188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420
00:34:06.060 qpair failed and we were unable to recover it.
00:34:06.060 [... four more failures for tqpair=0x7f8fdc000b90 between 08:20:57.635344 and 08:20:57.635874 ...]
00:34:06.060 [2024-07-13 08:20:57.635932] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4
00:34:06.060 [2024-07-13 08:20:57.636072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:06.060 [2024-07-13 08:20:57.636113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420
00:34:06.060 qpair failed and we were unable to recover it.
00:34:06.060 [... three more failures for tqpair=0x7f8fec000b90 between 08:20:57.636242 and 08:20:57.636544, then the tqpair=0xacd600 failures resume at 08:20:57.636793 and repeat through 08:20:57.648645 ...]
00:34:06.062 [2024-07-13 08:20:57.648802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.062 [2024-07-13 08:20:57.648827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.062 qpair failed and we were unable to recover it. 00:34:06.062 [2024-07-13 08:20:57.648999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.062 [2024-07-13 08:20:57.649026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.062 qpair failed and we were unable to recover it. 00:34:06.062 [2024-07-13 08:20:57.649203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.062 [2024-07-13 08:20:57.649229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.062 qpair failed and we were unable to recover it. 00:34:06.062 [2024-07-13 08:20:57.649382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.062 [2024-07-13 08:20:57.649407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.062 qpair failed and we were unable to recover it. 00:34:06.062 [2024-07-13 08:20:57.649562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.062 [2024-07-13 08:20:57.649590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.062 qpair failed and we were unable to recover it. 00:34:06.062 [2024-07-13 08:20:57.649719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.062 [2024-07-13 08:20:57.649750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.062 qpair failed and we were unable to recover it. 00:34:06.062 [2024-07-13 08:20:57.649930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.062 [2024-07-13 08:20:57.649975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:06.062 qpair failed and we were unable to recover it. 00:34:06.062 [2024-07-13 08:20:57.650166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.062 [2024-07-13 08:20:57.650194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:06.062 qpair failed and we were unable to recover it. 00:34:06.062 [2024-07-13 08:20:57.650349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.062 [2024-07-13 08:20:57.650376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:06.062 qpair failed and we were unable to recover it. 00:34:06.062 [2024-07-13 08:20:57.650496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.062 [2024-07-13 08:20:57.650530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:06.062 qpair failed and we were unable to recover it. 
00:34:06.062 [2024-07-13 08:20:57.650683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.062 [2024-07-13 08:20:57.650710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:06.062 qpair failed and we were unable to recover it. 00:34:06.062 [2024-07-13 08:20:57.650858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.062 [2024-07-13 08:20:57.650892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:06.062 qpair failed and we were unable to recover it. 00:34:06.062 [2024-07-13 08:20:57.651018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.062 [2024-07-13 08:20:57.651046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:06.062 qpair failed and we were unable to recover it. 00:34:06.062 [2024-07-13 08:20:57.651200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.062 [2024-07-13 08:20:57.651228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:06.062 qpair failed and we were unable to recover it. 00:34:06.062 [2024-07-13 08:20:57.651384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.062 [2024-07-13 08:20:57.651412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:06.062 qpair failed and we were unable to recover it. 00:34:06.062 [2024-07-13 08:20:57.651568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.062 [2024-07-13 08:20:57.651596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:06.062 qpair failed and we were unable to recover it. 00:34:06.062 [2024-07-13 08:20:57.651775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.062 [2024-07-13 08:20:57.651803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:06.062 qpair failed and we were unable to recover it. 00:34:06.062 [2024-07-13 08:20:57.651949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.062 [2024-07-13 08:20:57.651978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:06.062 qpair failed and we were unable to recover it. 00:34:06.062 [2024-07-13 08:20:57.652149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.062 [2024-07-13 08:20:57.652176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:06.062 qpair failed and we were unable to recover it. 00:34:06.062 [2024-07-13 08:20:57.652297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.062 [2024-07-13 08:20:57.652325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:06.062 qpair failed and we were unable to recover it. 
00:34:06.062 [2024-07-13 08:20:57.652501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.062 [2024-07-13 08:20:57.652529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:06.062 qpair failed and we were unable to recover it. 00:34:06.062 [2024-07-13 08:20:57.652660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.062 [2024-07-13 08:20:57.652688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:06.062 qpair failed and we were unable to recover it. 00:34:06.062 [2024-07-13 08:20:57.652840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.062 [2024-07-13 08:20:57.652877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:06.062 qpair failed and we were unable to recover it. 00:34:06.062 [2024-07-13 08:20:57.653024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.062 [2024-07-13 08:20:57.653052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:06.062 qpair failed and we were unable to recover it. 00:34:06.062 [2024-07-13 08:20:57.653217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.062 [2024-07-13 08:20:57.653246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:06.062 qpair failed and we were unable to recover it. 00:34:06.062 [2024-07-13 08:20:57.653428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.062 [2024-07-13 08:20:57.653456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:06.062 qpair failed and we were unable to recover it. 00:34:06.062 [2024-07-13 08:20:57.653638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.062 [2024-07-13 08:20:57.653666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:06.062 qpair failed and we were unable to recover it. 00:34:06.062 [2024-07-13 08:20:57.653800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.062 [2024-07-13 08:20:57.653828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:06.062 qpair failed and we were unable to recover it. 00:34:06.062 [2024-07-13 08:20:57.653986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.062 [2024-07-13 08:20:57.654015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:06.062 qpair failed and we were unable to recover it. 00:34:06.062 [2024-07-13 08:20:57.654166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.062 [2024-07-13 08:20:57.654194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:06.062 qpair failed and we were unable to recover it. 
00:34:06.062 [2024-07-13 08:20:57.654351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.062 [2024-07-13 08:20:57.654379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:06.062 qpair failed and we were unable to recover it. 00:34:06.062 [2024-07-13 08:20:57.654554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.062 [2024-07-13 08:20:57.654583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:06.062 qpair failed and we were unable to recover it. 00:34:06.062 [2024-07-13 08:20:57.654738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.062 [2024-07-13 08:20:57.654765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:06.062 qpair failed and we were unable to recover it. 00:34:06.062 [2024-07-13 08:20:57.654900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.062 [2024-07-13 08:20:57.654930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:06.062 qpair failed and we were unable to recover it. 00:34:06.062 [2024-07-13 08:20:57.655090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.062 [2024-07-13 08:20:57.655119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:06.062 qpair failed and we were unable to recover it. 00:34:06.062 [2024-07-13 08:20:57.655281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.062 [2024-07-13 08:20:57.655309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:06.062 qpair failed and we were unable to recover it. 00:34:06.062 [2024-07-13 08:20:57.655452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.062 [2024-07-13 08:20:57.655480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:06.062 qpair failed and we were unable to recover it. 00:34:06.062 [2024-07-13 08:20:57.655630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.062 [2024-07-13 08:20:57.655659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:06.062 qpair failed and we were unable to recover it. 00:34:06.062 [2024-07-13 08:20:57.655835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.063 [2024-07-13 08:20:57.655863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:06.063 qpair failed and we were unable to recover it. 00:34:06.063 [2024-07-13 08:20:57.656020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.063 [2024-07-13 08:20:57.656049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:06.063 qpair failed and we were unable to recover it. 
00:34:06.063 [2024-07-13 08:20:57.656206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.063 [2024-07-13 08:20:57.656235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:06.063 qpair failed and we were unable to recover it. 00:34:06.063 [2024-07-13 08:20:57.656379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.063 [2024-07-13 08:20:57.656407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:06.063 qpair failed and we were unable to recover it. 00:34:06.063 [2024-07-13 08:20:57.656566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.063 [2024-07-13 08:20:57.656593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:06.063 qpair failed and we were unable to recover it. 00:34:06.063 [2024-07-13 08:20:57.656773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.063 [2024-07-13 08:20:57.656800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:06.063 qpair failed and we were unable to recover it. 00:34:06.063 [2024-07-13 08:20:57.656953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.063 [2024-07-13 08:20:57.656981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:06.063 qpair failed and we were unable to recover it. 00:34:06.063 [2024-07-13 08:20:57.657170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.063 [2024-07-13 08:20:57.657197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:06.063 qpair failed and we were unable to recover it. 00:34:06.063 [2024-07-13 08:20:57.657349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.063 [2024-07-13 08:20:57.657377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:06.063 qpair failed and we were unable to recover it. 00:34:06.063 [2024-07-13 08:20:57.657544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.063 [2024-07-13 08:20:57.657573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:06.063 qpair failed and we were unable to recover it. 00:34:06.063 [2024-07-13 08:20:57.657720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.063 [2024-07-13 08:20:57.657747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:06.063 qpair failed and we were unable to recover it. 00:34:06.063 [2024-07-13 08:20:57.657879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.063 [2024-07-13 08:20:57.657912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:06.063 qpair failed and we were unable to recover it. 
00:34:06.063 [2024-07-13 08:20:57.658072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.063 [2024-07-13 08:20:57.658101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:06.063 qpair failed and we were unable to recover it. 00:34:06.063 [2024-07-13 08:20:57.658259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.063 [2024-07-13 08:20:57.658287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:06.063 qpair failed and we were unable to recover it. 00:34:06.063 [2024-07-13 08:20:57.658439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.063 [2024-07-13 08:20:57.658468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:06.063 qpair failed and we were unable to recover it. 00:34:06.063 [2024-07-13 08:20:57.658646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.063 [2024-07-13 08:20:57.658674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:06.063 qpair failed and we were unable to recover it. 00:34:06.063 [2024-07-13 08:20:57.658852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.063 [2024-07-13 08:20:57.658889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:06.063 qpair failed and we were unable to recover it. 00:34:06.063 [2024-07-13 08:20:57.659018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.063 [2024-07-13 08:20:57.659046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:06.063 qpair failed and we were unable to recover it. 00:34:06.063 [2024-07-13 08:20:57.659174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.063 [2024-07-13 08:20:57.659201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:06.063 qpair failed and we were unable to recover it. 00:34:06.063 [2024-07-13 08:20:57.659353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.063 [2024-07-13 08:20:57.659381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:06.063 qpair failed and we were unable to recover it. 00:34:06.063 [2024-07-13 08:20:57.659533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.063 [2024-07-13 08:20:57.659560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:06.063 qpair failed and we were unable to recover it. 00:34:06.063 [2024-07-13 08:20:57.659691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.063 [2024-07-13 08:20:57.659719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:06.063 qpair failed and we were unable to recover it. 
00:34:06.063 [2024-07-13 08:20:57.659877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.063 [2024-07-13 08:20:57.659905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:06.063 qpair failed and we were unable to recover it. 00:34:06.063 [2024-07-13 08:20:57.660038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.063 [2024-07-13 08:20:57.660066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:06.063 qpair failed and we were unable to recover it. 00:34:06.063 [2024-07-13 08:20:57.660239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.063 [2024-07-13 08:20:57.660266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:06.063 qpair failed and we were unable to recover it. 00:34:06.063 [2024-07-13 08:20:57.660440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.063 [2024-07-13 08:20:57.660468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:06.063 qpair failed and we were unable to recover it. 00:34:06.063 [2024-07-13 08:20:57.660636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.063 [2024-07-13 08:20:57.660664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:06.063 qpair failed and we were unable to recover it. 00:34:06.063 [2024-07-13 08:20:57.660819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.063 [2024-07-13 08:20:57.660848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:06.063 qpair failed and we were unable to recover it. 00:34:06.063 [2024-07-13 08:20:57.661009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.063 [2024-07-13 08:20:57.661037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:06.063 qpair failed and we were unable to recover it. 00:34:06.063 [2024-07-13 08:20:57.661213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.063 [2024-07-13 08:20:57.661240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:06.063 qpair failed and we were unable to recover it. 00:34:06.063 [2024-07-13 08:20:57.661392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.063 [2024-07-13 08:20:57.661419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:06.063 qpair failed and we were unable to recover it. 00:34:06.063 [2024-07-13 08:20:57.661550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.063 [2024-07-13 08:20:57.661577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:06.063 qpair failed and we were unable to recover it. 
00:34:06.063 [2024-07-13 08:20:57.661751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.063 [2024-07-13 08:20:57.661779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:06.063 qpair failed and we were unable to recover it. 00:34:06.063 [2024-07-13 08:20:57.661931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.063 [2024-07-13 08:20:57.661960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:06.063 qpair failed and we were unable to recover it. 00:34:06.063 [2024-07-13 08:20:57.662085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.063 [2024-07-13 08:20:57.662113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:06.064 qpair failed and we were unable to recover it. 00:34:06.064 [2024-07-13 08:20:57.662268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.064 [2024-07-13 08:20:57.662296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:06.064 qpair failed and we were unable to recover it. 00:34:06.064 [2024-07-13 08:20:57.662447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.064 [2024-07-13 08:20:57.662474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:06.064 qpair failed and we were unable to recover it. 00:34:06.064 [2024-07-13 08:20:57.662631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.064 [2024-07-13 08:20:57.662659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:06.064 qpair failed and we were unable to recover it. 00:34:06.064 [2024-07-13 08:20:57.662831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.064 [2024-07-13 08:20:57.662892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.064 qpair failed and we were unable to recover it. 00:34:06.064 [2024-07-13 08:20:57.663042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.064 [2024-07-13 08:20:57.663071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.064 qpair failed and we were unable to recover it. 00:34:06.064 [2024-07-13 08:20:57.663224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.064 [2024-07-13 08:20:57.663252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.064 qpair failed and we were unable to recover it. 00:34:06.064 [2024-07-13 08:20:57.663427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.064 [2024-07-13 08:20:57.663455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.064 qpair failed and we were unable to recover it. 
00:34:06.064 [2024-07-13 08:20:57.663610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.064 [2024-07-13 08:20:57.663637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.064 qpair failed and we were unable to recover it. 00:34:06.064 [2024-07-13 08:20:57.663816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.064 [2024-07-13 08:20:57.663844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.064 qpair failed and we were unable to recover it. 00:34:06.064 [2024-07-13 08:20:57.664009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.064 [2024-07-13 08:20:57.664037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.064 qpair failed and we were unable to recover it. 00:34:06.064 [2024-07-13 08:20:57.664161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.064 [2024-07-13 08:20:57.664188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.064 qpair failed and we were unable to recover it. 00:34:06.064 [2024-07-13 08:20:57.664343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.064 [2024-07-13 08:20:57.664371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.064 qpair failed and we were unable to recover it. 00:34:06.064 [2024-07-13 08:20:57.664547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.064 [2024-07-13 08:20:57.664574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.064 qpair failed and we were unable to recover it. 00:34:06.064 [2024-07-13 08:20:57.664731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.064 [2024-07-13 08:20:57.664759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.064 qpair failed and we were unable to recover it. 00:34:06.064 [2024-07-13 08:20:57.664900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.064 [2024-07-13 08:20:57.664928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.064 qpair failed and we were unable to recover it. 00:34:06.064 [2024-07-13 08:20:57.665116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.064 [2024-07-13 08:20:57.665143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.064 qpair failed and we were unable to recover it. 00:34:06.064 [2024-07-13 08:20:57.665298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.064 [2024-07-13 08:20:57.665330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.064 qpair failed and we were unable to recover it. 
00:34:06.064 [2024-07-13 08:20:57.665471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.064 [2024-07-13 08:20:57.665499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.064 qpair failed and we were unable to recover it. 00:34:06.064 [2024-07-13 08:20:57.665627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.064 [2024-07-13 08:20:57.665654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.064 qpair failed and we were unable to recover it. 00:34:06.064 [2024-07-13 08:20:57.665813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.064 [2024-07-13 08:20:57.665840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.064 qpair failed and we were unable to recover it. 00:34:06.064 [2024-07-13 08:20:57.665983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.064 [2024-07-13 08:20:57.666012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.064 qpair failed and we were unable to recover it. 00:34:06.064 [2024-07-13 08:20:57.666163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.064 [2024-07-13 08:20:57.666190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.064 qpair failed and we were unable to recover it. 00:34:06.064 [2024-07-13 08:20:57.666310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.064 [2024-07-13 08:20:57.666336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.064 qpair failed and we were unable to recover it. 00:34:06.064 [2024-07-13 08:20:57.666482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.064 [2024-07-13 08:20:57.666509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.064 qpair failed and we were unable to recover it. 00:34:06.064 [2024-07-13 08:20:57.666675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.064 [2024-07-13 08:20:57.666701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.064 qpair failed and we were unable to recover it. 00:34:06.064 [2024-07-13 08:20:57.666826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.064 [2024-07-13 08:20:57.666853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.064 qpair failed and we were unable to recover it. 00:34:06.064 [2024-07-13 08:20:57.666981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.064 [2024-07-13 08:20:57.667009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.064 qpair failed and we were unable to recover it. 
00:34:06.064 [2024-07-13 08:20:57.667165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.064 [2024-07-13 08:20:57.667192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.064 qpair failed and we were unable to recover it. 00:34:06.064 [2024-07-13 08:20:57.667341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.064 [2024-07-13 08:20:57.667368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.064 qpair failed and we were unable to recover it. 00:34:06.064 [2024-07-13 08:20:57.667549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.064 [2024-07-13 08:20:57.667576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.064 qpair failed and we were unable to recover it. 00:34:06.064 [2024-07-13 08:20:57.667705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.064 [2024-07-13 08:20:57.667732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.064 qpair failed and we were unable to recover it. 00:34:06.064 [2024-07-13 08:20:57.667857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.064 [2024-07-13 08:20:57.667890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.064 qpair failed and we were unable to recover it. 00:34:06.064 [2024-07-13 08:20:57.668044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.064 [2024-07-13 08:20:57.668070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.064 qpair failed and we were unable to recover it. 00:34:06.064 [2024-07-13 08:20:57.668229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.064 [2024-07-13 08:20:57.668258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:06.064 qpair failed and we were unable to recover it. 00:34:06.064 [2024-07-13 08:20:57.668441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.064 [2024-07-13 08:20:57.668469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:06.064 qpair failed and we were unable to recover it. 00:34:06.064 [2024-07-13 08:20:57.668593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.064 [2024-07-13 08:20:57.668620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:06.064 qpair failed and we were unable to recover it. 00:34:06.064 [2024-07-13 08:20:57.668778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.064 [2024-07-13 08:20:57.668805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:06.064 qpair failed and we were unable to recover it. 
00:34:06.064 [2024-07-13 08:20:57.668958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.064 [2024-07-13 08:20:57.668986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:06.064 qpair failed and we were unable to recover it. 00:34:06.064 [2024-07-13 08:20:57.669146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.064 [2024-07-13 08:20:57.669175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:06.064 qpair failed and we were unable to recover it. 00:34:06.064 [2024-07-13 08:20:57.669317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.064 [2024-07-13 08:20:57.669345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:06.064 qpair failed and we were unable to recover it. 00:34:06.065 [2024-07-13 08:20:57.669503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.065 [2024-07-13 08:20:57.669531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:06.065 qpair failed and we were unable to recover it. 00:34:06.065 [2024-07-13 08:20:57.669690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.065 [2024-07-13 08:20:57.669718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:06.065 qpair failed and we were unable to recover it. 00:34:06.065 [2024-07-13 08:20:57.669877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.065 [2024-07-13 08:20:57.669905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:06.065 qpair failed and we were unable to recover it. 00:34:06.065 [2024-07-13 08:20:57.670089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.065 [2024-07-13 08:20:57.670117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:06.065 qpair failed and we were unable to recover it. 00:34:06.065 [2024-07-13 08:20:57.670263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.065 [2024-07-13 08:20:57.670292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:06.065 qpair failed and we were unable to recover it. 00:34:06.065 [2024-07-13 08:20:57.670420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.065 [2024-07-13 08:20:57.670449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:06.065 qpair failed and we were unable to recover it. 00:34:06.065 [2024-07-13 08:20:57.670582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.065 [2024-07-13 08:20:57.670610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:06.065 qpair failed and we were unable to recover it. 
00:34:06.065 [2024-07-13 08:20:57.670762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.065 [2024-07-13 08:20:57.670792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:06.065 qpair failed and we were unable to recover it. 00:34:06.065 [2024-07-13 08:20:57.670951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.065 [2024-07-13 08:20:57.670981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.065 qpair failed and we were unable to recover it. 00:34:06.065 [2024-07-13 08:20:57.671134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.065 [2024-07-13 08:20:57.671162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.065 qpair failed and we were unable to recover it. 00:34:06.065 [2024-07-13 08:20:57.671319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.065 [2024-07-13 08:20:57.671347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.065 qpair failed and we were unable to recover it. 00:34:06.065 [2024-07-13 08:20:57.671499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.065 [2024-07-13 08:20:57.671526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.065 qpair failed and we were unable to recover it. 00:34:06.065 [2024-07-13 08:20:57.671684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.065 [2024-07-13 08:20:57.671710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.065 qpair failed and we were unable to recover it. 00:34:06.065 [2024-07-13 08:20:57.671832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.065 [2024-07-13 08:20:57.671859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.065 qpair failed and we were unable to recover it. 00:34:06.065 [2024-07-13 08:20:57.672047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.065 [2024-07-13 08:20:57.672075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.065 qpair failed and we were unable to recover it. 00:34:06.065 [2024-07-13 08:20:57.672228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.065 [2024-07-13 08:20:57.672255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.065 qpair failed and we were unable to recover it. 00:34:06.065 [2024-07-13 08:20:57.672432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.065 [2024-07-13 08:20:57.672463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.065 qpair failed and we were unable to recover it. 
00:34:06.065 [2024-07-13 08:20:57.672641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:06.065 [2024-07-13 08:20:57.672668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420
00:34:06.065 qpair failed and we were unable to recover it.
00:34:06.065 [... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." triplet repeats for tqpair=0x7f8fec000b90 through 08:20:57.676 ...]
00:34:06.066 [2024-07-13 08:20:57.677257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:06.066 [2024-07-13 08:20:57.677299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420
00:34:06.066 qpair failed and we were unable to recover it.
00:34:06.066 [... the same triplet repeats for tqpair=0xacd600 through 08:20:57.688 ...]
00:34:06.067 [2024-07-13 08:20:57.689075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:06.067 [2024-07-13 08:20:57.689118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420
00:34:06.067 qpair failed and we were unable to recover it.
00:34:06.070 [... from here through 08:20:57.709 the failures alternate between tqpair=0x7f8fe4000b90 and tqpair=0xacd600; every attempt targets addr=10.0.0.2, port=4420, fails connect() with errno = 111, and ends with "qpair failed and we were unable to recover it." ...]
00:34:06.070 [2024-07-13 08:20:57.710131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.070 [2024-07-13 08:20:57.710159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.070 qpair failed and we were unable to recover it. 00:34:06.070 [2024-07-13 08:20:57.710339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.070 [2024-07-13 08:20:57.710367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.070 qpair failed and we were unable to recover it. 00:34:06.070 [2024-07-13 08:20:57.710547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.070 [2024-07-13 08:20:57.710575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.070 qpair failed and we were unable to recover it. 00:34:06.070 [2024-07-13 08:20:57.710707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.070 [2024-07-13 08:20:57.710735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.070 qpair failed and we were unable to recover it. 00:34:06.070 [2024-07-13 08:20:57.710896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.070 [2024-07-13 08:20:57.710926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.070 qpair failed and we were unable to recover it. 00:34:06.070 [2024-07-13 08:20:57.711087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.070 [2024-07-13 08:20:57.711114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.070 qpair failed and we were unable to recover it. 00:34:06.070 [2024-07-13 08:20:57.711259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.070 [2024-07-13 08:20:57.711286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.070 qpair failed and we were unable to recover it. 00:34:06.070 [2024-07-13 08:20:57.711463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.070 [2024-07-13 08:20:57.711489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.070 qpair failed and we were unable to recover it. 00:34:06.070 [2024-07-13 08:20:57.711619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.070 [2024-07-13 08:20:57.711646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.070 qpair failed and we were unable to recover it. 00:34:06.070 [2024-07-13 08:20:57.711775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.070 [2024-07-13 08:20:57.711802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.070 qpair failed and we were unable to recover it. 
00:34:06.070 [2024-07-13 08:20:57.711929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.070 [2024-07-13 08:20:57.711958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.070 qpair failed and we were unable to recover it. 00:34:06.070 [2024-07-13 08:20:57.712091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.070 [2024-07-13 08:20:57.712119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.070 qpair failed and we were unable to recover it. 00:34:06.070 [2024-07-13 08:20:57.712268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.070 [2024-07-13 08:20:57.712295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.071 qpair failed and we were unable to recover it. 00:34:06.071 [2024-07-13 08:20:57.712428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.071 [2024-07-13 08:20:57.712456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.071 qpair failed and we were unable to recover it. 00:34:06.071 [2024-07-13 08:20:57.712605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.071 [2024-07-13 08:20:57.712632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.071 qpair failed and we were unable to recover it. 00:34:06.071 [2024-07-13 08:20:57.712779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.071 [2024-07-13 08:20:57.712806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.071 qpair failed and we were unable to recover it. 00:34:06.071 [2024-07-13 08:20:57.712942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.071 [2024-07-13 08:20:57.712970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.071 qpair failed and we were unable to recover it. 00:34:06.071 [2024-07-13 08:20:57.713091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.071 [2024-07-13 08:20:57.713117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.071 qpair failed and we were unable to recover it. 00:34:06.071 [2024-07-13 08:20:57.713270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.071 [2024-07-13 08:20:57.713297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.071 qpair failed and we were unable to recover it. 00:34:06.071 [2024-07-13 08:20:57.713428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.071 [2024-07-13 08:20:57.713456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.071 qpair failed and we were unable to recover it. 
00:34:06.071 [2024-07-13 08:20:57.713581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.071 [2024-07-13 08:20:57.713609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.071 qpair failed and we were unable to recover it. 00:34:06.071 [2024-07-13 08:20:57.713730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.071 [2024-07-13 08:20:57.713758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.071 qpair failed and we were unable to recover it. 00:34:06.071 [2024-07-13 08:20:57.713905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.071 [2024-07-13 08:20:57.713933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.071 qpair failed and we were unable to recover it. 00:34:06.071 [2024-07-13 08:20:57.714059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.071 [2024-07-13 08:20:57.714087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.071 qpair failed and we were unable to recover it. 00:34:06.071 [2024-07-13 08:20:57.714264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.071 [2024-07-13 08:20:57.714291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.071 qpair failed and we were unable to recover it. 00:34:06.071 [2024-07-13 08:20:57.714433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.071 [2024-07-13 08:20:57.714459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.071 qpair failed and we were unable to recover it. 00:34:06.071 [2024-07-13 08:20:57.714641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.071 [2024-07-13 08:20:57.714667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.071 qpair failed and we were unable to recover it. 00:34:06.071 [2024-07-13 08:20:57.714799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.071 [2024-07-13 08:20:57.714827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.071 qpair failed and we were unable to recover it. 00:34:06.071 [2024-07-13 08:20:57.715008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.071 [2024-07-13 08:20:57.715049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.071 qpair failed and we were unable to recover it. 00:34:06.071 [2024-07-13 08:20:57.715185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.071 [2024-07-13 08:20:57.715214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.071 qpair failed and we were unable to recover it. 
00:34:06.071 [2024-07-13 08:20:57.715373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.071 [2024-07-13 08:20:57.715402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.071 qpair failed and we were unable to recover it. 00:34:06.071 [2024-07-13 08:20:57.715541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.071 [2024-07-13 08:20:57.715570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.071 qpair failed and we were unable to recover it. 00:34:06.071 [2024-07-13 08:20:57.715706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.071 [2024-07-13 08:20:57.715737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.071 qpair failed and we were unable to recover it. 00:34:06.071 [2024-07-13 08:20:57.715894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.071 [2024-07-13 08:20:57.715923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.071 qpair failed and we were unable to recover it. 00:34:06.071 [2024-07-13 08:20:57.716049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.071 [2024-07-13 08:20:57.716076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.071 qpair failed and we were unable to recover it. 00:34:06.071 [2024-07-13 08:20:57.716259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.071 [2024-07-13 08:20:57.716287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.071 qpair failed and we were unable to recover it. 00:34:06.071 [2024-07-13 08:20:57.716464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.071 [2024-07-13 08:20:57.716490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.071 qpair failed and we were unable to recover it. 00:34:06.071 [2024-07-13 08:20:57.716677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.071 [2024-07-13 08:20:57.716705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.071 qpair failed and we were unable to recover it. 00:34:06.071 [2024-07-13 08:20:57.716873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.071 [2024-07-13 08:20:57.716902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.071 qpair failed and we were unable to recover it. 00:34:06.071 [2024-07-13 08:20:57.717036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.071 [2024-07-13 08:20:57.717064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.071 qpair failed and we were unable to recover it. 
00:34:06.071 [2024-07-13 08:20:57.717196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.071 [2024-07-13 08:20:57.717224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.071 qpair failed and we were unable to recover it. 00:34:06.071 [2024-07-13 08:20:57.717379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.071 [2024-07-13 08:20:57.717408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.071 qpair failed and we were unable to recover it. 00:34:06.071 [2024-07-13 08:20:57.717562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.071 [2024-07-13 08:20:57.717590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.071 qpair failed and we were unable to recover it. 00:34:06.071 [2024-07-13 08:20:57.717738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.071 [2024-07-13 08:20:57.717765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.071 qpair failed and we were unable to recover it. 00:34:06.071 [2024-07-13 08:20:57.717914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.071 [2024-07-13 08:20:57.717943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.071 qpair failed and we were unable to recover it. 00:34:06.071 [2024-07-13 08:20:57.718078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.071 [2024-07-13 08:20:57.718112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.071 qpair failed and we were unable to recover it. 00:34:06.071 [2024-07-13 08:20:57.718273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.071 [2024-07-13 08:20:57.718300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.071 qpair failed and we were unable to recover it. 00:34:06.071 [2024-07-13 08:20:57.718447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.071 [2024-07-13 08:20:57.718475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.071 qpair failed and we were unable to recover it. 00:34:06.071 [2024-07-13 08:20:57.718636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.071 [2024-07-13 08:20:57.718664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.071 qpair failed and we were unable to recover it. 00:34:06.071 [2024-07-13 08:20:57.718819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.071 [2024-07-13 08:20:57.718846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.071 qpair failed and we were unable to recover it. 
00:34:06.071 [2024-07-13 08:20:57.719011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.071 [2024-07-13 08:20:57.719039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.071 qpair failed and we were unable to recover it. 00:34:06.071 [2024-07-13 08:20:57.719171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.071 [2024-07-13 08:20:57.719198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.071 qpair failed and we were unable to recover it. 00:34:06.071 [2024-07-13 08:20:57.719381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.071 [2024-07-13 08:20:57.719408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.071 qpair failed and we were unable to recover it. 00:34:06.071 [2024-07-13 08:20:57.719572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.072 [2024-07-13 08:20:57.719600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.072 qpair failed and we were unable to recover it. 00:34:06.072 [2024-07-13 08:20:57.719760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.072 [2024-07-13 08:20:57.719788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.072 qpair failed and we were unable to recover it. 00:34:06.072 [2024-07-13 08:20:57.719915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.072 [2024-07-13 08:20:57.719943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.072 qpair failed and we were unable to recover it. 00:34:06.072 [2024-07-13 08:20:57.720120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.072 [2024-07-13 08:20:57.720148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.072 qpair failed and we were unable to recover it. 00:34:06.072 [2024-07-13 08:20:57.720303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.072 [2024-07-13 08:20:57.720330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.072 qpair failed and we were unable to recover it. 00:34:06.072 [2024-07-13 08:20:57.720481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.072 [2024-07-13 08:20:57.720508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.072 qpair failed and we were unable to recover it. 00:34:06.072 [2024-07-13 08:20:57.720639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.072 [2024-07-13 08:20:57.720667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.072 qpair failed and we were unable to recover it. 
00:34:06.072 [2024-07-13 08:20:57.720822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.072 [2024-07-13 08:20:57.720849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.072 qpair failed and we were unable to recover it. 00:34:06.072 [2024-07-13 08:20:57.720980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.072 [2024-07-13 08:20:57.721008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.072 qpair failed and we were unable to recover it. 00:34:06.072 [2024-07-13 08:20:57.721133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.072 [2024-07-13 08:20:57.721160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.072 qpair failed and we were unable to recover it. 00:34:06.072 [2024-07-13 08:20:57.721340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.072 [2024-07-13 08:20:57.721368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.072 qpair failed and we were unable to recover it. 00:34:06.072 [2024-07-13 08:20:57.721486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.072 [2024-07-13 08:20:57.721513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.072 qpair failed and we were unable to recover it. 00:34:06.072 [2024-07-13 08:20:57.721648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.072 [2024-07-13 08:20:57.721675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.072 qpair failed and we were unable to recover it. 00:34:06.072 [2024-07-13 08:20:57.721835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.072 [2024-07-13 08:20:57.721863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.072 qpair failed and we were unable to recover it. 00:34:06.072 [2024-07-13 08:20:57.722029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.072 [2024-07-13 08:20:57.722057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.072 qpair failed and we were unable to recover it. 00:34:06.072 [2024-07-13 08:20:57.722215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.072 [2024-07-13 08:20:57.722243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.072 qpair failed and we were unable to recover it. 00:34:06.072 [2024-07-13 08:20:57.722393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.072 [2024-07-13 08:20:57.722421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.072 qpair failed and we were unable to recover it. 
00:34:06.072 [2024-07-13 08:20:57.722602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.072 [2024-07-13 08:20:57.722630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.072 qpair failed and we were unable to recover it. 00:34:06.072 [2024-07-13 08:20:57.722762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.072 [2024-07-13 08:20:57.722791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.072 qpair failed and we were unable to recover it. 00:34:06.072 [2024-07-13 08:20:57.722957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.072 [2024-07-13 08:20:57.722985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.072 qpair failed and we were unable to recover it. 00:34:06.072 [2024-07-13 08:20:57.723140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.072 [2024-07-13 08:20:57.723168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.072 qpair failed and we were unable to recover it. 00:34:06.072 [2024-07-13 08:20:57.723318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.072 [2024-07-13 08:20:57.723346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.072 qpair failed and we were unable to recover it. 00:34:06.072 [2024-07-13 08:20:57.723468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.072 [2024-07-13 08:20:57.723496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.072 qpair failed and we were unable to recover it. 00:34:06.072 [2024-07-13 08:20:57.723679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.072 [2024-07-13 08:20:57.723707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.072 qpair failed and we were unable to recover it. 00:34:06.072 [2024-07-13 08:20:57.723862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.072 [2024-07-13 08:20:57.723895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.072 qpair failed and we were unable to recover it. 00:34:06.072 [2024-07-13 08:20:57.724021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.072 [2024-07-13 08:20:57.724049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.072 qpair failed and we were unable to recover it. 00:34:06.072 [2024-07-13 08:20:57.724203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.072 [2024-07-13 08:20:57.724230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.072 qpair failed and we were unable to recover it. 
00:34:06.072 [2024-07-13 08:20:57.724375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.072 [2024-07-13 08:20:57.724403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.072 qpair failed and we were unable to recover it. 00:34:06.072 [2024-07-13 08:20:57.724555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.072 [2024-07-13 08:20:57.724583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.072 qpair failed and we were unable to recover it. 00:34:06.072 [2024-07-13 08:20:57.724764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.072 [2024-07-13 08:20:57.724791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.072 qpair failed and we were unable to recover it. 00:34:06.072 [2024-07-13 08:20:57.724947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.072 [2024-07-13 08:20:57.724975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.072 qpair failed and we were unable to recover it. 00:34:06.072 [2024-07-13 08:20:57.725126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.072 [2024-07-13 08:20:57.725154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.072 qpair failed and we were unable to recover it. 00:34:06.072 [2024-07-13 08:20:57.725304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.072 [2024-07-13 08:20:57.725336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.072 qpair failed and we were unable to recover it. 00:34:06.072 [2024-07-13 08:20:57.725496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.072 [2024-07-13 08:20:57.725522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.072 qpair failed and we were unable to recover it. 00:34:06.072 [2024-07-13 08:20:57.725701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.072 [2024-07-13 08:20:57.725728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.073 qpair failed and we were unable to recover it. 00:34:06.073 [2024-07-13 08:20:57.725891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.073 [2024-07-13 08:20:57.725918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.073 qpair failed and we were unable to recover it. 00:34:06.073 [2024-07-13 08:20:57.726107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.073 [2024-07-13 08:20:57.726134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.073 qpair failed and we were unable to recover it. 
00:34:06.073 [2024-07-13 08:20:57.726269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.073 [2024-07-13 08:20:57.726295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.073 qpair failed and we were unable to recover it. 00:34:06.073 [2024-07-13 08:20:57.726422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.073 [2024-07-13 08:20:57.726449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.073 qpair failed and we were unable to recover it. 00:34:06.073 [2024-07-13 08:20:57.726606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.073 [2024-07-13 08:20:57.726632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.073 qpair failed and we were unable to recover it. 00:34:06.073 [2024-07-13 08:20:57.726783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.073 [2024-07-13 08:20:57.726811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.073 qpair failed and we were unable to recover it. 00:34:06.073 [2024-07-13 08:20:57.726942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.073 [2024-07-13 08:20:57.726970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.073 qpair failed and we were unable to recover it. 00:34:06.073 [2024-07-13 08:20:57.727115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.073 [2024-07-13 08:20:57.727142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.073 qpair failed and we were unable to recover it. 00:34:06.073 [2024-07-13 08:20:57.727295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.073 [2024-07-13 08:20:57.727321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.073 qpair failed and we were unable to recover it. 00:34:06.073 [2024-07-13 08:20:57.727466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.073 [2024-07-13 08:20:57.727493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.073 qpair failed and we were unable to recover it. 00:34:06.073 [2024-07-13 08:20:57.727673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.073 [2024-07-13 08:20:57.727700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.073 qpair failed and we were unable to recover it. 00:34:06.073 [2024-07-13 08:20:57.727856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.073 [2024-07-13 08:20:57.727888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.073 qpair failed and we were unable to recover it. 
00:34:06.073 [2024-07-13 08:20:57.728045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.073 [2024-07-13 08:20:57.728072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.073 qpair failed and we were unable to recover it. 00:34:06.073 [2024-07-13 08:20:57.728226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.073 [2024-07-13 08:20:57.728253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.073 qpair failed and we were unable to recover it. 00:34:06.073 [2024-07-13 08:20:57.728415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.073 [2024-07-13 08:20:57.728442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.073 qpair failed and we were unable to recover it. 00:34:06.073 [2024-07-13 08:20:57.728568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.073 [2024-07-13 08:20:57.728595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.073 qpair failed and we were unable to recover it. 00:34:06.073 [2024-07-13 08:20:57.728738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.073 [2024-07-13 08:20:57.728765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.073 qpair failed and we were unable to recover it. 00:34:06.073 [2024-07-13 08:20:57.728934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.073 [2024-07-13 08:20:57.728961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.073 qpair failed and we were unable to recover it. 00:34:06.073 [2024-07-13 08:20:57.729089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.073 [2024-07-13 08:20:57.729118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.073 qpair failed and we were unable to recover it. 00:34:06.073 [2024-07-13 08:20:57.729252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.073 [2024-07-13 08:20:57.729278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.073 qpair failed and we were unable to recover it. 00:34:06.073 [2024-07-13 08:20:57.729393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.073 [2024-07-13 08:20:57.729420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.073 qpair failed and we were unable to recover it. 00:34:06.073 [2024-07-13 08:20:57.729533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.073 [2024-07-13 08:20:57.729560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.073 qpair failed and we were unable to recover it. 
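On Linux, errno = 111 is ECONNREFUSED: the initiator can reach 10.0.0.2, but nothing is accepting on port 4420 (the NVMe/TCP well-known port) at that moment, so every connect() is refused and the qpair setup fails. A minimal standalone C sketch -- not SPDK code, only assuming a Linux host with no listener on the address/port taken from the log above -- reproduces the same errno:

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = { 0 };
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);              /* NVMe/TCP well-known port */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With the host reachable but no listener on the port, this prints:
         * connect() failed, errno = 111 (Connection refused).
         * If the host were unreachable it would time out instead. */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}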
00:34:06.073 [2024-07-13 08:20:57.729684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.073 [2024-07-13 08:20:57.729712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.073 qpair failed and we were unable to recover it.
00:34:06.073 [2024-07-13 08:20:57.729843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.073 [2024-07-13 08:20:57.729875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.073 qpair failed and we were unable to recover it.
00:34:06.073 [2024-07-13 08:20:57.729996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.073 [2024-07-13 08:20:57.730031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.073 qpair failed and we were unable to recover it.
00:34:06.073 [2024-07-13 08:20:57.730110] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:06.073 [2024-07-13 08:20:57.730145] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:06.073 [2024-07-13 08:20:57.730161] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:06.074 [2024-07-13 08:20:57.730174] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:06.074 [2024-07-13 08:20:57.730185] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:34:06.074 [2024-07-13 08:20:57.730253] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:34:06.074 [2024-07-13 08:20:57.730309] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:34:06.074 [2024-07-13 08:20:57.730359] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:34:06.074 [2024-07-13 08:20:57.730362] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
00:34:06.073 [2024-07-13 08:20:57.730158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.074 [2024-07-13 08:20:57.730184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.074 qpair failed and we were unable to recover it.
00:34:06.074 [2024-07-13 08:20:57.730352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.074 [2024-07-13 08:20:57.730378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.074 qpair failed and we were unable to recover it.
00:34:06.074 [2024-07-13 08:20:57.730509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.074 [2024-07-13 08:20:57.730534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.074 qpair failed and we were unable to recover it.
00:34:06.074 [2024-07-13 08:20:57.730689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.074 [2024-07-13 08:20:57.730715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.074 qpair failed and we were unable to recover it.
00:34:06.074 [2024-07-13 08:20:57.730877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.074 [2024-07-13 08:20:57.730905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.074 qpair failed and we were unable to recover it.
[The identical failure triplet keeps repeating from 08:20:57.731114 through 08:20:57.739519 (elapsed 00:34:06.074-00:34:06.353), cycling between runs on tqpair=0xacd600 and tqpair=0x7f8fe4000b90, with every attempt refused at addr=10.0.0.2, port=4420.]
00:34:06.353 [2024-07-13 08:20:57.739710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.353 [2024-07-13 08:20:57.739736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.353 qpair failed and we were unable to recover it. 00:34:06.353 [2024-07-13 08:20:57.739893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.353 [2024-07-13 08:20:57.739922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.353 qpair failed and we were unable to recover it. 00:34:06.353 [2024-07-13 08:20:57.740048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.353 [2024-07-13 08:20:57.740075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.353 qpair failed and we were unable to recover it. 00:34:06.353 [2024-07-13 08:20:57.740204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.353 [2024-07-13 08:20:57.740240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.353 qpair failed and we were unable to recover it. 00:34:06.353 [2024-07-13 08:20:57.740376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.353 [2024-07-13 08:20:57.740404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.353 qpair failed and we were unable to recover it. 00:34:06.353 [2024-07-13 08:20:57.740529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.353 [2024-07-13 08:20:57.740556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.353 qpair failed and we were unable to recover it. 00:34:06.353 [2024-07-13 08:20:57.740680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.353 [2024-07-13 08:20:57.740707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.353 qpair failed and we were unable to recover it. 00:34:06.353 [2024-07-13 08:20:57.740861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.353 [2024-07-13 08:20:57.740925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.353 qpair failed and we were unable to recover it. 00:34:06.353 [2024-07-13 08:20:57.741077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.353 [2024-07-13 08:20:57.741103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.353 qpair failed and we were unable to recover it. 00:34:06.353 [2024-07-13 08:20:57.741282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.353 [2024-07-13 08:20:57.741314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.353 qpair failed and we were unable to recover it. 
00:34:06.353 [2024-07-13 08:20:57.741463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.353 [2024-07-13 08:20:57.741489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.353 qpair failed and we were unable to recover it. 00:34:06.353 [2024-07-13 08:20:57.741618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.353 [2024-07-13 08:20:57.741645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.353 qpair failed and we were unable to recover it. 00:34:06.353 [2024-07-13 08:20:57.741770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.353 [2024-07-13 08:20:57.741798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.353 qpair failed and we were unable to recover it. 00:34:06.353 [2024-07-13 08:20:57.741961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.353 [2024-07-13 08:20:57.741989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.353 qpair failed and we were unable to recover it. 00:34:06.353 [2024-07-13 08:20:57.742119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.353 [2024-07-13 08:20:57.742147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.353 qpair failed and we were unable to recover it. 00:34:06.353 [2024-07-13 08:20:57.742326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.353 [2024-07-13 08:20:57.742353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.353 qpair failed and we were unable to recover it. 00:34:06.353 [2024-07-13 08:20:57.742478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.353 [2024-07-13 08:20:57.742505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.353 qpair failed and we were unable to recover it. 00:34:06.353 [2024-07-13 08:20:57.742663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.353 [2024-07-13 08:20:57.742689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.353 qpair failed and we were unable to recover it. 00:34:06.353 [2024-07-13 08:20:57.742843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.353 [2024-07-13 08:20:57.742875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.353 qpair failed and we were unable to recover it. 00:34:06.353 [2024-07-13 08:20:57.743039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.353 [2024-07-13 08:20:57.743066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.353 qpair failed and we were unable to recover it. 
00:34:06.353 [2024-07-13 08:20:57.743216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.353 [2024-07-13 08:20:57.743243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.353 qpair failed and we were unable to recover it. 00:34:06.353 [2024-07-13 08:20:57.743369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.353 [2024-07-13 08:20:57.743396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.354 qpair failed and we were unable to recover it. 00:34:06.354 [2024-07-13 08:20:57.743518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.354 [2024-07-13 08:20:57.743545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.354 qpair failed and we were unable to recover it. 00:34:06.354 [2024-07-13 08:20:57.743688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.354 [2024-07-13 08:20:57.743715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.354 qpair failed and we were unable to recover it. 00:34:06.354 [2024-07-13 08:20:57.743923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.354 [2024-07-13 08:20:57.743955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.354 qpair failed and we were unable to recover it. 00:34:06.354 [2024-07-13 08:20:57.744089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.354 [2024-07-13 08:20:57.744116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.354 qpair failed and we were unable to recover it. 00:34:06.354 [2024-07-13 08:20:57.744242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.354 [2024-07-13 08:20:57.744269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.354 qpair failed and we were unable to recover it. 00:34:06.354 [2024-07-13 08:20:57.744389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.354 [2024-07-13 08:20:57.744416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.354 qpair failed and we were unable to recover it. 00:34:06.354 [2024-07-13 08:20:57.744596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.354 [2024-07-13 08:20:57.744623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.354 qpair failed and we were unable to recover it. 00:34:06.354 [2024-07-13 08:20:57.744749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.354 [2024-07-13 08:20:57.744775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.354 qpair failed and we were unable to recover it. 
00:34:06.354 [2024-07-13 08:20:57.744928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.354 [2024-07-13 08:20:57.744955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.354 qpair failed and we were unable to recover it. 00:34:06.354 [2024-07-13 08:20:57.745071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.354 [2024-07-13 08:20:57.745097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.354 qpair failed and we were unable to recover it. 00:34:06.354 [2024-07-13 08:20:57.745246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.354 [2024-07-13 08:20:57.745271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.354 qpair failed and we were unable to recover it. 00:34:06.354 [2024-07-13 08:20:57.745424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.354 [2024-07-13 08:20:57.745451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.354 qpair failed and we were unable to recover it. 00:34:06.354 [2024-07-13 08:20:57.745579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.354 [2024-07-13 08:20:57.745606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.354 qpair failed and we were unable to recover it. 00:34:06.354 [2024-07-13 08:20:57.745731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.354 [2024-07-13 08:20:57.745757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.354 qpair failed and we were unable to recover it. 00:34:06.354 [2024-07-13 08:20:57.745906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.354 [2024-07-13 08:20:57.745939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.354 qpair failed and we were unable to recover it. 00:34:06.354 [2024-07-13 08:20:57.746056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.354 [2024-07-13 08:20:57.746082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.354 qpair failed and we were unable to recover it. 00:34:06.354 [2024-07-13 08:20:57.746228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.354 [2024-07-13 08:20:57.746254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.354 qpair failed and we were unable to recover it. 00:34:06.354 [2024-07-13 08:20:57.746377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.354 [2024-07-13 08:20:57.746404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.354 qpair failed and we were unable to recover it. 
00:34:06.354 [2024-07-13 08:20:57.746549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.354 [2024-07-13 08:20:57.746575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.354 qpair failed and we were unable to recover it. 00:34:06.354 [2024-07-13 08:20:57.746695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.354 [2024-07-13 08:20:57.746721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.354 qpair failed and we were unable to recover it. 00:34:06.354 [2024-07-13 08:20:57.746878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.354 [2024-07-13 08:20:57.746905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.354 qpair failed and we were unable to recover it. 00:34:06.354 [2024-07-13 08:20:57.747024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.354 [2024-07-13 08:20:57.747050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.354 qpair failed and we were unable to recover it. 00:34:06.354 [2024-07-13 08:20:57.747241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.354 [2024-07-13 08:20:57.747267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.354 qpair failed and we were unable to recover it. 00:34:06.354 [2024-07-13 08:20:57.747399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.354 [2024-07-13 08:20:57.747426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.354 qpair failed and we were unable to recover it. 00:34:06.354 [2024-07-13 08:20:57.747584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.354 [2024-07-13 08:20:57.747610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.354 qpair failed and we were unable to recover it. 00:34:06.354 [2024-07-13 08:20:57.747734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.354 [2024-07-13 08:20:57.747761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.354 qpair failed and we were unable to recover it. 00:34:06.354 [2024-07-13 08:20:57.747886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.354 [2024-07-13 08:20:57.747915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.354 qpair failed and we were unable to recover it. 00:34:06.354 [2024-07-13 08:20:57.748067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.354 [2024-07-13 08:20:57.748093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.354 qpair failed and we were unable to recover it. 
00:34:06.354 [2024-07-13 08:20:57.748250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.354 [2024-07-13 08:20:57.748276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.354 qpair failed and we were unable to recover it. 00:34:06.354 [2024-07-13 08:20:57.748436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.354 [2024-07-13 08:20:57.748462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.354 qpair failed and we were unable to recover it. 00:34:06.354 [2024-07-13 08:20:57.748589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.354 [2024-07-13 08:20:57.748615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.354 qpair failed and we were unable to recover it. 00:34:06.354 [2024-07-13 08:20:57.748766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.354 [2024-07-13 08:20:57.748792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.354 qpair failed and we were unable to recover it. 00:34:06.354 [2024-07-13 08:20:57.748943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.354 [2024-07-13 08:20:57.748970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.354 qpair failed and we were unable to recover it. 00:34:06.354 [2024-07-13 08:20:57.749119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.354 [2024-07-13 08:20:57.749145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.354 qpair failed and we were unable to recover it. 00:34:06.354 [2024-07-13 08:20:57.749267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.354 [2024-07-13 08:20:57.749294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.354 qpair failed and we were unable to recover it. 00:34:06.354 [2024-07-13 08:20:57.749432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.354 [2024-07-13 08:20:57.749458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.354 qpair failed and we were unable to recover it. 00:34:06.354 [2024-07-13 08:20:57.749611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.354 [2024-07-13 08:20:57.749637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.354 qpair failed and we were unable to recover it. 00:34:06.354 [2024-07-13 08:20:57.749779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.354 [2024-07-13 08:20:57.749805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.354 qpair failed and we were unable to recover it. 
00:34:06.354 [2024-07-13 08:20:57.749935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.354 [2024-07-13 08:20:57.749962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.354 qpair failed and we were unable to recover it. 00:34:06.354 [2024-07-13 08:20:57.750107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.354 [2024-07-13 08:20:57.750133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.354 qpair failed and we were unable to recover it. 00:34:06.354 [2024-07-13 08:20:57.750278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.354 [2024-07-13 08:20:57.750304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.355 qpair failed and we were unable to recover it. 00:34:06.355 [2024-07-13 08:20:57.750423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.355 [2024-07-13 08:20:57.750449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.355 qpair failed and we were unable to recover it. 00:34:06.355 [2024-07-13 08:20:57.750576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.355 [2024-07-13 08:20:57.750603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.355 qpair failed and we were unable to recover it. 00:34:06.355 [2024-07-13 08:20:57.750732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.355 [2024-07-13 08:20:57.750758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.355 qpair failed and we were unable to recover it. 00:34:06.355 [2024-07-13 08:20:57.750882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.355 [2024-07-13 08:20:57.750909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.355 qpair failed and we were unable to recover it. 00:34:06.355 [2024-07-13 08:20:57.751041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.355 [2024-07-13 08:20:57.751067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.355 qpair failed and we were unable to recover it. 00:34:06.355 [2024-07-13 08:20:57.751195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.355 [2024-07-13 08:20:57.751222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.355 qpair failed and we were unable to recover it. 00:34:06.355 [2024-07-13 08:20:57.751371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.355 [2024-07-13 08:20:57.751397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.355 qpair failed and we were unable to recover it. 
00:34:06.355 [2024-07-13 08:20:57.751541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.355 [2024-07-13 08:20:57.751567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.355 qpair failed and we were unable to recover it. 00:34:06.355 [2024-07-13 08:20:57.751692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.355 [2024-07-13 08:20:57.751718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.355 qpair failed and we were unable to recover it. 00:34:06.355 [2024-07-13 08:20:57.751864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.355 [2024-07-13 08:20:57.751905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.355 qpair failed and we were unable to recover it. 00:34:06.355 [2024-07-13 08:20:57.752065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.355 [2024-07-13 08:20:57.752092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.355 qpair failed and we were unable to recover it. 00:34:06.355 [2024-07-13 08:20:57.752211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.355 [2024-07-13 08:20:57.752237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.355 qpair failed and we were unable to recover it. 00:34:06.355 [2024-07-13 08:20:57.752370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.355 [2024-07-13 08:20:57.752397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.355 qpair failed and we were unable to recover it. 00:34:06.355 [2024-07-13 08:20:57.752533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.355 [2024-07-13 08:20:57.752560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.355 qpair failed and we were unable to recover it. 00:34:06.355 [2024-07-13 08:20:57.752712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.355 [2024-07-13 08:20:57.752742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.355 qpair failed and we were unable to recover it. 00:34:06.355 [2024-07-13 08:20:57.752898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.355 [2024-07-13 08:20:57.752926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.355 qpair failed and we were unable to recover it. 00:34:06.355 [2024-07-13 08:20:57.753058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.355 [2024-07-13 08:20:57.753085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.355 qpair failed and we were unable to recover it. 
00:34:06.355 [2024-07-13 08:20:57.753241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.355 [2024-07-13 08:20:57.753269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.355 qpair failed and we were unable to recover it. 00:34:06.355 [2024-07-13 08:20:57.753425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.355 [2024-07-13 08:20:57.753451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.355 qpair failed and we were unable to recover it. 00:34:06.355 [2024-07-13 08:20:57.753581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.355 [2024-07-13 08:20:57.753608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.355 qpair failed and we were unable to recover it. 00:34:06.355 [2024-07-13 08:20:57.753728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.355 [2024-07-13 08:20:57.753755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.355 qpair failed and we were unable to recover it. 00:34:06.355 [2024-07-13 08:20:57.753893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.355 [2024-07-13 08:20:57.753920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.355 qpair failed and we were unable to recover it. 00:34:06.355 [2024-07-13 08:20:57.754043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.355 [2024-07-13 08:20:57.754069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.355 qpair failed and we were unable to recover it. 00:34:06.355 [2024-07-13 08:20:57.754187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.355 [2024-07-13 08:20:57.754213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.355 qpair failed and we were unable to recover it. 00:34:06.355 [2024-07-13 08:20:57.754365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.355 [2024-07-13 08:20:57.754391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.355 qpair failed and we were unable to recover it. 00:34:06.355 [2024-07-13 08:20:57.754536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.355 [2024-07-13 08:20:57.754562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.355 qpair failed and we were unable to recover it. 00:34:06.355 [2024-07-13 08:20:57.754682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.355 [2024-07-13 08:20:57.754708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.355 qpair failed and we were unable to recover it. 
00:34:06.355 [2024-07-13 08:20:57.754863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.355 [2024-07-13 08:20:57.754923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.355 qpair failed and we were unable to recover it. 00:34:06.355 [2024-07-13 08:20:57.755119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.355 [2024-07-13 08:20:57.755148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.355 qpair failed and we were unable to recover it. 00:34:06.355 [2024-07-13 08:20:57.755304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.355 [2024-07-13 08:20:57.755330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.355 qpair failed and we were unable to recover it. 00:34:06.355 [2024-07-13 08:20:57.755510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.355 [2024-07-13 08:20:57.755537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.355 qpair failed and we were unable to recover it. 00:34:06.355 [2024-07-13 08:20:57.755654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.355 [2024-07-13 08:20:57.755682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.355 qpair failed and we were unable to recover it. 00:34:06.355 [2024-07-13 08:20:57.755833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.355 [2024-07-13 08:20:57.755860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.355 qpair failed and we were unable to recover it. 00:34:06.355 [2024-07-13 08:20:57.756003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.355 [2024-07-13 08:20:57.756031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.355 qpair failed and we were unable to recover it. 00:34:06.355 [2024-07-13 08:20:57.756168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.355 [2024-07-13 08:20:57.756195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.355 qpair failed and we were unable to recover it. 00:34:06.355 [2024-07-13 08:20:57.756350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.355 [2024-07-13 08:20:57.756377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.355 qpair failed and we were unable to recover it. 00:34:06.355 [2024-07-13 08:20:57.756506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.355 [2024-07-13 08:20:57.756533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.355 qpair failed and we were unable to recover it. 
00:34:06.355 [2024-07-13 08:20:57.756696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.355 [2024-07-13 08:20:57.756723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.355 qpair failed and we were unable to recover it. 00:34:06.355 [2024-07-13 08:20:57.756883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.355 [2024-07-13 08:20:57.756911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.355 qpair failed and we were unable to recover it. 00:34:06.355 [2024-07-13 08:20:57.757038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.355 [2024-07-13 08:20:57.757066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.355 qpair failed and we were unable to recover it. 00:34:06.355 [2024-07-13 08:20:57.757189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.355 [2024-07-13 08:20:57.757215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.356 qpair failed and we were unable to recover it. 00:34:06.356 [2024-07-13 08:20:57.757352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.356 [2024-07-13 08:20:57.757379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.356 qpair failed and we were unable to recover it. 00:34:06.356 [2024-07-13 08:20:57.757511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.356 [2024-07-13 08:20:57.757539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.356 qpair failed and we were unable to recover it. 00:34:06.356 [2024-07-13 08:20:57.757662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.356 [2024-07-13 08:20:57.757689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.356 qpair failed and we were unable to recover it. 00:34:06.356 [2024-07-13 08:20:57.757822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.356 [2024-07-13 08:20:57.757849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.356 qpair failed and we were unable to recover it. 00:34:06.356 [2024-07-13 08:20:57.757990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.356 [2024-07-13 08:20:57.758019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.356 qpair failed and we were unable to recover it. 00:34:06.356 [2024-07-13 08:20:57.758168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.356 [2024-07-13 08:20:57.758195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.356 qpair failed and we were unable to recover it. 
00:34:06.356 [2024-07-13 08:20:57.758348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.356 [2024-07-13 08:20:57.758375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.356 qpair failed and we were unable to recover it. 00:34:06.356 [2024-07-13 08:20:57.758501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.356 [2024-07-13 08:20:57.758528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.356 qpair failed and we were unable to recover it. 00:34:06.356 [2024-07-13 08:20:57.758679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.356 [2024-07-13 08:20:57.758706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.356 qpair failed and we were unable to recover it. 00:34:06.356 [2024-07-13 08:20:57.758857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.356 [2024-07-13 08:20:57.758892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.356 qpair failed and we were unable to recover it. 00:34:06.356 [2024-07-13 08:20:57.759046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.356 [2024-07-13 08:20:57.759074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.356 qpair failed and we were unable to recover it. 00:34:06.356 [2024-07-13 08:20:57.759218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.356 [2024-07-13 08:20:57.759245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.356 qpair failed and we were unable to recover it. 00:34:06.356 [2024-07-13 08:20:57.759375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.356 [2024-07-13 08:20:57.759402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.356 qpair failed and we were unable to recover it. 00:34:06.356 [2024-07-13 08:20:57.759586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.356 [2024-07-13 08:20:57.759613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.356 qpair failed and we were unable to recover it. 00:34:06.356 [2024-07-13 08:20:57.759767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.356 [2024-07-13 08:20:57.759794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.356 qpair failed and we were unable to recover it. 00:34:06.356 [2024-07-13 08:20:57.759942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.356 [2024-07-13 08:20:57.759970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.356 qpair failed and we were unable to recover it. 
00:34:06.356 [2024-07-13 08:20:57.760122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.356 [2024-07-13 08:20:57.760149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.356 qpair failed and we were unable to recover it. 00:34:06.356 [2024-07-13 08:20:57.760307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.356 [2024-07-13 08:20:57.760334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.356 qpair failed and we were unable to recover it. 00:34:06.356 [2024-07-13 08:20:57.760478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.356 [2024-07-13 08:20:57.760505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.356 qpair failed and we were unable to recover it. 00:34:06.356 [2024-07-13 08:20:57.760626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.356 [2024-07-13 08:20:57.760653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.356 qpair failed and we were unable to recover it. 00:34:06.356 [2024-07-13 08:20:57.760771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.356 [2024-07-13 08:20:57.760798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.356 qpair failed and we were unable to recover it. 00:34:06.356 [2024-07-13 08:20:57.760927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.356 [2024-07-13 08:20:57.760956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.356 qpair failed and we were unable to recover it. 00:34:06.356 [2024-07-13 08:20:57.761113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.356 [2024-07-13 08:20:57.761141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.356 qpair failed and we were unable to recover it. 00:34:06.356 [2024-07-13 08:20:57.761298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.356 [2024-07-13 08:20:57.761325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.356 qpair failed and we were unable to recover it. 00:34:06.356 [2024-07-13 08:20:57.761448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.356 [2024-07-13 08:20:57.761474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.356 qpair failed and we were unable to recover it. 00:34:06.356 [2024-07-13 08:20:57.761619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.356 [2024-07-13 08:20:57.761648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.356 qpair failed and we were unable to recover it. 
00:34:06.356 [2024-07-13 08:20:57.761884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:06.356 [2024-07-13 08:20:57.761921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420
00:34:06.356 qpair failed and we were unable to recover it.
00:34:06.356 [2024-07-13 08:20:57.762809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:06.356 [2024-07-13 08:20:57.762839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420
00:34:06.356 qpair failed and we were unable to recover it.
00:34:06.356 [2024-07-13 08:20:57.763012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:06.356 [2024-07-13 08:20:57.763054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420
00:34:06.356 qpair failed and we were unable to recover it.
[... the same three-line failure sequence (connect() failed, errno = 111; sock connection error of the tqpair with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats for the remaining connection attempts logged between 08:20:57.763204 and 08:20:57.796803, cycling over tqpair values 0x7f8fe4000b90, 0x7f8fec000b90, and 0xacd600 ...]
00:34:06.362 [2024-07-13 08:20:57.796950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:06.362 [2024-07-13 08:20:57.796994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420
00:34:06.362 qpair failed and we were unable to recover it.
00:34:06.362 [2024-07-13 08:20:57.797722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:06.362 [2024-07-13 08:20:57.797751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420
00:34:06.362 qpair failed and we were unable to recover it.
00:34:06.362 [2024-07-13 08:20:57.797936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.362 [2024-07-13 08:20:57.797965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:06.362 qpair failed and we were unable to recover it. 00:34:06.362 [2024-07-13 08:20:57.798123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.362 [2024-07-13 08:20:57.798150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.362 qpair failed and we were unable to recover it. 00:34:06.362 [2024-07-13 08:20:57.798306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.362 [2024-07-13 08:20:57.798335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.362 qpair failed and we were unable to recover it. 00:34:06.362 [2024-07-13 08:20:57.798489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.362 [2024-07-13 08:20:57.798516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.362 qpair failed and we were unable to recover it. 00:34:06.362 [2024-07-13 08:20:57.798637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.362 [2024-07-13 08:20:57.798665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.362 qpair failed and we were unable to recover it. 00:34:06.362 [2024-07-13 08:20:57.798789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.362 [2024-07-13 08:20:57.798817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.362 qpair failed and we were unable to recover it. 00:34:06.362 [2024-07-13 08:20:57.798998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.362 [2024-07-13 08:20:57.799039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.362 qpair failed and we were unable to recover it. 00:34:06.362 [2024-07-13 08:20:57.799174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.362 [2024-07-13 08:20:57.799204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.362 qpair failed and we were unable to recover it. 00:34:06.362 [2024-07-13 08:20:57.799334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.362 [2024-07-13 08:20:57.799362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.362 qpair failed and we were unable to recover it. 00:34:06.362 [2024-07-13 08:20:57.799522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.362 [2024-07-13 08:20:57.799549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.362 qpair failed and we were unable to recover it. 
00:34:06.362 [2024-07-13 08:20:57.799700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.362 [2024-07-13 08:20:57.799727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.362 qpair failed and we were unable to recover it. 00:34:06.362 [2024-07-13 08:20:57.799929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.362 [2024-07-13 08:20:57.799956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.362 qpair failed and we were unable to recover it. 00:34:06.362 [2024-07-13 08:20:57.800091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.362 [2024-07-13 08:20:57.800118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.362 qpair failed and we were unable to recover it. 00:34:06.362 [2024-07-13 08:20:57.800238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.362 [2024-07-13 08:20:57.800265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.362 qpair failed and we were unable to recover it. 00:34:06.362 [2024-07-13 08:20:57.800389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.362 [2024-07-13 08:20:57.800416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.362 qpair failed and we were unable to recover it. 00:34:06.362 [2024-07-13 08:20:57.800572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.362 [2024-07-13 08:20:57.800599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.362 qpair failed and we were unable to recover it. 00:34:06.362 [2024-07-13 08:20:57.800730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.362 [2024-07-13 08:20:57.800760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:06.362 qpair failed and we were unable to recover it. 00:34:06.362 [2024-07-13 08:20:57.800902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.362 [2024-07-13 08:20:57.800954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.362 qpair failed and we were unable to recover it. 00:34:06.362 [2024-07-13 08:20:57.801091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.362 [2024-07-13 08:20:57.801119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.362 qpair failed and we were unable to recover it. 00:34:06.362 [2024-07-13 08:20:57.801306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.362 [2024-07-13 08:20:57.801339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.362 qpair failed and we were unable to recover it. 
00:34:06.362 [2024-07-13 08:20:57.801462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.362 [2024-07-13 08:20:57.801490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.362 qpair failed and we were unable to recover it. 00:34:06.362 [2024-07-13 08:20:57.801609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.362 [2024-07-13 08:20:57.801636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.362 qpair failed and we were unable to recover it. 00:34:06.362 [2024-07-13 08:20:57.801771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.362 [2024-07-13 08:20:57.801799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.362 qpair failed and we were unable to recover it. 00:34:06.362 [2024-07-13 08:20:57.801974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.362 [2024-07-13 08:20:57.802002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.362 qpair failed and we were unable to recover it. 00:34:06.362 [2024-07-13 08:20:57.802127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.362 [2024-07-13 08:20:57.802167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.362 qpair failed and we were unable to recover it. 00:34:06.362 [2024-07-13 08:20:57.802292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.362 [2024-07-13 08:20:57.802320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.362 qpair failed and we were unable to recover it. 00:34:06.362 [2024-07-13 08:20:57.802441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.362 [2024-07-13 08:20:57.802468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.362 qpair failed and we were unable to recover it. 00:34:06.362 [2024-07-13 08:20:57.802591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.362 [2024-07-13 08:20:57.802619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.362 qpair failed and we were unable to recover it. 00:34:06.362 [2024-07-13 08:20:57.802747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.362 [2024-07-13 08:20:57.802775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.362 qpair failed and we were unable to recover it. 00:34:06.362 [2024-07-13 08:20:57.802904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.362 [2024-07-13 08:20:57.802932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.362 qpair failed and we were unable to recover it. 
00:34:06.362 [2024-07-13 08:20:57.803060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.362 [2024-07-13 08:20:57.803086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.362 qpair failed and we were unable to recover it. 00:34:06.362 [2024-07-13 08:20:57.803217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.362 [2024-07-13 08:20:57.803244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.362 qpair failed and we were unable to recover it. 00:34:06.362 [2024-07-13 08:20:57.803420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.362 [2024-07-13 08:20:57.803447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.362 qpair failed and we were unable to recover it. 00:34:06.362 [2024-07-13 08:20:57.803588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.362 [2024-07-13 08:20:57.803615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.362 qpair failed and we were unable to recover it. 00:34:06.363 [2024-07-13 08:20:57.803743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.363 [2024-07-13 08:20:57.803771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.363 qpair failed and we were unable to recover it. 00:34:06.363 [2024-07-13 08:20:57.803904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.363 [2024-07-13 08:20:57.803931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.363 qpair failed and we were unable to recover it. 00:34:06.363 [2024-07-13 08:20:57.804087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.363 [2024-07-13 08:20:57.804113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.363 qpair failed and we were unable to recover it. 00:34:06.363 [2024-07-13 08:20:57.804238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.363 [2024-07-13 08:20:57.804267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.363 qpair failed and we were unable to recover it. 00:34:06.363 [2024-07-13 08:20:57.804401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.363 [2024-07-13 08:20:57.804429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.363 qpair failed and we were unable to recover it. 00:34:06.363 [2024-07-13 08:20:57.804556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.363 [2024-07-13 08:20:57.804583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.363 qpair failed and we were unable to recover it. 
00:34:06.363 [2024-07-13 08:20:57.804721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.363 [2024-07-13 08:20:57.804751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.363 qpair failed and we were unable to recover it. 00:34:06.363 [2024-07-13 08:20:57.804914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.363 [2024-07-13 08:20:57.804954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:06.363 qpair failed and we were unable to recover it. 00:34:06.363 [2024-07-13 08:20:57.805127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.363 [2024-07-13 08:20:57.805166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:06.363 qpair failed and we were unable to recover it. 00:34:06.363 [2024-07-13 08:20:57.805344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.363 [2024-07-13 08:20:57.805371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:06.363 qpair failed and we were unable to recover it. 00:34:06.363 [2024-07-13 08:20:57.805530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.363 [2024-07-13 08:20:57.805557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:06.363 qpair failed and we were unable to recover it. 00:34:06.363 [2024-07-13 08:20:57.805678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.363 [2024-07-13 08:20:57.805707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:06.363 qpair failed and we were unable to recover it. 00:34:06.363 [2024-07-13 08:20:57.805872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.363 [2024-07-13 08:20:57.805905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:06.363 qpair failed and we were unable to recover it. 00:34:06.363 [2024-07-13 08:20:57.806030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.363 [2024-07-13 08:20:57.806058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.363 qpair failed and we were unable to recover it. 00:34:06.363 [2024-07-13 08:20:57.806221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.363 [2024-07-13 08:20:57.806248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.363 qpair failed and we were unable to recover it. 00:34:06.363 [2024-07-13 08:20:57.806369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.363 [2024-07-13 08:20:57.806396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.363 qpair failed and we were unable to recover it. 
00:34:06.363 [2024-07-13 08:20:57.806517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.363 [2024-07-13 08:20:57.806544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.363 qpair failed and we were unable to recover it. 00:34:06.363 [2024-07-13 08:20:57.806754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.363 [2024-07-13 08:20:57.806783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.363 qpair failed and we were unable to recover it. 00:34:06.363 [2024-07-13 08:20:57.806942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.363 [2024-07-13 08:20:57.806971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.363 qpair failed and we were unable to recover it. 00:34:06.363 [2024-07-13 08:20:57.807101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.363 [2024-07-13 08:20:57.807128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.363 qpair failed and we were unable to recover it. 00:34:06.363 [2024-07-13 08:20:57.807283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.363 [2024-07-13 08:20:57.807310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.363 qpair failed and we were unable to recover it. 00:34:06.363 [2024-07-13 08:20:57.807515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.363 [2024-07-13 08:20:57.807542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.363 qpair failed and we were unable to recover it. 00:34:06.363 [2024-07-13 08:20:57.807727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.363 [2024-07-13 08:20:57.807754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.363 qpair failed and we were unable to recover it. 00:34:06.363 [2024-07-13 08:20:57.807922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.363 [2024-07-13 08:20:57.807949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.363 qpair failed and we were unable to recover it. 00:34:06.363 [2024-07-13 08:20:57.808099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.363 [2024-07-13 08:20:57.808137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.363 qpair failed and we were unable to recover it. 00:34:06.363 [2024-07-13 08:20:57.808286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.363 [2024-07-13 08:20:57.808314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.363 qpair failed and we were unable to recover it. 
00:34:06.363 [2024-07-13 08:20:57.808463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.363 [2024-07-13 08:20:57.808490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.363 qpair failed and we were unable to recover it. 00:34:06.363 [2024-07-13 08:20:57.808605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.363 [2024-07-13 08:20:57.808632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.363 qpair failed and we were unable to recover it. 00:34:06.363 [2024-07-13 08:20:57.808750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.363 [2024-07-13 08:20:57.808778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.363 qpair failed and we were unable to recover it. 00:34:06.363 [2024-07-13 08:20:57.808915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.363 [2024-07-13 08:20:57.808942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.363 qpair failed and we were unable to recover it. 00:34:06.363 [2024-07-13 08:20:57.809070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.363 [2024-07-13 08:20:57.809097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.363 qpair failed and we were unable to recover it. 00:34:06.363 [2024-07-13 08:20:57.809253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.363 [2024-07-13 08:20:57.809281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.363 qpair failed and we were unable to recover it. 00:34:06.363 [2024-07-13 08:20:57.809407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.363 [2024-07-13 08:20:57.809443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.363 qpair failed and we were unable to recover it. 00:34:06.363 [2024-07-13 08:20:57.809602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.363 [2024-07-13 08:20:57.809629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.363 qpair failed and we were unable to recover it. 00:34:06.363 [2024-07-13 08:20:57.809752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.363 [2024-07-13 08:20:57.809780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.363 qpair failed and we were unable to recover it. 00:34:06.363 [2024-07-13 08:20:57.809966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.363 [2024-07-13 08:20:57.809994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.363 qpair failed and we were unable to recover it. 
00:34:06.363 [2024-07-13 08:20:57.810131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.363 [2024-07-13 08:20:57.810166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.363 qpair failed and we were unable to recover it. 00:34:06.363 [2024-07-13 08:20:57.810324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.363 [2024-07-13 08:20:57.810351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.363 qpair failed and we were unable to recover it. 00:34:06.363 [2024-07-13 08:20:57.810473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.363 [2024-07-13 08:20:57.810500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.363 qpair failed and we were unable to recover it. 00:34:06.363 [2024-07-13 08:20:57.810633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.363 [2024-07-13 08:20:57.810662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.363 qpair failed and we were unable to recover it. 00:34:06.363 [2024-07-13 08:20:57.810835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.364 [2024-07-13 08:20:57.810862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.364 qpair failed and we were unable to recover it. 00:34:06.364 [2024-07-13 08:20:57.811033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.364 [2024-07-13 08:20:57.811060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.364 qpair failed and we were unable to recover it. 00:34:06.364 [2024-07-13 08:20:57.811190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.364 [2024-07-13 08:20:57.811218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.364 qpair failed and we were unable to recover it. 00:34:06.364 [2024-07-13 08:20:57.811337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.364 [2024-07-13 08:20:57.811364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.364 qpair failed and we were unable to recover it. 00:34:06.364 [2024-07-13 08:20:57.811513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.364 [2024-07-13 08:20:57.811540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.364 qpair failed and we were unable to recover it. 00:34:06.364 [2024-07-13 08:20:57.811702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.364 [2024-07-13 08:20:57.811732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.364 qpair failed and we were unable to recover it. 
00:34:06.364 [2024-07-13 08:20:57.811888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.364 [2024-07-13 08:20:57.811926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.364 qpair failed and we were unable to recover it. 00:34:06.364 [2024-07-13 08:20:57.812079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.364 [2024-07-13 08:20:57.812106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.364 qpair failed and we were unable to recover it. 00:34:06.364 [2024-07-13 08:20:57.812300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.364 [2024-07-13 08:20:57.812327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.364 qpair failed and we were unable to recover it. 00:34:06.364 [2024-07-13 08:20:57.812479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.364 [2024-07-13 08:20:57.812506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.364 qpair failed and we were unable to recover it. 00:34:06.364 [2024-07-13 08:20:57.812625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.364 [2024-07-13 08:20:57.812652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.364 qpair failed and we were unable to recover it. 00:34:06.364 [2024-07-13 08:20:57.812807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.364 [2024-07-13 08:20:57.812834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.364 qpair failed and we were unable to recover it. 00:34:06.364 [2024-07-13 08:20:57.812974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.364 [2024-07-13 08:20:57.813007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.364 qpair failed and we were unable to recover it. 00:34:06.364 [2024-07-13 08:20:57.813170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.364 [2024-07-13 08:20:57.813197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.364 qpair failed and we were unable to recover it. 00:34:06.364 [2024-07-13 08:20:57.813319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.364 [2024-07-13 08:20:57.813346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.364 qpair failed and we were unable to recover it. 00:34:06.364 [2024-07-13 08:20:57.813463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.364 [2024-07-13 08:20:57.813490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.364 qpair failed and we were unable to recover it. 
00:34:06.364 [2024-07-13 08:20:57.813638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.364 [2024-07-13 08:20:57.813665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.364 qpair failed and we were unable to recover it. 00:34:06.364 [2024-07-13 08:20:57.813808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.364 [2024-07-13 08:20:57.813837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.364 qpair failed and we were unable to recover it. 00:34:06.364 [2024-07-13 08:20:57.814008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.364 [2024-07-13 08:20:57.814034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.364 qpair failed and we were unable to recover it. 00:34:06.364 [2024-07-13 08:20:57.814156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.364 [2024-07-13 08:20:57.814183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.364 qpair failed and we were unable to recover it. 00:34:06.364 [2024-07-13 08:20:57.814330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.364 [2024-07-13 08:20:57.814358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.364 qpair failed and we were unable to recover it. 00:34:06.364 [2024-07-13 08:20:57.814508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.364 [2024-07-13 08:20:57.814535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.364 qpair failed and we were unable to recover it. 00:34:06.364 [2024-07-13 08:20:57.814669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.364 [2024-07-13 08:20:57.814696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.364 qpair failed and we were unable to recover it. 00:34:06.364 [2024-07-13 08:20:57.814813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.364 [2024-07-13 08:20:57.814840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.364 qpair failed and we were unable to recover it. 00:34:06.364 [2024-07-13 08:20:57.815000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.364 [2024-07-13 08:20:57.815026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.364 qpair failed and we were unable to recover it. 00:34:06.364 [2024-07-13 08:20:57.815155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.364 [2024-07-13 08:20:57.815182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.364 qpair failed and we were unable to recover it. 
00:34:06.364 [2024-07-13 08:20:57.815310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.364 [2024-07-13 08:20:57.815338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.364 qpair failed and we were unable to recover it. 00:34:06.364 [2024-07-13 08:20:57.815452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.364 [2024-07-13 08:20:57.815479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.364 qpair failed and we were unable to recover it. 00:34:06.364 [2024-07-13 08:20:57.815613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.364 [2024-07-13 08:20:57.815640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.364 qpair failed and we were unable to recover it. 00:34:06.364 [2024-07-13 08:20:57.815769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.364 [2024-07-13 08:20:57.815795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.364 qpair failed and we were unable to recover it. 00:34:06.364 [2024-07-13 08:20:57.815967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.364 [2024-07-13 08:20:57.815994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.364 qpair failed and we were unable to recover it. 00:34:06.364 [2024-07-13 08:20:57.816153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.364 [2024-07-13 08:20:57.816180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.364 qpair failed and we were unable to recover it. 00:34:06.364 [2024-07-13 08:20:57.816307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.364 [2024-07-13 08:20:57.816334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.364 qpair failed and we were unable to recover it. 00:34:06.364 [2024-07-13 08:20:57.816514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.364 [2024-07-13 08:20:57.816542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.364 qpair failed and we were unable to recover it. 00:34:06.364 [2024-07-13 08:20:57.816662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.364 [2024-07-13 08:20:57.816689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.364 qpair failed and we were unable to recover it. 00:34:06.364 [2024-07-13 08:20:57.816805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.364 [2024-07-13 08:20:57.816832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.364 qpair failed and we were unable to recover it. 
00:34:06.364 [2024-07-13 08:20:57.816967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.364 [2024-07-13 08:20:57.816993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.364 qpair failed and we were unable to recover it. 00:34:06.364 [2024-07-13 08:20:57.817110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.364 [2024-07-13 08:20:57.817137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.364 qpair failed and we were unable to recover it. 00:34:06.364 [2024-07-13 08:20:57.817286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.364 [2024-07-13 08:20:57.817313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.364 qpair failed and we were unable to recover it. 00:34:06.364 [2024-07-13 08:20:57.817442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.364 [2024-07-13 08:20:57.817473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.364 qpair failed and we were unable to recover it. 00:34:06.364 [2024-07-13 08:20:57.817591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.364 [2024-07-13 08:20:57.817618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.364 qpair failed and we were unable to recover it. 00:34:06.364 [2024-07-13 08:20:57.817775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.365 [2024-07-13 08:20:57.817805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.365 qpair failed and we were unable to recover it. 00:34:06.365 [2024-07-13 08:20:57.817980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.365 [2024-07-13 08:20:57.818022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:06.365 qpair failed and we were unable to recover it. 00:34:06.365 [2024-07-13 08:20:57.818210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.365 [2024-07-13 08:20:57.818239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:06.365 qpair failed and we were unable to recover it. 00:34:06.365 [2024-07-13 08:20:57.818365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.365 [2024-07-13 08:20:57.818393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:06.365 qpair failed and we were unable to recover it. 00:34:06.365 [2024-07-13 08:20:57.818540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.365 [2024-07-13 08:20:57.818568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:06.365 qpair failed and we were unable to recover it. 
00:34:06.365 [2024-07-13 08:20:57.818699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.365 [2024-07-13 08:20:57.818742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.365 qpair failed and we were unable to recover it. 00:34:06.365 [2024-07-13 08:20:57.818889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.365 [2024-07-13 08:20:57.818925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.365 qpair failed and we were unable to recover it. 00:34:06.365 [2024-07-13 08:20:57.819059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.365 [2024-07-13 08:20:57.819086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.365 qpair failed and we were unable to recover it. 00:34:06.365 [2024-07-13 08:20:57.819235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.365 [2024-07-13 08:20:57.819261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.365 qpair failed and we were unable to recover it. 00:34:06.365 [2024-07-13 08:20:57.819410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.365 [2024-07-13 08:20:57.819437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.365 qpair failed and we were unable to recover it. 00:34:06.365 [2024-07-13 08:20:57.819567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.365 [2024-07-13 08:20:57.819594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.365 qpair failed and we were unable to recover it. 00:34:06.365 [2024-07-13 08:20:57.819732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.365 [2024-07-13 08:20:57.819760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:06.365 qpair failed and we were unable to recover it. 00:34:06.365 [2024-07-13 08:20:57.819921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.365 [2024-07-13 08:20:57.819961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.365 qpair failed and we were unable to recover it. 00:34:06.365 [2024-07-13 08:20:57.820117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.365 [2024-07-13 08:20:57.820157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.365 qpair failed and we were unable to recover it. 00:34:06.365 [2024-07-13 08:20:57.820316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.365 [2024-07-13 08:20:57.820343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.365 qpair failed and we were unable to recover it. 
00:34:06.365 [2024-07-13 08:20:57.820511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.365 [2024-07-13 08:20:57.820537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.365 qpair failed and we were unable to recover it. 00:34:06.365 [2024-07-13 08:20:57.820661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.365 [2024-07-13 08:20:57.820688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.365 qpair failed and we were unable to recover it. 00:34:06.365 [2024-07-13 08:20:57.820839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.365 [2024-07-13 08:20:57.820874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.365 qpair failed and we were unable to recover it. 00:34:06.365 [2024-07-13 08:20:57.821025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.365 [2024-07-13 08:20:57.821051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.365 qpair failed and we were unable to recover it. 00:34:06.365 [2024-07-13 08:20:57.821175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.365 [2024-07-13 08:20:57.821202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.365 qpair failed and we were unable to recover it. 00:34:06.365 [2024-07-13 08:20:57.821320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.365 [2024-07-13 08:20:57.821346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.365 qpair failed and we were unable to recover it. 00:34:06.365 [2024-07-13 08:20:57.821464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.365 [2024-07-13 08:20:57.821491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.365 qpair failed and we were unable to recover it. 00:34:06.365 [2024-07-13 08:20:57.821624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.365 [2024-07-13 08:20:57.821651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.365 qpair failed and we were unable to recover it. 00:34:06.365 [2024-07-13 08:20:57.821803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.365 [2024-07-13 08:20:57.821829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.365 qpair failed and we were unable to recover it. 00:34:06.365 [2024-07-13 08:20:57.821971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.365 [2024-07-13 08:20:57.821998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.365 qpair failed and we were unable to recover it. 
00:34:06.365 [2024-07-13 08:20:57.822118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.365 [2024-07-13 08:20:57.822150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.365 qpair failed and we were unable to recover it. 00:34:06.365 [2024-07-13 08:20:57.822365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.365 [2024-07-13 08:20:57.822391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.365 qpair failed and we were unable to recover it. 00:34:06.365 [2024-07-13 08:20:57.822520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.365 [2024-07-13 08:20:57.822546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.365 qpair failed and we were unable to recover it. 00:34:06.365 [2024-07-13 08:20:57.822671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.365 [2024-07-13 08:20:57.822697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.365 qpair failed and we were unable to recover it. 00:34:06.365 [2024-07-13 08:20:57.822822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.365 [2024-07-13 08:20:57.822851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.365 qpair failed and we were unable to recover it. 00:34:06.365 [2024-07-13 08:20:57.822997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.365 [2024-07-13 08:20:57.823023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.365 qpair failed and we were unable to recover it. 00:34:06.365 [2024-07-13 08:20:57.823150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.365 [2024-07-13 08:20:57.823177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.365 qpair failed and we were unable to recover it. 00:34:06.365 [2024-07-13 08:20:57.823324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.365 [2024-07-13 08:20:57.823350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.365 qpair failed and we were unable to recover it. 00:34:06.365 [2024-07-13 08:20:57.823473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.365 [2024-07-13 08:20:57.823501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.365 qpair failed and we were unable to recover it. 00:34:06.365 [2024-07-13 08:20:57.823632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.365 [2024-07-13 08:20:57.823659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.365 qpair failed and we were unable to recover it. 
00:34:06.365 [2024-07-13 08:20:57.823834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:06.365 [2024-07-13 08:20:57.823860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420
00:34:06.366 qpair failed and we were unable to recover it.
00:34:06.366 [2024-07-13 08:20:57.824008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:06.366 [2024-07-13 08:20:57.824035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420
00:34:06.366 qpair failed and we were unable to recover it.
00:34:06.366 [2024-07-13 08:20:57.824211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:06.366 [2024-07-13 08:20:57.824237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420
00:34:06.366 qpair failed and we were unable to recover it.
00:34:06.366 [2024-07-13 08:20:57.824356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:06.366 [2024-07-13 08:20:57.824383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420
00:34:06.366 qpair failed and we were unable to recover it.
00:34:06.366 [2024-07-13 08:20:57.824532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:06.366 [2024-07-13 08:20:57.824559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420
00:34:06.366 qpair failed and we were unable to recover it.
00:34:06.366 [2024-07-13 08:20:57.824673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:06.366 [2024-07-13 08:20:57.824700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420
00:34:06.366 qpair failed and we were unable to recover it.
00:34:06.366 [2024-07-13 08:20:57.824875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:06.366 [2024-07-13 08:20:57.824927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420
00:34:06.366 qpair failed and we were unable to recover it.
00:34:06.366 [2024-07-13 08:20:57.825108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:06.366 [2024-07-13 08:20:57.825148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420
00:34:06.366 qpair failed and we were unable to recover it.
00:34:06.366 [2024-07-13 08:20:57.825315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:06.366 [2024-07-13 08:20:57.825345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420
00:34:06.366 qpair failed and we were unable to recover it.
00:34:06.366 [2024-07-13 08:20:57.825500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:06.366 [2024-07-13 08:20:57.825528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420
00:34:06.366 qpair failed and we were unable to recover it.
00:34:06.367 [2024-07-13 08:20:57.832174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:06.367 [2024-07-13 08:20:57.832202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420
00:34:06.367 qpair failed and we were unable to recover it.
00:34:06.367 [2024-07-13 08:20:57.832324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:06.367 [2024-07-13 08:20:57.832353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420
00:34:06.367 qpair failed and we were unable to recover it.
00:34:06.367 [2024-07-13 08:20:57.832495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:06.367 [2024-07-13 08:20:57.832523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420
00:34:06.367 qpair failed and we were unable to recover it.
00:34:06.367 [2024-07-13 08:20:57.832712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:06.367 [2024-07-13 08:20:57.832753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420
00:34:06.367 qpair failed and we were unable to recover it.
00:34:06.367 [2024-07-13 08:20:57.832881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:06.367 [2024-07-13 08:20:57.832910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420
00:34:06.367 qpair failed and we were unable to recover it.
00:34:06.367 [2024-07-13 08:20:57.833031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:06.367 [2024-07-13 08:20:57.833058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420
00:34:06.367 qpair failed and we were unable to recover it.
00:34:06.367 [2024-07-13 08:20:57.833178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:06.367 [2024-07-13 08:20:57.833204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420
00:34:06.367 qpair failed and we were unable to recover it.
00:34:06.367 [2024-07-13 08:20:57.833330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:06.367 [2024-07-13 08:20:57.833357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420
00:34:06.367 qpair failed and we were unable to recover it.
00:34:06.367 [2024-07-13 08:20:57.833507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:06.367 [2024-07-13 08:20:57.833534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420
00:34:06.367 qpair failed and we were unable to recover it.
00:34:06.367 [2024-07-13 08:20:57.833674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:06.367 [2024-07-13 08:20:57.833700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420
00:34:06.367 qpair failed and we were unable to recover it.
00:34:06.370 [2024-07-13 08:20:57.855995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.370 [2024-07-13 08:20:57.856023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.370 qpair failed and we were unable to recover it. 00:34:06.370 [2024-07-13 08:20:57.856182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.370 [2024-07-13 08:20:57.856209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.370 qpair failed and we were unable to recover it. 00:34:06.370 [2024-07-13 08:20:57.856362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.370 [2024-07-13 08:20:57.856390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.370 qpair failed and we were unable to recover it. 00:34:06.370 [2024-07-13 08:20:57.856523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.370 [2024-07-13 08:20:57.856553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.370 qpair failed and we were unable to recover it. 00:34:06.370 [2024-07-13 08:20:57.856704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.370 [2024-07-13 08:20:57.856733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.370 qpair failed and we were unable to recover it. 00:34:06.370 [2024-07-13 08:20:57.856863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.370 [2024-07-13 08:20:57.856897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.370 qpair failed and we were unable to recover it. 00:34:06.370 [2024-07-13 08:20:57.857024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.370 [2024-07-13 08:20:57.857051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.370 qpair failed and we were unable to recover it. 00:34:06.370 [2024-07-13 08:20:57.857171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.370 [2024-07-13 08:20:57.857198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.370 qpair failed and we were unable to recover it. 00:34:06.370 [2024-07-13 08:20:57.857325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.370 [2024-07-13 08:20:57.857352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.370 qpair failed and we were unable to recover it. 00:34:06.370 [2024-07-13 08:20:57.857502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.371 [2024-07-13 08:20:57.857528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.371 qpair failed and we were unable to recover it. 
00:34:06.371 [2024-07-13 08:20:57.857659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.371 [2024-07-13 08:20:57.857686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.371 qpair failed and we were unable to recover it. 00:34:06.371 [2024-07-13 08:20:57.857847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.371 [2024-07-13 08:20:57.857895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:06.371 qpair failed and we were unable to recover it. 00:34:06.371 [2024-07-13 08:20:57.858023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.371 [2024-07-13 08:20:57.858053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:06.371 qpair failed and we were unable to recover it. 00:34:06.371 [2024-07-13 08:20:57.858196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.371 [2024-07-13 08:20:57.858224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:06.371 qpair failed and we were unable to recover it. 00:34:06.371 [2024-07-13 08:20:57.858338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.371 [2024-07-13 08:20:57.858365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:06.371 qpair failed and we were unable to recover it. 00:34:06.371 [2024-07-13 08:20:57.858527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.371 [2024-07-13 08:20:57.858556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.371 qpair failed and we were unable to recover it. 00:34:06.371 [2024-07-13 08:20:57.858683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.371 [2024-07-13 08:20:57.858719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.371 qpair failed and we were unable to recover it. 00:34:06.371 [2024-07-13 08:20:57.858881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.371 [2024-07-13 08:20:57.858912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.371 qpair failed and we were unable to recover it. 00:34:06.371 [2024-07-13 08:20:57.859040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.371 [2024-07-13 08:20:57.859067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.371 qpair failed and we were unable to recover it. 00:34:06.371 [2024-07-13 08:20:57.859187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.371 [2024-07-13 08:20:57.859213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.371 qpair failed and we were unable to recover it. 
00:34:06.371 [2024-07-13 08:20:57.859388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.371 [2024-07-13 08:20:57.859415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.371 qpair failed and we were unable to recover it. 00:34:06.371 [2024-07-13 08:20:57.859532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.371 [2024-07-13 08:20:57.859558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.371 qpair failed and we were unable to recover it. 00:34:06.371 [2024-07-13 08:20:57.859707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.371 [2024-07-13 08:20:57.859732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.371 qpair failed and we were unable to recover it. 00:34:06.371 [2024-07-13 08:20:57.859858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.371 [2024-07-13 08:20:57.859895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.371 qpair failed and we were unable to recover it. 00:34:06.371 [2024-07-13 08:20:57.860024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.371 [2024-07-13 08:20:57.860055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:06.371 qpair failed and we were unable to recover it. 00:34:06.371 [2024-07-13 08:20:57.860181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.371 [2024-07-13 08:20:57.860209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:06.371 qpair failed and we were unable to recover it. 00:34:06.371 [2024-07-13 08:20:57.860358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.371 [2024-07-13 08:20:57.860385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:06.371 qpair failed and we were unable to recover it. 00:34:06.371 [2024-07-13 08:20:57.860516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.371 [2024-07-13 08:20:57.860544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:06.371 qpair failed and we were unable to recover it. 00:34:06.371 [2024-07-13 08:20:57.860686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.371 [2024-07-13 08:20:57.860727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.371 qpair failed and we were unable to recover it. 00:34:06.371 [2024-07-13 08:20:57.860883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.371 [2024-07-13 08:20:57.860926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.371 qpair failed and we were unable to recover it. 
00:34:06.371 [2024-07-13 08:20:57.861063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.371 [2024-07-13 08:20:57.861092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.371 qpair failed and we were unable to recover it. 00:34:06.371 [2024-07-13 08:20:57.861247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.371 [2024-07-13 08:20:57.861274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.371 qpair failed and we were unable to recover it. 00:34:06.371 [2024-07-13 08:20:57.861387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.371 [2024-07-13 08:20:57.861414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.371 qpair failed and we were unable to recover it. 00:34:06.371 [2024-07-13 08:20:57.861534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.371 [2024-07-13 08:20:57.861561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.371 qpair failed and we were unable to recover it. 00:34:06.371 [2024-07-13 08:20:57.861676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.371 [2024-07-13 08:20:57.861705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:06.371 qpair failed and we were unable to recover it. 00:34:06.371 [2024-07-13 08:20:57.861879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.371 [2024-07-13 08:20:57.861921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.371 qpair failed and we were unable to recover it. 00:34:06.371 [2024-07-13 08:20:57.862062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.371 [2024-07-13 08:20:57.862103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.371 qpair failed and we were unable to recover it. 00:34:06.371 [2024-07-13 08:20:57.862265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.371 [2024-07-13 08:20:57.862293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.371 qpair failed and we were unable to recover it. 00:34:06.371 [2024-07-13 08:20:57.862433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.371 [2024-07-13 08:20:57.862460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.371 qpair failed and we were unable to recover it. 00:34:06.371 [2024-07-13 08:20:57.862609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.371 [2024-07-13 08:20:57.862636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.371 qpair failed and we were unable to recover it. 
00:34:06.371 [2024-07-13 08:20:57.862813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.371 [2024-07-13 08:20:57.862855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.371 qpair failed and we were unable to recover it. 00:34:06.371 [2024-07-13 08:20:57.863007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.371 [2024-07-13 08:20:57.863048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:06.371 qpair failed and we were unable to recover it. 00:34:06.371 [2024-07-13 08:20:57.863204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.371 [2024-07-13 08:20:57.863233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:06.371 qpair failed and we were unable to recover it. 00:34:06.371 [2024-07-13 08:20:57.863407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.371 [2024-07-13 08:20:57.863435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:06.371 qpair failed and we were unable to recover it. 00:34:06.371 [2024-07-13 08:20:57.863588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.371 [2024-07-13 08:20:57.863616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:06.371 qpair failed and we were unable to recover it. 00:34:06.371 [2024-07-13 08:20:57.863734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.371 [2024-07-13 08:20:57.863762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fdc000b90 with addr=10.0.0.2, port=4420 00:34:06.371 qpair failed and we were unable to recover it. 00:34:06.371 [2024-07-13 08:20:57.863930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.371 [2024-07-13 08:20:57.863972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.371 qpair failed and we were unable to recover it. 00:34:06.371 [2024-07-13 08:20:57.864105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.371 [2024-07-13 08:20:57.864133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.371 qpair failed and we were unable to recover it. 00:34:06.371 [2024-07-13 08:20:57.864270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.371 [2024-07-13 08:20:57.864298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.371 qpair failed and we were unable to recover it. 00:34:06.371 [2024-07-13 08:20:57.864436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.371 [2024-07-13 08:20:57.864463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.371 qpair failed and we were unable to recover it. 
00:34:06.371 [2024-07-13 08:20:57.864640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.372 [2024-07-13 08:20:57.864666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.372 qpair failed and we were unable to recover it. 00:34:06.372 [2024-07-13 08:20:57.864815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.372 [2024-07-13 08:20:57.864841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.372 qpair failed and we were unable to recover it. 00:34:06.372 [2024-07-13 08:20:57.864969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.372 [2024-07-13 08:20:57.864998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.372 qpair failed and we were unable to recover it. 00:34:06.372 [2024-07-13 08:20:57.865169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.372 [2024-07-13 08:20:57.865195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.372 qpair failed and we were unable to recover it. 00:34:06.372 [2024-07-13 08:20:57.865318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.372 [2024-07-13 08:20:57.865345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.372 qpair failed and we were unable to recover it. 00:34:06.372 [2024-07-13 08:20:57.865472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.372 [2024-07-13 08:20:57.865499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.372 qpair failed and we were unable to recover it. 00:34:06.372 [2024-07-13 08:20:57.865624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.372 [2024-07-13 08:20:57.865650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.372 qpair failed and we were unable to recover it. 00:34:06.372 [2024-07-13 08:20:57.865809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.372 [2024-07-13 08:20:57.865836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.372 qpair failed and we were unable to recover it. 00:34:06.372 [2024-07-13 08:20:57.865974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.372 [2024-07-13 08:20:57.866002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.372 qpair failed and we were unable to recover it. 00:34:06.372 [2024-07-13 08:20:57.866140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.372 [2024-07-13 08:20:57.866167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.372 qpair failed and we were unable to recover it. 
00:34:06.372 [2024-07-13 08:20:57.866330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.372 [2024-07-13 08:20:57.866356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.372 qpair failed and we were unable to recover it. 00:34:06.372 [2024-07-13 08:20:57.866477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.372 [2024-07-13 08:20:57.866503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.372 qpair failed and we were unable to recover it. 00:34:06.372 [2024-07-13 08:20:57.866666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.372 [2024-07-13 08:20:57.866692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.372 qpair failed and we were unable to recover it. 00:34:06.372 [2024-07-13 08:20:57.866817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.372 [2024-07-13 08:20:57.866843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.372 qpair failed and we were unable to recover it. 00:34:06.372 [2024-07-13 08:20:57.866995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.372 [2024-07-13 08:20:57.867022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.372 qpair failed and we were unable to recover it. 00:34:06.372 [2024-07-13 08:20:57.867143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.372 [2024-07-13 08:20:57.867170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.372 qpair failed and we were unable to recover it. 00:34:06.372 [2024-07-13 08:20:57.867307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.372 [2024-07-13 08:20:57.867334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.372 qpair failed and we were unable to recover it. 00:34:06.372 [2024-07-13 08:20:57.867450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.372 [2024-07-13 08:20:57.867476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.372 qpair failed and we were unable to recover it. 00:34:06.372 [2024-07-13 08:20:57.867627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.372 [2024-07-13 08:20:57.867654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.372 qpair failed and we were unable to recover it. 00:34:06.372 [2024-07-13 08:20:57.867779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.372 [2024-07-13 08:20:57.867820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.372 qpair failed and we were unable to recover it. 
00:34:06.372 [2024-07-13 08:20:57.868032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.372 [2024-07-13 08:20:57.868067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.372 qpair failed and we were unable to recover it. 00:34:06.372 [2024-07-13 08:20:57.868224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.372 [2024-07-13 08:20:57.868253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.372 qpair failed and we were unable to recover it. 00:34:06.372 [2024-07-13 08:20:57.868381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.372 [2024-07-13 08:20:57.868409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.372 qpair failed and we were unable to recover it. 00:34:06.372 [2024-07-13 08:20:57.868548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.372 [2024-07-13 08:20:57.868576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.372 qpair failed and we were unable to recover it. 00:34:06.372 [2024-07-13 08:20:57.868711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.372 [2024-07-13 08:20:57.868739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.372 qpair failed and we were unable to recover it. 00:34:06.372 [2024-07-13 08:20:57.868887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.372 [2024-07-13 08:20:57.868916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.372 qpair failed and we were unable to recover it. 00:34:06.372 [2024-07-13 08:20:57.869066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.372 [2024-07-13 08:20:57.869093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.372 qpair failed and we were unable to recover it. 00:34:06.372 [2024-07-13 08:20:57.869242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.372 [2024-07-13 08:20:57.869270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.372 qpair failed and we were unable to recover it. 00:34:06.372 [2024-07-13 08:20:57.869440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.372 [2024-07-13 08:20:57.869467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.372 qpair failed and we were unable to recover it. 00:34:06.372 [2024-07-13 08:20:57.869589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.372 [2024-07-13 08:20:57.869616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.372 qpair failed and we were unable to recover it. 
00:34:06.372 [2024-07-13 08:20:57.869770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.372 [2024-07-13 08:20:57.869797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.372 qpair failed and we were unable to recover it. 00:34:06.372 [2024-07-13 08:20:57.869917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.372 [2024-07-13 08:20:57.869945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.372 qpair failed and we were unable to recover it. 00:34:06.372 [2024-07-13 08:20:57.870094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.372 [2024-07-13 08:20:57.870121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.372 qpair failed and we were unable to recover it. 00:34:06.372 [2024-07-13 08:20:57.870252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.372 [2024-07-13 08:20:57.870279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.372 qpair failed and we were unable to recover it. 00:34:06.372 [2024-07-13 08:20:57.870425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.372 [2024-07-13 08:20:57.870454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.372 qpair failed and we were unable to recover it. 00:34:06.372 [2024-07-13 08:20:57.870581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.372 [2024-07-13 08:20:57.870607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.372 qpair failed and we were unable to recover it. 00:34:06.372 [2024-07-13 08:20:57.870759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.372 [2024-07-13 08:20:57.870785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.372 qpair failed and we were unable to recover it. 00:34:06.372 [2024-07-13 08:20:57.870918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.372 [2024-07-13 08:20:57.870945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.372 qpair failed and we were unable to recover it. 00:34:06.372 [2024-07-13 08:20:57.871118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.372 [2024-07-13 08:20:57.871145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.372 qpair failed and we were unable to recover it. 00:34:06.372 [2024-07-13 08:20:57.871291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.372 [2024-07-13 08:20:57.871318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xacd600 with addr=10.0.0.2, port=4420 00:34:06.372 qpair failed and we were unable to recover it. 
00:34:06.372 [2024-07-13 08:20:57.871450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.372 [2024-07-13 08:20:57.871478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.372 qpair failed and we were unable to recover it. 00:34:06.372 [2024-07-13 08:20:57.871615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.373 [2024-07-13 08:20:57.871643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.373 qpair failed and we were unable to recover it. 00:34:06.373 [2024-07-13 08:20:57.871774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.373 [2024-07-13 08:20:57.871802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.373 qpair failed and we were unable to recover it. 00:34:06.373 [2024-07-13 08:20:57.872036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.373 [2024-07-13 08:20:57.872064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.373 qpair failed and we were unable to recover it. 00:34:06.373 [2024-07-13 08:20:57.872214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.373 [2024-07-13 08:20:57.872242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.373 qpair failed and we were unable to recover it. 00:34:06.373 [2024-07-13 08:20:57.872369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.373 [2024-07-13 08:20:57.872396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.373 qpair failed and we were unable to recover it. 00:34:06.373 [2024-07-13 08:20:57.872545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.373 [2024-07-13 08:20:57.872572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.373 qpair failed and we were unable to recover it. 00:34:06.373 [2024-07-13 08:20:57.872703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.373 [2024-07-13 08:20:57.872731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fe4000b90 with addr=10.0.0.2, port=4420 00:34:06.373 qpair failed and we were unable to recover it. 00:34:06.373 [2024-07-13 08:20:57.872905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.373 [2024-07-13 08:20:57.872946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.373 qpair failed and we were unable to recover it. 00:34:06.373 [2024-07-13 08:20:57.873071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.373 [2024-07-13 08:20:57.873100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.373 qpair failed and we were unable to recover it. 
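errno = 111 on Linux is ECONNREFUSED: the initiator's connect() to 10.0.0.2:4420 is answered with a reset because nothing is accepting on that port, presumably while the target side of the disconnect test is down, so the NVMe/TCP host keeps retrying each qpair. A minimal bash sketch (illustrative only, using bash's built-in /dev/tcp device, not anything from the test suite) that would observe the same refusal from the shell:

    # Probe the address/port the initiator is retrying; assumes bash and
    # coreutils' timeout are available on the test host.
    if ! timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
        echo 'connect to 10.0.0.2:4420 failed (refused or timed out, cf. errno = 111)'
    fi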
00:34:06.373 [... the connect() failed (errno = 111) / sock connection error records continue from 08:20:57.873 through 08:20:57.876 for tqpair=0x7f8fec000b90, 0x7f8fdc000b90, 0x7f8fe4000b90, and 0xacd600, interleaved with the following shell trace from the test script ...]
00:34:06.373 08:20:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:34:06.373 08:20:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0
00:34:06.373 08:20:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:34:06.373 08:20:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable
00:34:06.373 08:20:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
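The interleaved "script@line -- # command" lines above are bash xtrace output from the test harness. A hedged sketch of how such prefixes can be produced (the exact PS4 string the harness uses is an assumption; expanding BASH_SOURCE and LINENO in PS4 is standard bash):

    #!/usr/bin/env bash
    # When run as a script, each traced command is prefixed with time,
    # file name, and line number, much like the harness output above.
    export PS4='$(date +%T) ${BASH_SOURCE##*/}@${LINENO} -- # '
    set -x
    (( i == 0 ))   # traced roughly as: 08:20:57 demo.sh@6 -- # (( i == 0 ))
    set +x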
00:34:06.373 [... the same retry pattern continues from 08:20:57.876 through 08:20:57.884, cycling across tqpair=0xacd600, 0x7f8fdc000b90, 0x7f8fe4000b90, and 0x7f8fec000b90, until the last recorded attempt: ...]
00:34:06.375 [2024-07-13 08:20:57.884700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:06.375 [2024-07-13 08:20:57.884728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420
00:34:06.375 qpair failed and we were unable to recover it.
00:34:06.375 [2024-07-13 08:20:57.884850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.375 [2024-07-13 08:20:57.884887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.375 qpair failed and we were unable to recover it. 00:34:06.375 [2024-07-13 08:20:57.885047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.375 [2024-07-13 08:20:57.885074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.375 qpair failed and we were unable to recover it. 00:34:06.375 [2024-07-13 08:20:57.885237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.375 [2024-07-13 08:20:57.885263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.375 qpair failed and we were unable to recover it. 00:34:06.375 [2024-07-13 08:20:57.885416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.375 [2024-07-13 08:20:57.885443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.375 qpair failed and we were unable to recover it. 00:34:06.375 [2024-07-13 08:20:57.885591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.375 [2024-07-13 08:20:57.885618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.375 qpair failed and we were unable to recover it. 00:34:06.375 [2024-07-13 08:20:57.885746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.375 [2024-07-13 08:20:57.885773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.375 qpair failed and we were unable to recover it. 00:34:06.375 [2024-07-13 08:20:57.885930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.375 [2024-07-13 08:20:57.885956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.375 qpair failed and we were unable to recover it. 00:34:06.375 [2024-07-13 08:20:57.886089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.375 [2024-07-13 08:20:57.886131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.375 qpair failed and we were unable to recover it. 00:34:06.375 [2024-07-13 08:20:57.886262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.375 [2024-07-13 08:20:57.886297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.375 qpair failed and we were unable to recover it. 00:34:06.375 [2024-07-13 08:20:57.886438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.375 [2024-07-13 08:20:57.886465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.375 qpair failed and we were unable to recover it. 
00:34:06.375 [2024-07-13 08:20:57.886614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.375 [2024-07-13 08:20:57.886640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.375 qpair failed and we were unable to recover it. 00:34:06.375 [2024-07-13 08:20:57.886771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.375 [2024-07-13 08:20:57.886797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.375 qpair failed and we were unable to recover it. 00:34:06.375 [2024-07-13 08:20:57.886948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.375 [2024-07-13 08:20:57.886975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.375 qpair failed and we were unable to recover it. 00:34:06.375 [2024-07-13 08:20:57.887099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.375 [2024-07-13 08:20:57.887127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.375 qpair failed and we were unable to recover it. 00:34:06.375 [2024-07-13 08:20:57.887274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.375 [2024-07-13 08:20:57.887301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.375 qpair failed and we were unable to recover it. 00:34:06.375 [2024-07-13 08:20:57.887432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.375 [2024-07-13 08:20:57.887462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.375 qpair failed and we were unable to recover it. 00:34:06.375 [2024-07-13 08:20:57.887614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.375 [2024-07-13 08:20:57.887646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.375 qpair failed and we were unable to recover it. 00:34:06.375 [2024-07-13 08:20:57.887810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.375 [2024-07-13 08:20:57.887837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.375 qpair failed and we were unable to recover it. 00:34:06.375 [2024-07-13 08:20:57.887974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.375 [2024-07-13 08:20:57.888000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.375 qpair failed and we were unable to recover it. 00:34:06.375 [2024-07-13 08:20:57.888131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.375 [2024-07-13 08:20:57.888159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.375 qpair failed and we were unable to recover it. 
00:34:06.375 [2024-07-13 08:20:57.888283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.375 [2024-07-13 08:20:57.888310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.375 qpair failed and we were unable to recover it. 00:34:06.375 [2024-07-13 08:20:57.888473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.375 [2024-07-13 08:20:57.888501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.375 qpair failed and we were unable to recover it. 00:34:06.375 [2024-07-13 08:20:57.888662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.375 [2024-07-13 08:20:57.888690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.375 qpair failed and we were unable to recover it. 00:34:06.375 [2024-07-13 08:20:57.888812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.375 [2024-07-13 08:20:57.888840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.375 qpair failed and we were unable to recover it. 00:34:06.375 [2024-07-13 08:20:57.888986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.375 [2024-07-13 08:20:57.889014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.375 qpair failed and we were unable to recover it. 00:34:06.375 [2024-07-13 08:20:57.889142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.375 [2024-07-13 08:20:57.889172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.375 qpair failed and we were unable to recover it. 00:34:06.375 [2024-07-13 08:20:57.889293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.375 [2024-07-13 08:20:57.889320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.375 qpair failed and we were unable to recover it. 00:34:06.375 [2024-07-13 08:20:57.889493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.375 [2024-07-13 08:20:57.889520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.375 qpair failed and we were unable to recover it. 00:34:06.375 [2024-07-13 08:20:57.889649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.375 [2024-07-13 08:20:57.889676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.375 qpair failed and we were unable to recover it. 00:34:06.375 [2024-07-13 08:20:57.889822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.375 [2024-07-13 08:20:57.889850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.375 qpair failed and we were unable to recover it. 
00:34:06.375 [2024-07-13 08:20:57.889994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.375 [2024-07-13 08:20:57.890022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.375 qpair failed and we were unable to recover it. 00:34:06.375 [2024-07-13 08:20:57.890151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.375 [2024-07-13 08:20:57.890177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.375 qpair failed and we were unable to recover it. 00:34:06.375 [2024-07-13 08:20:57.890299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.375 [2024-07-13 08:20:57.890325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.375 qpair failed and we were unable to recover it. 00:34:06.375 [2024-07-13 08:20:57.890473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.375 [2024-07-13 08:20:57.890500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.375 qpair failed and we were unable to recover it. 00:34:06.375 [2024-07-13 08:20:57.890626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.375 [2024-07-13 08:20:57.890654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.375 qpair failed and we were unable to recover it. 00:34:06.375 [2024-07-13 08:20:57.890775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.375 [2024-07-13 08:20:57.890802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.375 qpair failed and we were unable to recover it. 00:34:06.375 [2024-07-13 08:20:57.890955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.375 [2024-07-13 08:20:57.890982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.375 qpair failed and we were unable to recover it. 00:34:06.375 [2024-07-13 08:20:57.891126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.375 [2024-07-13 08:20:57.891153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.375 qpair failed and we were unable to recover it. 00:34:06.375 [2024-07-13 08:20:57.891311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.376 [2024-07-13 08:20:57.891338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.376 qpair failed and we were unable to recover it. 00:34:06.376 [2024-07-13 08:20:57.891477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.376 [2024-07-13 08:20:57.891503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.376 qpair failed and we were unable to recover it. 
00:34:06.376 [2024-07-13 08:20:57.891620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.376 [2024-07-13 08:20:57.891646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.376 qpair failed and we were unable to recover it. 00:34:06.376 [2024-07-13 08:20:57.891798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.376 [2024-07-13 08:20:57.891826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.376 qpair failed and we were unable to recover it. 00:34:06.376 [2024-07-13 08:20:57.891963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.376 [2024-07-13 08:20:57.891990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.376 qpair failed and we were unable to recover it. 00:34:06.376 [2024-07-13 08:20:57.892110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.376 [2024-07-13 08:20:57.892137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.376 qpair failed and we were unable to recover it. 00:34:06.376 [2024-07-13 08:20:57.892250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.376 [2024-07-13 08:20:57.892277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.376 qpair failed and we were unable to recover it. 00:34:06.376 [2024-07-13 08:20:57.892422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.376 [2024-07-13 08:20:57.892449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.376 qpair failed and we were unable to recover it. 00:34:06.376 [2024-07-13 08:20:57.892564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.376 [2024-07-13 08:20:57.892590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.376 qpair failed and we were unable to recover it. 00:34:06.376 [2024-07-13 08:20:57.892718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.376 [2024-07-13 08:20:57.892749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.376 qpair failed and we were unable to recover it. 00:34:06.376 [2024-07-13 08:20:57.892900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.376 [2024-07-13 08:20:57.892927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.376 qpair failed and we were unable to recover it. 00:34:06.376 [2024-07-13 08:20:57.893053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.376 [2024-07-13 08:20:57.893080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.376 qpair failed and we were unable to recover it. 
00:34:06.376 [2024-07-13 08:20:57.893199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.376 [2024-07-13 08:20:57.893225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.376 qpair failed and we were unable to recover it. 00:34:06.376 [2024-07-13 08:20:57.893347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.376 [2024-07-13 08:20:57.893373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.376 qpair failed and we were unable to recover it. 00:34:06.376 [2024-07-13 08:20:57.893485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.376 [2024-07-13 08:20:57.893511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.376 qpair failed and we were unable to recover it. 00:34:06.376 [2024-07-13 08:20:57.893663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.376 [2024-07-13 08:20:57.893689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.376 qpair failed and we were unable to recover it. 00:34:06.376 [2024-07-13 08:20:57.893813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.376 [2024-07-13 08:20:57.893840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.376 qpair failed and we were unable to recover it. 00:34:06.376 [2024-07-13 08:20:57.893968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.376 [2024-07-13 08:20:57.893996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.376 qpair failed and we were unable to recover it. 00:34:06.376 [2024-07-13 08:20:57.894152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.376 [2024-07-13 08:20:57.894178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.376 qpair failed and we were unable to recover it. 00:34:06.376 [2024-07-13 08:20:57.894292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.376 [2024-07-13 08:20:57.894320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.376 qpair failed and we were unable to recover it. 00:34:06.376 [2024-07-13 08:20:57.894467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.376 [2024-07-13 08:20:57.894492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.376 qpair failed and we were unable to recover it. 00:34:06.376 [2024-07-13 08:20:57.894686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.376 [2024-07-13 08:20:57.894715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.376 qpair failed and we were unable to recover it. 
00:34:06.376 [2024-07-13 08:20:57.894844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.376 [2024-07-13 08:20:57.894882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.376 qpair failed and we were unable to recover it. 00:34:06.376 [2024-07-13 08:20:57.895048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.376 [2024-07-13 08:20:57.895076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.376 qpair failed and we were unable to recover it. 00:34:06.376 [2024-07-13 08:20:57.895212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.376 [2024-07-13 08:20:57.895239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.376 qpair failed and we were unable to recover it. 00:34:06.376 [2024-07-13 08:20:57.895356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.376 [2024-07-13 08:20:57.895383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.376 qpair failed and we were unable to recover it. 00:34:06.376 [2024-07-13 08:20:57.895509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.376 [2024-07-13 08:20:57.895536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.376 qpair failed and we were unable to recover it. 00:34:06.376 [2024-07-13 08:20:57.895656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.376 [2024-07-13 08:20:57.895682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.376 qpair failed and we were unable to recover it. 00:34:06.376 [2024-07-13 08:20:57.895805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.376 [2024-07-13 08:20:57.895831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.376 qpair failed and we were unable to recover it. 00:34:06.376 [2024-07-13 08:20:57.895991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.376 [2024-07-13 08:20:57.896018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.376 qpair failed and we were unable to recover it. 00:34:06.376 [2024-07-13 08:20:57.896143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.376 [2024-07-13 08:20:57.896170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.376 qpair failed and we were unable to recover it. 00:34:06.376 [2024-07-13 08:20:57.896292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.376 [2024-07-13 08:20:57.896318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.376 qpair failed and we were unable to recover it. 
00:34:06.376 [2024-07-13 08:20:57.896441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.376 [2024-07-13 08:20:57.896467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.376 qpair failed and we were unable to recover it. 00:34:06.376 [2024-07-13 08:20:57.896593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.376 [2024-07-13 08:20:57.896627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.376 qpair failed and we were unable to recover it. 00:34:06.376 [2024-07-13 08:20:57.896752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.376 [2024-07-13 08:20:57.896778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.376 qpair failed and we were unable to recover it. 00:34:06.376 [2024-07-13 08:20:57.896917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.376 [2024-07-13 08:20:57.896944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.376 qpair failed and we were unable to recover it. 00:34:06.376 [2024-07-13 08:20:57.897072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.376 [2024-07-13 08:20:57.897098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.376 qpair failed and we were unable to recover it. 00:34:06.376 [2024-07-13 08:20:57.897274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.376 [2024-07-13 08:20:57.897300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.376 qpair failed and we were unable to recover it. 00:34:06.376 [2024-07-13 08:20:57.897427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.376 [2024-07-13 08:20:57.897454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.376 qpair failed and we were unable to recover it. 00:34:06.376 [2024-07-13 08:20:57.897605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.376 [2024-07-13 08:20:57.897632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.376 qpair failed and we were unable to recover it. 00:34:06.376 [2024-07-13 08:20:57.897784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.376 [2024-07-13 08:20:57.897810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.376 qpair failed and we were unable to recover it. 00:34:06.377 [2024-07-13 08:20:57.897955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.377 [2024-07-13 08:20:57.897982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.377 qpair failed and we were unable to recover it. 
00:34:06.377 [2024-07-13 08:20:57.898104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.377 [2024-07-13 08:20:57.898132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.377 qpair failed and we were unable to recover it. 00:34:06.377 [2024-07-13 08:20:57.898292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.377 [2024-07-13 08:20:57.898319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.377 qpair failed and we were unable to recover it. 00:34:06.377 [2024-07-13 08:20:57.898439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.377 [2024-07-13 08:20:57.898464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.377 qpair failed and we were unable to recover it. 00:34:06.377 [2024-07-13 08:20:57.898588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.377 [2024-07-13 08:20:57.898614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.377 qpair failed and we were unable to recover it. 00:34:06.377 [2024-07-13 08:20:57.898761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.377 [2024-07-13 08:20:57.898787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.377 qpair failed and we were unable to recover it. 00:34:06.377 [2024-07-13 08:20:57.898950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.377 [2024-07-13 08:20:57.898977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.377 qpair failed and we were unable to recover it. 00:34:06.377 [2024-07-13 08:20:57.899103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.377 [2024-07-13 08:20:57.899129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.377 qpair failed and we were unable to recover it. 00:34:06.377 [2024-07-13 08:20:57.899251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.377 [2024-07-13 08:20:57.899281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.377 qpair failed and we were unable to recover it. 00:34:06.377 [2024-07-13 08:20:57.899427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.377 [2024-07-13 08:20:57.899453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.377 qpair failed and we were unable to recover it. 
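[note: errno 111 on Linux is ECONNREFUSED: the initiator reaches 10.0.0.2, but nothing is accepting TCP connections on the NVMe/TCP port 4420 at that moment (typically the SYN is answered with a TCP RST while the target is down), which is the condition this disconnect test exercises. A minimal standalone probe, assuming only bash on the test host (hypothetical, not part of the test suite):]

    # bash's /dev/tcp redirection performs a plain TCP connect();
    # it exits non-zero ("Connection refused") while no listener is up.
    if ! timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
        echo "10.0.0.2:4420 not accepting connections yet"
    fi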
00:34:06.377 08:20:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:34:06.377 08:20:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:34:06.377 08:20:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:06.377 08:20:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:06.377 [... the shell trace above arrived interleaved with further connect() failed, errno = 111 / qpair failed repetitions for tqpair=0x7f8fec000b90 (addr=10.0.0.2, port=4420) between 08:20:57.899 and 08:20:57.900 ...]
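[note: the xtrace lines above show the harness making progress in parallel with the failing reconnect loop: it installs a cleanup trap (dump the app's shared memory, then nvmftestfini) for SIGINT/SIGTERM/EXIT, then uses rpc_cmd, the suite's wrapper around SPDK's JSON-RPC client, to create the RAM-backed bdev the test will export. Outside the wrapper, the equivalent direct call would look like this sketch (rpc.py path assumed from a standard SPDK checkout):]

    # Create a 64 MiB malloc bdev with a 512-byte block size, named Malloc0.
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0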
00:34:06.377 [... the same connect() failed / sock connection error / qpair failed sequence keeps repeating for tqpair=0x7f8fec000b90, addr=10.0.0.2, port=4420 through 08:20:57.912, ending with ...]
00:34:06.379 [2024-07-13 08:20:57.912505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:06.379 [2024-07-13 08:20:57.912533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420
00:34:06.379 qpair failed and we were unable to recover it.
00:34:06.379 [2024-07-13 08:20:57.912684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.379 [2024-07-13 08:20:57.912710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.379 qpair failed and we were unable to recover it. 00:34:06.379 [2024-07-13 08:20:57.912833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.379 [2024-07-13 08:20:57.912860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.379 qpair failed and we were unable to recover it. 00:34:06.379 [2024-07-13 08:20:57.913001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.379 [2024-07-13 08:20:57.913028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.379 qpair failed and we were unable to recover it. 00:34:06.379 [2024-07-13 08:20:57.913178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.379 [2024-07-13 08:20:57.913205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.379 qpair failed and we were unable to recover it. 00:34:06.379 [2024-07-13 08:20:57.913356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.379 [2024-07-13 08:20:57.913384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.379 qpair failed and we were unable to recover it. 00:34:06.379 [2024-07-13 08:20:57.913508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.379 [2024-07-13 08:20:57.913536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.379 qpair failed and we were unable to recover it. 00:34:06.379 [2024-07-13 08:20:57.913708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.379 [2024-07-13 08:20:57.913736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.379 qpair failed and we were unable to recover it. 00:34:06.379 [2024-07-13 08:20:57.913871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.379 [2024-07-13 08:20:57.913898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.379 qpair failed and we were unable to recover it. 00:34:06.379 [2024-07-13 08:20:57.914069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.379 [2024-07-13 08:20:57.914095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.379 qpair failed and we were unable to recover it. 00:34:06.379 [2024-07-13 08:20:57.914260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.379 [2024-07-13 08:20:57.914287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.379 qpair failed and we were unable to recover it. 
00:34:06.379 [2024-07-13 08:20:57.914408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.379 [2024-07-13 08:20:57.914435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.379 qpair failed and we were unable to recover it. 00:34:06.379 [2024-07-13 08:20:57.914596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.379 [2024-07-13 08:20:57.914623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.379 qpair failed and we were unable to recover it. 00:34:06.379 [2024-07-13 08:20:57.914778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.379 [2024-07-13 08:20:57.914806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.379 qpair failed and we were unable to recover it. 00:34:06.379 [2024-07-13 08:20:57.914987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.379 [2024-07-13 08:20:57.915014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.379 qpair failed and we were unable to recover it. 00:34:06.379 [2024-07-13 08:20:57.915147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.379 [2024-07-13 08:20:57.915174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.379 qpair failed and we were unable to recover it. 00:34:06.379 [2024-07-13 08:20:57.915305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.379 [2024-07-13 08:20:57.915335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.379 qpair failed and we were unable to recover it. 00:34:06.379 [2024-07-13 08:20:57.915466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.379 [2024-07-13 08:20:57.915492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.379 qpair failed and we were unable to recover it. 00:34:06.379 [2024-07-13 08:20:57.915646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.379 [2024-07-13 08:20:57.915673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.379 qpair failed and we were unable to recover it. 00:34:06.379 [2024-07-13 08:20:57.915795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.379 [2024-07-13 08:20:57.915822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.379 qpair failed and we were unable to recover it. 00:34:06.379 [2024-07-13 08:20:57.915952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.379 [2024-07-13 08:20:57.915979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.379 qpair failed and we were unable to recover it. 
00:34:06.379 [2024-07-13 08:20:57.916105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.379 [2024-07-13 08:20:57.916132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.379 qpair failed and we were unable to recover it. 00:34:06.379 [2024-07-13 08:20:57.916282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.379 [2024-07-13 08:20:57.916309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.379 qpair failed and we were unable to recover it. 00:34:06.379 [2024-07-13 08:20:57.916440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.379 [2024-07-13 08:20:57.916467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.379 qpair failed and we were unable to recover it. 00:34:06.379 [2024-07-13 08:20:57.916593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.379 [2024-07-13 08:20:57.916620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.379 qpair failed and we were unable to recover it. 00:34:06.379 [2024-07-13 08:20:57.916775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.379 [2024-07-13 08:20:57.916801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.379 qpair failed and we were unable to recover it. 00:34:06.379 [2024-07-13 08:20:57.916969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.379 [2024-07-13 08:20:57.916999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.379 qpair failed and we were unable to recover it. 00:34:06.380 [2024-07-13 08:20:57.917125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.380 [2024-07-13 08:20:57.917154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.380 qpair failed and we were unable to recover it. 00:34:06.380 [2024-07-13 08:20:57.917303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.380 [2024-07-13 08:20:57.917329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.380 qpair failed and we were unable to recover it. 00:34:06.380 [2024-07-13 08:20:57.917483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.380 [2024-07-13 08:20:57.917509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.380 qpair failed and we were unable to recover it. 00:34:06.380 [2024-07-13 08:20:57.917627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.380 [2024-07-13 08:20:57.917654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.380 qpair failed and we were unable to recover it. 
00:34:06.380 [2024-07-13 08:20:57.917767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.380 [2024-07-13 08:20:57.917794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.380 qpair failed and we were unable to recover it. 00:34:06.380 [2024-07-13 08:20:57.917948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.380 [2024-07-13 08:20:57.917974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.380 qpair failed and we were unable to recover it. 00:34:06.380 [2024-07-13 08:20:57.918097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.380 [2024-07-13 08:20:57.918124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.380 qpair failed and we were unable to recover it. 00:34:06.380 [2024-07-13 08:20:57.918259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.380 [2024-07-13 08:20:57.918286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.380 qpair failed and we were unable to recover it. 00:34:06.380 [2024-07-13 08:20:57.918433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.380 [2024-07-13 08:20:57.918461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.380 qpair failed and we were unable to recover it. 00:34:06.380 [2024-07-13 08:20:57.918588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.380 [2024-07-13 08:20:57.918615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.380 qpair failed and we were unable to recover it. 00:34:06.380 [2024-07-13 08:20:57.918765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.380 [2024-07-13 08:20:57.918791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.380 qpair failed and we were unable to recover it. 00:34:06.380 [2024-07-13 08:20:57.918945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.380 [2024-07-13 08:20:57.918973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.380 qpair failed and we were unable to recover it. 00:34:06.380 [2024-07-13 08:20:57.919088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.380 [2024-07-13 08:20:57.919114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.380 qpair failed and we were unable to recover it. 00:34:06.380 [2024-07-13 08:20:57.919287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.380 [2024-07-13 08:20:57.919314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.380 qpair failed and we were unable to recover it. 
00:34:06.380 [2024-07-13 08:20:57.919442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.380 [2024-07-13 08:20:57.919469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.380 qpair failed and we were unable to recover it. 00:34:06.380 [2024-07-13 08:20:57.919598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.380 [2024-07-13 08:20:57.919625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.380 qpair failed and we were unable to recover it. 00:34:06.380 [2024-07-13 08:20:57.919774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.380 [2024-07-13 08:20:57.919800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.380 qpair failed and we were unable to recover it. 00:34:06.380 [2024-07-13 08:20:57.919950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.380 [2024-07-13 08:20:57.919978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.380 qpair failed and we were unable to recover it. 00:34:06.380 [2024-07-13 08:20:57.920103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.380 [2024-07-13 08:20:57.920131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.380 qpair failed and we were unable to recover it. 00:34:06.380 [2024-07-13 08:20:57.920279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.380 [2024-07-13 08:20:57.920305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.380 qpair failed and we were unable to recover it. 00:34:06.380 [2024-07-13 08:20:57.920439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.380 [2024-07-13 08:20:57.920466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.380 qpair failed and we were unable to recover it. 00:34:06.380 [2024-07-13 08:20:57.920593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.380 [2024-07-13 08:20:57.920619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.380 qpair failed and we were unable to recover it. 00:34:06.380 [2024-07-13 08:20:57.920775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.380 [2024-07-13 08:20:57.920802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.380 qpair failed and we were unable to recover it. 00:34:06.380 [2024-07-13 08:20:57.920930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.380 [2024-07-13 08:20:57.920956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.380 qpair failed and we were unable to recover it. 
00:34:06.380 [2024-07-13 08:20:57.921073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.380 [2024-07-13 08:20:57.921098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.380 qpair failed and we were unable to recover it. 00:34:06.380 [2024-07-13 08:20:57.921248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.380 [2024-07-13 08:20:57.921275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.380 qpair failed and we were unable to recover it. 00:34:06.380 [2024-07-13 08:20:57.921437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.380 [2024-07-13 08:20:57.921465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.380 qpair failed and we were unable to recover it. 00:34:06.380 [2024-07-13 08:20:57.921599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.380 [2024-07-13 08:20:57.921625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.380 qpair failed and we were unable to recover it. 00:34:06.380 [2024-07-13 08:20:57.921746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.380 [2024-07-13 08:20:57.921773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.380 qpair failed and we were unable to recover it. 00:34:06.380 [2024-07-13 08:20:57.921930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.380 [2024-07-13 08:20:57.921956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.380 qpair failed and we were unable to recover it. 00:34:06.380 [2024-07-13 08:20:57.922085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.380 [2024-07-13 08:20:57.922110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.380 qpair failed and we were unable to recover it. 00:34:06.380 [2024-07-13 08:20:57.922268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.380 [2024-07-13 08:20:57.922295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.380 qpair failed and we were unable to recover it. 00:34:06.380 [2024-07-13 08:20:57.922423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.380 [2024-07-13 08:20:57.922449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.380 qpair failed and we were unable to recover it. 00:34:06.380 [2024-07-13 08:20:57.922631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.380 [2024-07-13 08:20:57.922658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.380 qpair failed and we were unable to recover it. 
00:34:06.380 [2024-07-13 08:20:57.922793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.380 [2024-07-13 08:20:57.922820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.380 qpair failed and we were unable to recover it. 00:34:06.380 [2024-07-13 08:20:57.922979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.380 [2024-07-13 08:20:57.923006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.380 qpair failed and we were unable to recover it. 00:34:06.380 [2024-07-13 08:20:57.923137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.380 [2024-07-13 08:20:57.923173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.380 qpair failed and we were unable to recover it. 00:34:06.380 [2024-07-13 08:20:57.923303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.380 [2024-07-13 08:20:57.923330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.380 qpair failed and we were unable to recover it. 00:34:06.380 [2024-07-13 08:20:57.923475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.380 [2024-07-13 08:20:57.923502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.380 qpair failed and we were unable to recover it. 00:34:06.380 [2024-07-13 08:20:57.923658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.380 [2024-07-13 08:20:57.923689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.380 qpair failed and we were unable to recover it. 00:34:06.380 [2024-07-13 08:20:57.923834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.380 [2024-07-13 08:20:57.923861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.380 qpair failed and we were unable to recover it. 00:34:06.381 [2024-07-13 08:20:57.924006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.381 [2024-07-13 08:20:57.924032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.381 qpair failed and we were unable to recover it. 00:34:06.381 [2024-07-13 08:20:57.924158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.381 [2024-07-13 08:20:57.924184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.381 qpair failed and we were unable to recover it. 00:34:06.381 [2024-07-13 08:20:57.924338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.381 [2024-07-13 08:20:57.924365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.381 qpair failed and we were unable to recover it. 
00:34:06.381 [2024-07-13 08:20:57.924491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.381 [2024-07-13 08:20:57.924519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.381 qpair failed and we were unable to recover it. 00:34:06.381 [2024-07-13 08:20:57.924639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.381 [2024-07-13 08:20:57.924667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.381 qpair failed and we were unable to recover it. 00:34:06.381 [2024-07-13 08:20:57.924810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.381 [2024-07-13 08:20:57.924838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.381 qpair failed and we were unable to recover it. 00:34:06.381 [2024-07-13 08:20:57.924990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.381 [2024-07-13 08:20:57.925017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.381 qpair failed and we were unable to recover it. 00:34:06.381 [2024-07-13 08:20:57.925146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.381 [2024-07-13 08:20:57.925173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.381 qpair failed and we were unable to recover it. 00:34:06.381 [2024-07-13 08:20:57.925332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.381 [2024-07-13 08:20:57.925358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.381 qpair failed and we were unable to recover it. 00:34:06.381 [2024-07-13 08:20:57.925490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.381 [2024-07-13 08:20:57.925516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.381 qpair failed and we were unable to recover it. 00:34:06.381 [2024-07-13 08:20:57.925649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.381 [2024-07-13 08:20:57.925676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.381 qpair failed and we were unable to recover it. 00:34:06.381 [2024-07-13 08:20:57.925798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.381 [2024-07-13 08:20:57.925824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.381 qpair failed and we were unable to recover it. 00:34:06.381 [2024-07-13 08:20:57.925987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.381 [2024-07-13 08:20:57.926014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.381 qpair failed and we were unable to recover it. 
00:34:06.381 [2024-07-13 08:20:57.926140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.381 [2024-07-13 08:20:57.926167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.381 qpair failed and we were unable to recover it. 00:34:06.381 [2024-07-13 08:20:57.926314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.381 [2024-07-13 08:20:57.926341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.381 qpair failed and we were unable to recover it. 00:34:06.381 [2024-07-13 08:20:57.926466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.381 [2024-07-13 08:20:57.926493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.381 qpair failed and we were unable to recover it. 00:34:06.381 [2024-07-13 08:20:57.926651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.381 [2024-07-13 08:20:57.926678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.381 qpair failed and we were unable to recover it. 00:34:06.381 [2024-07-13 08:20:57.926856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.381 [2024-07-13 08:20:57.926910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.381 qpair failed and we were unable to recover it. 00:34:06.381 [2024-07-13 08:20:57.927037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.381 [2024-07-13 08:20:57.927062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.381 qpair failed and we were unable to recover it. 00:34:06.381 [2024-07-13 08:20:57.927185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.381 [2024-07-13 08:20:57.927212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.381 qpair failed and we were unable to recover it. 00:34:06.381 [2024-07-13 08:20:57.927337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.381 [2024-07-13 08:20:57.927363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.381 qpair failed and we were unable to recover it. 00:34:06.381 [2024-07-13 08:20:57.927500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.381 [2024-07-13 08:20:57.927527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.381 qpair failed and we were unable to recover it. 00:34:06.381 [2024-07-13 08:20:57.927689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.381 [2024-07-13 08:20:57.927717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.381 qpair failed and we were unable to recover it. 
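errno = 111 is ECONNREFUSED on Linux: every connect() above is refused because nothing is listening on 10.0.0.2:4420 at this point; the target-side TCP transport and listener are only brought up further down in this log. A minimal standalone repro, as a sketch rather than anything run by this test (the errno helper is the one shipped in moreutils, if installed):

$ bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420'   # no listener yet -> bash reports "Connection refused"
$ errno 111                                  # prints: ECONNREFUSED 111 Connection refused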
00:34:06.381 Malloc0
00:34:06.381 [... the connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." sequence keeps repeating for tqpair=0x7f8fec000b90 (addr=10.0.0.2, port=4420), timestamps 08:20:57.927848 through 08:20:57.931326, interleaved with the target-side shell trace below ...]
00:34:06.381 08:20:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:06.381 08:20:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:34:06.381 08:20:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:06.381 08:20:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:06.382 [2024-07-13 08:20:57.931423] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:34:06.382 [... the connect()/qpair-failure sequence continues, timestamps 08:20:57.931448 through 08:20:57.932274 ...]
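The rpc_cmd nvmf_create_transport -t tcp -o call traced above is what produces the *** TCP Transport Init *** notice: it creates the target's TCP transport. rpc_cmd is the autotest wrapper around SPDK's scripts/rpc.py, so the unwrapped equivalent -- a sketch, assuming the default RPC socket -- is:

$ scripts/rpc.py nvmf_create_transport -t tcp -o   # same arguments as the rpc_cmd call above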
00:34:06.382 [2024-07-13 08:20:57.932400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.382 [2024-07-13 08:20:57.932428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.382 qpair failed and we were unable to recover it. 00:34:06.382 [2024-07-13 08:20:57.932606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.382 [2024-07-13 08:20:57.932632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.382 qpair failed and we were unable to recover it. 00:34:06.382 [2024-07-13 08:20:57.932776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.382 [2024-07-13 08:20:57.932803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.382 qpair failed and we were unable to recover it. 00:34:06.382 [2024-07-13 08:20:57.932932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.382 [2024-07-13 08:20:57.932959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.382 qpair failed and we were unable to recover it. 00:34:06.382 [2024-07-13 08:20:57.933077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.382 [2024-07-13 08:20:57.933104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.382 qpair failed and we were unable to recover it. 00:34:06.382 [2024-07-13 08:20:57.933236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.382 [2024-07-13 08:20:57.933263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.382 qpair failed and we were unable to recover it. 00:34:06.382 [2024-07-13 08:20:57.933413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.382 [2024-07-13 08:20:57.933440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.382 qpair failed and we were unable to recover it. 00:34:06.382 [2024-07-13 08:20:57.933613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.382 [2024-07-13 08:20:57.933641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.382 qpair failed and we were unable to recover it. 00:34:06.382 [2024-07-13 08:20:57.933762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.382 [2024-07-13 08:20:57.933790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.382 qpair failed and we were unable to recover it. 00:34:06.382 [2024-07-13 08:20:57.933915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:06.382 [2024-07-13 08:20:57.933943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420 00:34:06.382 qpair failed and we were unable to recover it. 
00:34:06.382 [2024-07-13 08:20:57.934081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:06.382 [2024-07-13 08:20:57.934108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f8fec000b90 with addr=10.0.0.2, port=4420
00:34:06.382 qpair failed and we were unable to recover it.
[... the same "connect() failed, errno = 111" / "sock connection error of tqpair=0x7f8fec000b90" / "qpair failed and we were unable to recover it." triplet repeats, with only the timestamps advancing ...]
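Every triplet in this stretch is the same event: on Linux, errno 111 is ECONNREFUSED, meaning the host's connect() reaches 10.0.0.2 but nothing is accepting on port 4420 yet (the listener is only added further down, at host/target_disconnect.sh@25), so nvme_tcp_qpair_connect_sock tears the qpair down and the initiator immediately retries. A minimal standalone probe of the same condition, assuming a bash shell with /dev/tcp support (illustrative only, not part of the test):

  # Retry until the NVMe/TCP listener on 10.0.0.2:4420 starts accepting.
  # While nothing listens there, connect() fails with errno 111 (ECONNREFUSED).
  until timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; do
    echo 'connect() refused (errno 111), retrying'
    sleep 0.1
  done
  echo '10.0.0.2:4420 is accepting connections'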
00:34:06.383 08:20:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:06.383 08:20:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:34:06.383 08:20:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:06.383 08:20:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... connect()/qpair-failure retries continue while the subsystem is created ...]
00:34:06.384 08:20:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:06.384 08:20:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:34:06.384 08:20:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:06.384 08:20:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... connect()/qpair-failure retries continue while the namespace is added ...]
00:34:06.386 08:20:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:06.386 08:20:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:34:06.386 08:20:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:06.386 08:20:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... connect()/qpair-failure retries continue until the listener comes up ...]
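Interleaved with the retry noise, the xtrace shows the target side being assembled in three RPCs: nvmf_create_subsystem (target_disconnect.sh@22), nvmf_subsystem_add_ns (@24), and nvmf_subsystem_add_listener (@25). Reproduced outside the harness against a running nvmf_tgt, the equivalent sequence would look roughly like the sketch below; the NQN, serial, address, and port are taken from this log, while the scripts/rpc.py invocation style and the Malloc0 sizing are assumptions:

  # Assumed backing bdev for the namespace; 64 MiB / 512 B blocks is illustrative.
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_transport -t tcp
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Only after the last call does the target print the "Listening on 10.0.0.2 port 4420" notice seen just below, at which point the host's TCP connects start succeeding.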
00:34:06.386 [2024-07-13 08:20:57.959763] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:34:06.386 [2024-07-13 08:20:57.962158] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:06.386 [2024-07-13 08:20:57.962313] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
[2024-07-13 08:20:57.962342] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
[2024-07-13 08:20:57.962361] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
[2024-07-13 08:20:57.962374] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fec000b90
[2024-07-13 08:20:57.962415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:34:06.386 qpair failed and we were unable to recover it.
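From this point the failure mode changes: the TCP connect succeeds, but the target rejects the I/O qpair's Fabrics CONNECT because the controller ID it names (0x1) is unknown on the target (_nvmf_ctrlr_add_io_qpair), which is exactly the condition the tc2 disconnect test provokes. The host reports that rejection as sct 1, sc 130: status code type 1 is command-specific status, and 130 decimal is 0x82, which for a Fabrics CONNECT command corresponds to Connect Invalid Parameters (spec mapping stated here from the NVMe-oF specification, not from this log). The rc -5 (likely -EIO) and the CQ transport error -6 (ENXIO, "No such device or address") are the host-side fallout of that failed CONNECT. A quick decode of the status value:

  printf 'sct=1 (command specific), sc=130 = 0x%02x\n' 130   # prints 0x82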
00:34:06.386 08:20:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:06.386 08:20:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:34:06.386 08:20:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:06.386 08:20:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:06.386 08:20:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:06.386 08:20:57 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 2114160 00:34:06.386 [2024-07-13 08:20:57.972028] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:06.386 [2024-07-13 08:20:57.972176] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:06.386 [2024-07-13 08:20:57.972204] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:06.386 [2024-07-13 08:20:57.972220] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:06.386 [2024-07-13 08:20:57.972233] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fec000b90 00:34:06.386 [2024-07-13 08:20:57.972263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:06.386 qpair failed and we were unable to recover it. 00:34:06.386 [2024-07-13 08:20:57.982114] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:06.386 [2024-07-13 08:20:57.982296] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:06.386 [2024-07-13 08:20:57.982325] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:06.386 [2024-07-13 08:20:57.982359] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:06.386 [2024-07-13 08:20:57.982373] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fec000b90 00:34:06.386 [2024-07-13 08:20:57.982419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:06.386 qpair failed and we were unable to recover it. 
00:34:06.386 [2024-07-13 08:20:57.992086] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:06.386 [2024-07-13 08:20:57.992222] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:06.386 [2024-07-13 08:20:57.992250] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:06.386 [2024-07-13 08:20:57.992265] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:06.386 [2024-07-13 08:20:57.992279] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fec000b90 00:34:06.386 [2024-07-13 08:20:57.992324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:06.386 qpair failed and we were unable to recover it. 00:34:06.386 [2024-07-13 08:20:58.002090] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:06.386 [2024-07-13 08:20:58.002214] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:06.386 [2024-07-13 08:20:58.002241] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:06.386 [2024-07-13 08:20:58.002264] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:06.386 [2024-07-13 08:20:58.002280] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fec000b90 00:34:06.387 [2024-07-13 08:20:58.002312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:06.387 qpair failed and we were unable to recover it. 00:34:06.387 [2024-07-13 08:20:58.012048] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:06.387 [2024-07-13 08:20:58.012171] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:06.387 [2024-07-13 08:20:58.012198] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:06.387 [2024-07-13 08:20:58.012214] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:06.387 [2024-07-13 08:20:58.012227] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fec000b90 00:34:06.387 [2024-07-13 08:20:58.012256] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:06.387 qpair failed and we were unable to recover it. 
00:34:06.387 [2024-07-13 08:20:58.022143] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:06.387 [2024-07-13 08:20:58.022267] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:06.387 [2024-07-13 08:20:58.022295] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:06.387 [2024-07-13 08:20:58.022310] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:06.387 [2024-07-13 08:20:58.022324] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fec000b90 00:34:06.387 [2024-07-13 08:20:58.022354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:06.387 qpair failed and we were unable to recover it. 00:34:06.387 [2024-07-13 08:20:58.032116] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:06.387 [2024-07-13 08:20:58.032250] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:06.387 [2024-07-13 08:20:58.032278] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:06.387 [2024-07-13 08:20:58.032294] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:06.387 [2024-07-13 08:20:58.032307] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fec000b90 00:34:06.387 [2024-07-13 08:20:58.032338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:06.387 qpair failed and we were unable to recover it. 00:34:06.387 [2024-07-13 08:20:58.042088] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:06.387 [2024-07-13 08:20:58.042218] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:06.387 [2024-07-13 08:20:58.042245] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:06.387 [2024-07-13 08:20:58.042259] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:06.387 [2024-07-13 08:20:58.042272] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fec000b90 00:34:06.387 [2024-07-13 08:20:58.042304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:06.387 qpair failed and we were unable to recover it. 
00:34:06.387 [2024-07-13 08:20:58.052099] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:06.387 [2024-07-13 08:20:58.052230] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:06.387 [2024-07-13 08:20:58.052257] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:06.387 [2024-07-13 08:20:58.052272] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:06.387 [2024-07-13 08:20:58.052285] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fec000b90 00:34:06.387 [2024-07-13 08:20:58.052317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:06.387 qpair failed and we were unable to recover it. 00:34:06.387 [2024-07-13 08:20:58.062184] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:06.387 [2024-07-13 08:20:58.062329] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:06.387 [2024-07-13 08:20:58.062357] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:06.387 [2024-07-13 08:20:58.062373] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:06.387 [2024-07-13 08:20:58.062387] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fec000b90 00:34:06.387 [2024-07-13 08:20:58.062419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:06.387 qpair failed and we were unable to recover it. 00:34:06.649 [2024-07-13 08:20:58.072259] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:06.649 [2024-07-13 08:20:58.072390] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:06.649 [2024-07-13 08:20:58.072417] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:06.649 [2024-07-13 08:20:58.072434] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:06.649 [2024-07-13 08:20:58.072462] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fec000b90 00:34:06.649 [2024-07-13 08:20:58.072505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:06.649 qpair failed and we were unable to recover it. 
00:34:06.649 [2024-07-13 08:20:58.082326] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:06.649 [2024-07-13 08:20:58.082457] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:06.649 [2024-07-13 08:20:58.082484] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:06.649 [2024-07-13 08:20:58.082500] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:06.649 [2024-07-13 08:20:58.082514] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fec000b90 00:34:06.649 [2024-07-13 08:20:58.082560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:06.649 qpair failed and we were unable to recover it. 00:34:06.649 [2024-07-13 08:20:58.092254] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:06.649 [2024-07-13 08:20:58.092381] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:06.649 [2024-07-13 08:20:58.092413] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:06.649 [2024-07-13 08:20:58.092429] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:06.649 [2024-07-13 08:20:58.092442] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fec000b90 00:34:06.649 [2024-07-13 08:20:58.092472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:06.649 qpair failed and we were unable to recover it. 00:34:06.649 [2024-07-13 08:20:58.102334] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:06.649 [2024-07-13 08:20:58.102466] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:06.649 [2024-07-13 08:20:58.102492] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:06.649 [2024-07-13 08:20:58.102507] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:06.650 [2024-07-13 08:20:58.102521] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fec000b90 00:34:06.650 [2024-07-13 08:20:58.102550] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:06.650 qpair failed and we were unable to recover it. 
00:34:06.650 [2024-07-13 08:20:58.112432] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:06.650 [2024-07-13 08:20:58.112564] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:06.650 [2024-07-13 08:20:58.112591] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:06.650 [2024-07-13 08:20:58.112606] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:06.650 [2024-07-13 08:20:58.112620] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fec000b90 00:34:06.650 [2024-07-13 08:20:58.112666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:06.650 qpair failed and we were unable to recover it. 00:34:06.650 [2024-07-13 08:20:58.122390] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:06.650 [2024-07-13 08:20:58.122533] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:06.650 [2024-07-13 08:20:58.122560] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:06.650 [2024-07-13 08:20:58.122576] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:06.650 [2024-07-13 08:20:58.122589] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fec000b90 00:34:06.650 [2024-07-13 08:20:58.122631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:06.650 qpair failed and we were unable to recover it. 00:34:06.650 [2024-07-13 08:20:58.132427] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:06.650 [2024-07-13 08:20:58.132559] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:06.650 [2024-07-13 08:20:58.132587] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:06.650 [2024-07-13 08:20:58.132602] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:06.650 [2024-07-13 08:20:58.132616] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fec000b90 00:34:06.650 [2024-07-13 08:20:58.132652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:06.650 qpair failed and we were unable to recover it. 
00:34:06.650 [2024-07-13 08:20:58.142516] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:06.650 [2024-07-13 08:20:58.142643] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:06.650 [2024-07-13 08:20:58.142669] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:06.650 [2024-07-13 08:20:58.142684] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:06.650 [2024-07-13 08:20:58.142698] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fec000b90 00:34:06.650 [2024-07-13 08:20:58.142727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:06.650 qpair failed and we were unable to recover it. 00:34:06.650 [2024-07-13 08:20:58.152413] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:06.650 [2024-07-13 08:20:58.152547] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:06.650 [2024-07-13 08:20:58.152574] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:06.650 [2024-07-13 08:20:58.152589] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:06.650 [2024-07-13 08:20:58.152602] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fec000b90 00:34:06.650 [2024-07-13 08:20:58.152632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:06.650 qpair failed and we were unable to recover it. 00:34:06.650 [2024-07-13 08:20:58.162471] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:06.650 [2024-07-13 08:20:58.162601] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:06.650 [2024-07-13 08:20:58.162628] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:06.650 [2024-07-13 08:20:58.162643] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:06.650 [2024-07-13 08:20:58.162656] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fec000b90 00:34:06.650 [2024-07-13 08:20:58.162686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:06.650 qpair failed and we were unable to recover it. 
00:34:06.650 [2024-07-13 08:20:58.172591] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:06.650 [2024-07-13 08:20:58.172711] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:06.650 [2024-07-13 08:20:58.172735] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:06.650 [2024-07-13 08:20:58.172750] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:06.650 [2024-07-13 08:20:58.172764] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fec000b90 00:34:06.650 [2024-07-13 08:20:58.172795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:06.650 qpair failed and we were unable to recover it. 00:34:06.650 [2024-07-13 08:20:58.182533] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:06.650 [2024-07-13 08:20:58.182657] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:06.650 [2024-07-13 08:20:58.182688] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:06.650 [2024-07-13 08:20:58.182704] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:06.650 [2024-07-13 08:20:58.182717] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fec000b90 00:34:06.650 [2024-07-13 08:20:58.182746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:06.650 qpair failed and we were unable to recover it. 00:34:06.650 [2024-07-13 08:20:58.192542] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:06.650 [2024-07-13 08:20:58.192669] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:06.650 [2024-07-13 08:20:58.192697] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:06.650 [2024-07-13 08:20:58.192712] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:06.650 [2024-07-13 08:20:58.192726] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fec000b90 00:34:06.650 [2024-07-13 08:20:58.192755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:06.650 qpair failed and we were unable to recover it. 
00:34:06.650 [2024-07-13 08:20:58.202620] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:06.650 [2024-07-13 08:20:58.202792] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:06.650 [2024-07-13 08:20:58.202820] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:06.650 [2024-07-13 08:20:58.202835] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:06.650 [2024-07-13 08:20:58.202849] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fec000b90 00:34:06.650 [2024-07-13 08:20:58.202899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:06.650 qpair failed and we were unable to recover it. 00:34:06.650 [2024-07-13 08:20:58.212630] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:06.650 [2024-07-13 08:20:58.212770] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:06.650 [2024-07-13 08:20:58.212797] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:06.650 [2024-07-13 08:20:58.212813] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:06.650 [2024-07-13 08:20:58.212826] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fec000b90 00:34:06.650 [2024-07-13 08:20:58.212856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:06.650 qpair failed and we were unable to recover it. 00:34:06.650 [2024-07-13 08:20:58.222663] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:06.650 [2024-07-13 08:20:58.222796] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:06.650 [2024-07-13 08:20:58.222834] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:06.650 [2024-07-13 08:20:58.222852] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:06.650 [2024-07-13 08:20:58.222878] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fec000b90 00:34:06.650 [2024-07-13 08:20:58.222913] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:06.650 qpair failed and we were unable to recover it. 
00:34:06.650 [2024-07-13 08:20:58.232755] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:06.650 [2024-07-13 08:20:58.232889] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:06.650 [2024-07-13 08:20:58.232916] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:06.650 [2024-07-13 08:20:58.232932] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:06.650 [2024-07-13 08:20:58.232946] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fec000b90 00:34:06.650 [2024-07-13 08:20:58.232977] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:06.650 qpair failed and we were unable to recover it. 00:34:06.650 [2024-07-13 08:20:58.242735] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:06.650 [2024-07-13 08:20:58.242863] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:06.650 [2024-07-13 08:20:58.242895] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:06.650 [2024-07-13 08:20:58.242910] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:06.650 [2024-07-13 08:20:58.242923] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fec000b90 00:34:06.650 [2024-07-13 08:20:58.242954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:06.651 qpair failed and we were unable to recover it. 00:34:06.651 [2024-07-13 08:20:58.252720] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:06.651 [2024-07-13 08:20:58.252843] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:06.651 [2024-07-13 08:20:58.252876] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:06.651 [2024-07-13 08:20:58.252893] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:06.651 [2024-07-13 08:20:58.252906] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fec000b90 00:34:06.651 [2024-07-13 08:20:58.252937] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:06.651 qpair failed and we were unable to recover it. 
00:34:06.651 [2024-07-13 08:20:58.262793] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:06.651 [2024-07-13 08:20:58.262926] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:06.651 [2024-07-13 08:20:58.262952] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:06.651 [2024-07-13 08:20:58.262970] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:06.651 [2024-07-13 08:20:58.262984] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fec000b90 00:34:06.651 [2024-07-13 08:20:58.263015] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:06.651 qpair failed and we were unable to recover it. 00:34:06.651 [2024-07-13 08:20:58.272753] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:06.651 [2024-07-13 08:20:58.272899] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:06.651 [2024-07-13 08:20:58.272925] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:06.651 [2024-07-13 08:20:58.272940] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:06.651 [2024-07-13 08:20:58.272953] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fec000b90 00:34:06.651 [2024-07-13 08:20:58.272985] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:06.651 qpair failed and we were unable to recover it. 00:34:06.651 [2024-07-13 08:20:58.282790] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:06.651 [2024-07-13 08:20:58.282931] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:06.651 [2024-07-13 08:20:58.282959] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:06.651 [2024-07-13 08:20:58.282975] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:06.651 [2024-07-13 08:20:58.282988] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fec000b90 00:34:06.651 [2024-07-13 08:20:58.283019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:06.651 qpair failed and we were unable to recover it. 
00:34:06.651 [2024-07-13 08:20:58.292806] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:06.651 [2024-07-13 08:20:58.292947] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:06.651 [2024-07-13 08:20:58.292974] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:06.651 [2024-07-13 08:20:58.292990] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:06.651 [2024-07-13 08:20:58.293003] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fec000b90 00:34:06.651 [2024-07-13 08:20:58.293034] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:06.651 qpair failed and we were unable to recover it. 00:34:06.651 [2024-07-13 08:20:58.302862] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:06.651 [2024-07-13 08:20:58.303028] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:06.651 [2024-07-13 08:20:58.303059] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:06.651 [2024-07-13 08:20:58.303077] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:06.651 [2024-07-13 08:20:58.303090] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fec000b90 00:34:06.651 [2024-07-13 08:20:58.303122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:06.651 qpair failed and we were unable to recover it. 00:34:06.651 [2024-07-13 08:20:58.312923] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:06.651 [2024-07-13 08:20:58.313052] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:06.651 [2024-07-13 08:20:58.313080] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:06.651 [2024-07-13 08:20:58.313096] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:06.651 [2024-07-13 08:20:58.313115] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fec000b90 00:34:06.651 [2024-07-13 08:20:58.313159] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:06.651 qpair failed and we were unable to recover it. 
00:34:06.651 [2024-07-13 08:20:58.322907] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:06.651 [2024-07-13 08:20:58.323037] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:06.651 [2024-07-13 08:20:58.323065] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:06.651 [2024-07-13 08:20:58.323081] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:06.651 [2024-07-13 08:20:58.323094] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fec000b90 00:34:06.651 [2024-07-13 08:20:58.323125] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:06.651 qpair failed and we were unable to recover it. 00:34:06.651 [2024-07-13 08:20:58.332973] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:06.651 [2024-07-13 08:20:58.333102] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:06.651 [2024-07-13 08:20:58.333129] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:06.651 [2024-07-13 08:20:58.333144] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:06.651 [2024-07-13 08:20:58.333158] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fec000b90 00:34:06.651 [2024-07-13 08:20:58.333189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:06.651 qpair failed and we were unable to recover it. 00:34:06.651 [2024-07-13 08:20:58.342973] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:06.651 [2024-07-13 08:20:58.343100] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:06.651 [2024-07-13 08:20:58.343126] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:06.651 [2024-07-13 08:20:58.343141] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:06.651 [2024-07-13 08:20:58.343154] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fec000b90 00:34:06.651 [2024-07-13 08:20:58.343186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:06.651 qpair failed and we were unable to recover it. 
00:34:06.651 [2024-07-13 08:20:58.353095] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:06.651 [2024-07-13 08:20:58.353224] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:06.651 [2024-07-13 08:20:58.353249] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:06.651 [2024-07-13 08:20:58.353264] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:06.651 [2024-07-13 08:20:58.353277] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fec000b90 00:34:06.651 [2024-07-13 08:20:58.353307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:06.651 qpair failed and we were unable to recover it. 00:34:06.651 [2024-07-13 08:20:58.363110] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:06.651 [2024-07-13 08:20:58.363245] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:06.651 [2024-07-13 08:20:58.363272] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:06.651 [2024-07-13 08:20:58.363287] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:06.651 [2024-07-13 08:20:58.363300] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fec000b90 00:34:06.651 [2024-07-13 08:20:58.363330] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:06.651 qpair failed and we were unable to recover it. 00:34:06.651 [2024-07-13 08:20:58.373041] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:06.651 [2024-07-13 08:20:58.373157] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:06.651 [2024-07-13 08:20:58.373183] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:06.651 [2024-07-13 08:20:58.373198] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:06.651 [2024-07-13 08:20:58.373211] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fec000b90 00:34:06.651 [2024-07-13 08:20:58.373242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:06.651 qpair failed and we were unable to recover it. 
00:34:06.911 [2024-07-13 08:20:58.383081] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:06.911 [2024-07-13 08:20:58.383205] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:06.911 [2024-07-13 08:20:58.383232] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:06.911 [2024-07-13 08:20:58.383247] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:06.911 [2024-07-13 08:20:58.383261] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fec000b90 00:34:06.911 [2024-07-13 08:20:58.383291] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:06.911 qpair failed and we were unable to recover it. 00:34:06.911 [2024-07-13 08:20:58.393206] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:06.911 [2024-07-13 08:20:58.393332] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:06.911 [2024-07-13 08:20:58.393359] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:06.912 [2024-07-13 08:20:58.393375] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:06.912 [2024-07-13 08:20:58.393388] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fec000b90 00:34:06.912 [2024-07-13 08:20:58.393420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:06.912 qpair failed and we were unable to recover it. 00:34:06.912 [2024-07-13 08:20:58.403158] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:06.912 [2024-07-13 08:20:58.403330] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:06.912 [2024-07-13 08:20:58.403358] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:06.912 [2024-07-13 08:20:58.403384] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:06.912 [2024-07-13 08:20:58.403399] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fec000b90 00:34:06.912 [2024-07-13 08:20:58.403432] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:06.912 qpair failed and we were unable to recover it. 
00:34:06.912 [2024-07-13 08:20:58.413158] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:06.912 [2024-07-13 08:20:58.413280] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:06.912 [2024-07-13 08:20:58.413308] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:06.912 [2024-07-13 08:20:58.413323] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:06.912 [2024-07-13 08:20:58.413336] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fec000b90 00:34:06.912 [2024-07-13 08:20:58.413367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:06.912 qpair failed and we were unable to recover it. 00:34:06.912 [2024-07-13 08:20:58.423187] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:06.912 [2024-07-13 08:20:58.423305] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:06.912 [2024-07-13 08:20:58.423330] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:06.912 [2024-07-13 08:20:58.423345] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:06.912 [2024-07-13 08:20:58.423358] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fec000b90 00:34:06.912 [2024-07-13 08:20:58.423388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:06.912 qpair failed and we were unable to recover it. 00:34:06.912 [2024-07-13 08:20:58.433220] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:06.912 [2024-07-13 08:20:58.433343] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:06.912 [2024-07-13 08:20:58.433370] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:06.912 [2024-07-13 08:20:58.433386] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:06.912 [2024-07-13 08:20:58.433399] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fec000b90 00:34:06.912 [2024-07-13 08:20:58.433430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:06.912 qpair failed and we were unable to recover it. 
00:34:06.912 [2024-07-13 08:20:58.443240] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:06.912 [2024-07-13 08:20:58.443384] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:06.912 [2024-07-13 08:20:58.443411] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:06.912 [2024-07-13 08:20:58.443426] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:06.912 [2024-07-13 08:20:58.443438] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fec000b90 00:34:06.912 [2024-07-13 08:20:58.443468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:06.912 qpair failed and we were unable to recover it. 00:34:06.912 [2024-07-13 08:20:58.453274] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:06.912 [2024-07-13 08:20:58.453399] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:06.912 [2024-07-13 08:20:58.453427] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:06.912 [2024-07-13 08:20:58.453442] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:06.912 [2024-07-13 08:20:58.453456] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fec000b90 00:34:06.912 [2024-07-13 08:20:58.453486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:06.912 qpair failed and we were unable to recover it. 00:34:06.912 [2024-07-13 08:20:58.463311] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:06.912 [2024-07-13 08:20:58.463429] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:06.912 [2024-07-13 08:20:58.463462] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:06.912 [2024-07-13 08:20:58.463477] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:06.912 [2024-07-13 08:20:58.463490] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fec000b90 00:34:06.912 [2024-07-13 08:20:58.463520] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:06.912 qpair failed and we were unable to recover it. 
00:34:06.912 [2024-07-13 08:20:58.473335] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:06.912 [2024-07-13 08:20:58.473467] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:06.912 [2024-07-13 08:20:58.473494] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:06.912 [2024-07-13 08:20:58.473511] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:06.912 [2024-07-13 08:20:58.473524] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fec000b90 00:34:06.912 [2024-07-13 08:20:58.473554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:06.912 qpair failed and we were unable to recover it. 00:34:06.912 [2024-07-13 08:20:58.483360] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:06.912 [2024-07-13 08:20:58.483480] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:06.912 [2024-07-13 08:20:58.483508] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:06.912 [2024-07-13 08:20:58.483524] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:06.912 [2024-07-13 08:20:58.483537] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fec000b90 00:34:06.912 [2024-07-13 08:20:58.483567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:06.912 qpair failed and we were unable to recover it. 00:34:06.912 [2024-07-13 08:20:58.493479] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:06.912 [2024-07-13 08:20:58.493629] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:06.912 [2024-07-13 08:20:58.493661] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:06.912 [2024-07-13 08:20:58.493681] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:06.912 [2024-07-13 08:20:58.493695] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fec000b90 00:34:06.912 [2024-07-13 08:20:58.493739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:06.912 qpair failed and we were unable to recover it. 
00:34:06.912 [2024-07-13 08:20:58.503415] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:06.912 [2024-07-13 08:20:58.503534] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:06.912 [2024-07-13 08:20:58.503560] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:06.912 [2024-07-13 08:20:58.503575] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:06.912 [2024-07-13 08:20:58.503588] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fec000b90 00:34:06.912 [2024-07-13 08:20:58.503620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:06.912 qpair failed and we were unable to recover it. 00:34:06.912 [2024-07-13 08:20:58.513439] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:06.912 [2024-07-13 08:20:58.513577] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:06.912 [2024-07-13 08:20:58.513605] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:06.912 [2024-07-13 08:20:58.513621] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:06.912 [2024-07-13 08:20:58.513634] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fec000b90 00:34:06.912 [2024-07-13 08:20:58.513664] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:06.912 qpair failed and we were unable to recover it. 00:34:06.912 [2024-07-13 08:20:58.523498] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:06.912 [2024-07-13 08:20:58.523634] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:06.912 [2024-07-13 08:20:58.523660] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:06.912 [2024-07-13 08:20:58.523676] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:06.912 [2024-07-13 08:20:58.523689] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fec000b90 00:34:06.912 [2024-07-13 08:20:58.523720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:06.912 qpair failed and we were unable to recover it. 
00:34:06.912 [2024-07-13 08:20:58.533498] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:06.912 [2024-07-13 08:20:58.533620] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:06.912 [2024-07-13 08:20:58.533647] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:06.912 [2024-07-13 08:20:58.533662] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:06.912 [2024-07-13 08:20:58.533674] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fec000b90 00:34:06.913 [2024-07-13 08:20:58.533708] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:06.913 qpair failed and we were unable to recover it. 00:34:06.913 [2024-07-13 08:20:58.543519] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:06.913 [2024-07-13 08:20:58.543638] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:06.913 [2024-07-13 08:20:58.543664] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:06.913 [2024-07-13 08:20:58.543680] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:06.913 [2024-07-13 08:20:58.543694] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fec000b90 00:34:06.913 [2024-07-13 08:20:58.543723] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:06.913 qpair failed and we were unable to recover it. 00:34:06.913 [2024-07-13 08:20:58.553567] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:06.913 [2024-07-13 08:20:58.553707] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:06.913 [2024-07-13 08:20:58.553733] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:06.913 [2024-07-13 08:20:58.553748] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:06.913 [2024-07-13 08:20:58.553761] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fec000b90 00:34:06.913 [2024-07-13 08:20:58.553791] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:06.913 qpair failed and we were unable to recover it. 
00:34:06.913 [2024-07-13 08:20:58.563634] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:06.913 [2024-07-13 08:20:58.563786] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:06.913 [2024-07-13 08:20:58.563814] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:06.913 [2024-07-13 08:20:58.563830] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:06.913 [2024-07-13 08:20:58.563843] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fec000b90 00:34:06.913 [2024-07-13 08:20:58.563881] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:06.913 qpair failed and we were unable to recover it. 00:34:06.913 [2024-07-13 08:20:58.573614] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:06.913 [2024-07-13 08:20:58.573736] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:06.913 [2024-07-13 08:20:58.573760] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:06.913 [2024-07-13 08:20:58.573774] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:06.913 [2024-07-13 08:20:58.573787] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fec000b90 00:34:06.913 [2024-07-13 08:20:58.573816] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:06.913 qpair failed and we were unable to recover it. 00:34:06.913 [2024-07-13 08:20:58.583676] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:06.913 [2024-07-13 08:20:58.583800] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:06.913 [2024-07-13 08:20:58.583831] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:06.913 [2024-07-13 08:20:58.583847] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:06.913 [2024-07-13 08:20:58.583861] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fec000b90 00:34:06.913 [2024-07-13 08:20:58.583901] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:06.913 qpair failed and we were unable to recover it. 
00:34:06.913 [2024-07-13 08:20:58.593755] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:06.913 [2024-07-13 08:20:58.593887] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:06.913 [2024-07-13 08:20:58.593914] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:06.913 [2024-07-13 08:20:58.593929] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:06.913 [2024-07-13 08:20:58.593942] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fec000b90 00:34:06.913 [2024-07-13 08:20:58.593972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:06.913 qpair failed and we were unable to recover it. 00:34:06.913 [2024-07-13 08:20:58.603685] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:06.913 [2024-07-13 08:20:58.603809] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:06.913 [2024-07-13 08:20:58.603835] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:06.913 [2024-07-13 08:20:58.603850] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:06.913 [2024-07-13 08:20:58.603863] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fec000b90 00:34:06.913 [2024-07-13 08:20:58.603904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:06.913 qpair failed and we were unable to recover it. 00:34:06.913 [2024-07-13 08:20:58.613760] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:06.913 [2024-07-13 08:20:58.613885] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:06.913 [2024-07-13 08:20:58.613912] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:06.913 [2024-07-13 08:20:58.613926] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:06.913 [2024-07-13 08:20:58.613939] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fec000b90 00:34:06.913 [2024-07-13 08:20:58.613970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:06.913 qpair failed and we were unable to recover it. 
00:34:06.913 [2024-07-13 08:20:58.623746] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:06.913 [2024-07-13 08:20:58.623921] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:06.913 [2024-07-13 08:20:58.623947] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:06.913 [2024-07-13 08:20:58.623962] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:06.913 [2024-07-13 08:20:58.623975] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fec000b90 00:34:06.913 [2024-07-13 08:20:58.624011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:06.913 qpair failed and we were unable to recover it. 00:34:06.913 [2024-07-13 08:20:58.633795] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:06.913 [2024-07-13 08:20:58.633936] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:06.913 [2024-07-13 08:20:58.633963] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:06.913 [2024-07-13 08:20:58.633978] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:06.913 [2024-07-13 08:20:58.633991] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fec000b90 00:34:06.913 [2024-07-13 08:20:58.634021] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:06.913 qpair failed and we were unable to recover it. 00:34:07.173 [2024-07-13 08:20:58.643822] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:07.173 [2024-07-13 08:20:58.643955] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:07.173 [2024-07-13 08:20:58.643981] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:07.173 [2024-07-13 08:20:58.643996] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:07.173 [2024-07-13 08:20:58.644011] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fec000b90 00:34:07.173 [2024-07-13 08:20:58.644041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:07.173 qpair failed and we were unable to recover it. 
00:34:07.173 [2024-07-13 08:20:58.653846] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:07.173 [2024-07-13 08:20:58.653978] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:07.173 [2024-07-13 08:20:58.654005] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:07.173 [2024-07-13 08:20:58.654020] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:07.173 [2024-07-13 08:20:58.654033] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fec000b90 00:34:07.173 [2024-07-13 08:20:58.654064] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:07.173 qpair failed and we were unable to recover it. 00:34:07.173 [2024-07-13 08:20:58.663844] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:07.173 [2024-07-13 08:20:58.663974] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:07.173 [2024-07-13 08:20:58.664001] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:07.173 [2024-07-13 08:20:58.664016] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:07.173 [2024-07-13 08:20:58.664029] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fec000b90 00:34:07.173 [2024-07-13 08:20:58.664060] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:07.173 qpair failed and we were unable to recover it. 00:34:07.173 [2024-07-13 08:20:58.673897] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:07.173 [2024-07-13 08:20:58.674035] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:07.173 [2024-07-13 08:20:58.674061] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:07.173 [2024-07-13 08:20:58.674076] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:07.173 [2024-07-13 08:20:58.674089] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fec000b90 00:34:07.173 [2024-07-13 08:20:58.674120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:07.173 qpair failed and we were unable to recover it. 
00:34:07.173 [2024-07-13 08:20:58.684002] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:07.173 [2024-07-13 08:20:58.684123] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:07.173 [2024-07-13 08:20:58.684150] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:07.173 [2024-07-13 08:20:58.684165] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:07.173 [2024-07-13 08:20:58.684178] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fec000b90 00:34:07.173 [2024-07-13 08:20:58.684208] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:07.173 qpair failed and we were unable to recover it. 00:34:07.173 [2024-07-13 08:20:58.693952] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:07.173 [2024-07-13 08:20:58.694079] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:07.173 [2024-07-13 08:20:58.694104] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:07.173 [2024-07-13 08:20:58.694119] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:07.173 [2024-07-13 08:20:58.694132] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fec000b90 00:34:07.173 [2024-07-13 08:20:58.694178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:07.173 qpair failed and we were unable to recover it. 00:34:07.173 [2024-07-13 08:20:58.703985] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:07.173 [2024-07-13 08:20:58.704112] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:07.173 [2024-07-13 08:20:58.704138] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:07.173 [2024-07-13 08:20:58.704153] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:07.173 [2024-07-13 08:20:58.704166] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fec000b90 00:34:07.173 [2024-07-13 08:20:58.704197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:07.173 qpair failed and we were unable to recover it. 
00:34:07.173 [2024-07-13 08:20:58.714029] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:07.173 [2024-07-13 08:20:58.714152] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:07.173 [2024-07-13 08:20:58.714177] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:07.173 [2024-07-13 08:20:58.714193] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:07.173 [2024-07-13 08:20:58.714212] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fec000b90 00:34:07.173 [2024-07-13 08:20:58.714243] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:07.174 qpair failed and we were unable to recover it. 00:34:07.174 [2024-07-13 08:20:58.724155] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:07.174 [2024-07-13 08:20:58.724291] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:07.174 [2024-07-13 08:20:58.724318] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:07.174 [2024-07-13 08:20:58.724332] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:07.174 [2024-07-13 08:20:58.724345] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fec000b90 00:34:07.174 [2024-07-13 08:20:58.724375] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:07.174 qpair failed and we were unable to recover it. 00:34:07.174 [2024-07-13 08:20:58.734054] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:07.174 [2024-07-13 08:20:58.734182] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:07.174 [2024-07-13 08:20:58.734208] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:07.174 [2024-07-13 08:20:58.734223] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:07.174 [2024-07-13 08:20:58.734237] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fec000b90 00:34:07.174 [2024-07-13 08:20:58.734267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:07.174 qpair failed and we were unable to recover it. 
00:34:07.174 [2024-07-13 08:20:58.744253] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:07.174 [2024-07-13 08:20:58.744386] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:07.174 [2024-07-13 08:20:58.744412] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:07.174 [2024-07-13 08:20:58.744427] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:07.174 [2024-07-13 08:20:58.744440] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fec000b90 00:34:07.174 [2024-07-13 08:20:58.744470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:07.174 qpair failed and we were unable to recover it. 00:34:07.174 [2024-07-13 08:20:58.754222] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:07.174 [2024-07-13 08:20:58.754367] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:07.174 [2024-07-13 08:20:58.754393] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:07.174 [2024-07-13 08:20:58.754408] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:07.174 [2024-07-13 08:20:58.754421] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fec000b90 00:34:07.174 [2024-07-13 08:20:58.754467] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:07.174 qpair failed and we were unable to recover it. 00:34:07.174 [2024-07-13 08:20:58.764231] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:07.174 [2024-07-13 08:20:58.764357] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:07.174 [2024-07-13 08:20:58.764384] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:07.174 [2024-07-13 08:20:58.764399] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:07.174 [2024-07-13 08:20:58.764413] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fec000b90 00:34:07.174 [2024-07-13 08:20:58.764458] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:07.174 qpair failed and we were unable to recover it. 
00:34:07.174 [2024-07-13 08:20:58.774219] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:07.174 [2024-07-13 08:20:58.774341] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:07.174 [2024-07-13 08:20:58.774367] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:07.174 [2024-07-13 08:20:58.774382] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:07.174 [2024-07-13 08:20:58.774395] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fec000b90 00:34:07.174 [2024-07-13 08:20:58.774427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:07.174 qpair failed and we were unable to recover it. 00:34:07.174 [2024-07-13 08:20:58.784211] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:07.174 [2024-07-13 08:20:58.784352] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:07.174 [2024-07-13 08:20:58.784378] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:07.174 [2024-07-13 08:20:58.784392] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:07.174 [2024-07-13 08:20:58.784406] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fec000b90 00:34:07.174 [2024-07-13 08:20:58.784452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:07.174 qpair failed and we were unable to recover it. 00:34:07.174 [2024-07-13 08:20:58.794269] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:07.174 [2024-07-13 08:20:58.794398] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:07.174 [2024-07-13 08:20:58.794426] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:07.174 [2024-07-13 08:20:58.794442] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:07.174 [2024-07-13 08:20:58.794471] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fec000b90 00:34:07.174 [2024-07-13 08:20:58.794501] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:07.174 qpair failed and we were unable to recover it. 
00:34:07.174 [2024-07-13 08:20:58.804279] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:07.174 [2024-07-13 08:20:58.804402] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:07.174 [2024-07-13 08:20:58.804429] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:07.174 [2024-07-13 08:20:58.804451] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:07.174 [2024-07-13 08:20:58.804466] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fec000b90 00:34:07.174 [2024-07-13 08:20:58.804497] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:07.174 qpair failed and we were unable to recover it. 00:34:07.174 [2024-07-13 08:20:58.814335] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:07.174 [2024-07-13 08:20:58.814465] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:07.174 [2024-07-13 08:20:58.814491] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:07.174 [2024-07-13 08:20:58.814506] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:07.174 [2024-07-13 08:20:58.814519] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fec000b90 00:34:07.174 [2024-07-13 08:20:58.814550] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:07.174 qpair failed and we were unable to recover it. 00:34:07.174 [2024-07-13 08:20:58.824333] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:07.174 [2024-07-13 08:20:58.824462] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:07.174 [2024-07-13 08:20:58.824487] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:07.174 [2024-07-13 08:20:58.824503] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:07.174 [2024-07-13 08:20:58.824517] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fec000b90 00:34:07.174 [2024-07-13 08:20:58.824547] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:07.174 qpair failed and we were unable to recover it. 
00:34:07.174 [2024-07-13 08:20:58.834390] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:07.174 [2024-07-13 08:20:58.834520] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:07.174 [2024-07-13 08:20:58.834546] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:07.174 [2024-07-13 08:20:58.834562] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:07.174 [2024-07-13 08:20:58.834576] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fec000b90 00:34:07.174 [2024-07-13 08:20:58.834621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:07.174 qpair failed and we were unable to recover it. 00:34:07.174 [2024-07-13 08:20:58.844393] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:07.174 [2024-07-13 08:20:58.844522] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:07.174 [2024-07-13 08:20:58.844548] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:07.174 [2024-07-13 08:20:58.844563] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:07.174 [2024-07-13 08:20:58.844577] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fec000b90 00:34:07.174 [2024-07-13 08:20:58.844607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:07.174 qpair failed and we were unable to recover it. 00:34:07.174 [2024-07-13 08:20:58.854409] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:07.174 [2024-07-13 08:20:58.854533] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:07.174 [2024-07-13 08:20:58.854559] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:07.174 [2024-07-13 08:20:58.854574] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:07.174 [2024-07-13 08:20:58.854587] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fec000b90 00:34:07.174 [2024-07-13 08:20:58.854616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:07.174 qpair failed and we were unable to recover it. 
00:34:07.174 [2024-07-13 08:20:58.864479] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:07.174 [2024-07-13 08:20:58.864601] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:07.175 [2024-07-13 08:20:58.864628] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:07.175 [2024-07-13 08:20:58.864643] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:07.175 [2024-07-13 08:20:58.864655] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fec000b90 00:34:07.175 [2024-07-13 08:20:58.864685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:07.175 qpair failed and we were unable to recover it. 00:34:07.175 [2024-07-13 08:20:58.874458] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:07.175 [2024-07-13 08:20:58.874584] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:07.175 [2024-07-13 08:20:58.874611] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:07.175 [2024-07-13 08:20:58.874626] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:07.175 [2024-07-13 08:20:58.874640] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fec000b90 00:34:07.175 [2024-07-13 08:20:58.874670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:07.175 qpair failed and we were unable to recover it. 00:34:07.175 [2024-07-13 08:20:58.884455] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:07.175 [2024-07-13 08:20:58.884627] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:07.175 [2024-07-13 08:20:58.884654] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:07.175 [2024-07-13 08:20:58.884669] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:07.175 [2024-07-13 08:20:58.884683] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fec000b90 00:34:07.175 [2024-07-13 08:20:58.884713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:07.175 qpair failed and we were unable to recover it. 
00:34:07.175 [2024-07-13 08:20:58.894536] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:07.175 [2024-07-13 08:20:58.894705] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:07.175 [2024-07-13 08:20:58.894738] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:07.175 [2024-07-13 08:20:58.894754] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:07.175 [2024-07-13 08:20:58.894768] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fec000b90 00:34:07.175 [2024-07-13 08:20:58.894798] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:07.175 qpair failed and we were unable to recover it. 00:34:07.175 [2024-07-13 08:20:58.904570] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:07.434 [2024-07-13 08:20:58.904700] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:07.434 [2024-07-13 08:20:58.904728] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:07.434 [2024-07-13 08:20:58.904743] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:07.434 [2024-07-13 08:20:58.904771] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fec000b90 00:34:07.434 [2024-07-13 08:20:58.904801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:07.434 qpair failed and we were unable to recover it. 00:34:07.434 [2024-07-13 08:20:58.914654] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:07.434 [2024-07-13 08:20:58.914816] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:07.434 [2024-07-13 08:20:58.914843] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:07.434 [2024-07-13 08:20:58.914884] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:07.434 [2024-07-13 08:20:58.914903] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fec000b90 00:34:07.434 [2024-07-13 08:20:58.914935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:07.434 qpair failed and we were unable to recover it. 
00:34:07.434 [2024-07-13 08:20:58.924585] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:07.434 [2024-07-13 08:20:58.924705] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:07.434 [2024-07-13 08:20:58.924733] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:07.434 [2024-07-13 08:20:58.924748] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:07.434 [2024-07-13 08:20:58.924760] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fec000b90 00:34:07.434 [2024-07-13 08:20:58.924791] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:07.434 qpair failed and we were unable to recover it. 00:34:07.434 [2024-07-13 08:20:58.934703] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:07.434 [2024-07-13 08:20:58.934824] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:07.434 [2024-07-13 08:20:58.934850] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:07.434 [2024-07-13 08:20:58.934872] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:07.434 [2024-07-13 08:20:58.934888] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fec000b90 00:34:07.434 [2024-07-13 08:20:58.934924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:07.434 qpair failed and we were unable to recover it. 00:34:07.434 [2024-07-13 08:20:58.944681] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:07.434 [2024-07-13 08:20:58.944798] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:07.434 [2024-07-13 08:20:58.944825] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:07.434 [2024-07-13 08:20:58.944840] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:07.434 [2024-07-13 08:20:58.944853] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fec000b90 00:34:07.434 [2024-07-13 08:20:58.944902] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:07.435 qpair failed and we were unable to recover it. 
00:34:07.435 [2024-07-13 08:20:58.954704] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:07.435 [2024-07-13 08:20:58.954850] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:07.435 [2024-07-13 08:20:58.954886] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:07.435 [2024-07-13 08:20:58.954902] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:07.435 [2024-07-13 08:20:58.954915] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fec000b90 00:34:07.435 [2024-07-13 08:20:58.954946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:07.435 qpair failed and we were unable to recover it. 00:34:07.435 [2024-07-13 08:20:58.964714] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:07.435 [2024-07-13 08:20:58.964838] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:07.435 [2024-07-13 08:20:58.964873] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:07.435 [2024-07-13 08:20:58.964891] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:07.435 [2024-07-13 08:20:58.964905] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fec000b90 00:34:07.435 [2024-07-13 08:20:58.964938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:07.435 qpair failed and we were unable to recover it. 00:34:07.435 [2024-07-13 08:20:58.974722] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:07.435 [2024-07-13 08:20:58.974854] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:07.435 [2024-07-13 08:20:58.974888] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:07.435 [2024-07-13 08:20:58.974904] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:07.435 [2024-07-13 08:20:58.974917] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fec000b90 00:34:07.435 [2024-07-13 08:20:58.974948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:07.435 qpair failed and we were unable to recover it. 
00:34:07.435 [2024-07-13 08:20:58.984764] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:07.435 [2024-07-13 08:20:58.984903] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:07.435 [2024-07-13 08:20:58.984934] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:07.435 [2024-07-13 08:20:58.984950] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:07.435 [2024-07-13 08:20:58.984965] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fec000b90 00:34:07.435 [2024-07-13 08:20:58.984995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:07.435 qpair failed and we were unable to recover it. 00:34:07.435 [2024-07-13 08:20:58.994806] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:07.435 [2024-07-13 08:20:58.994942] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:07.435 [2024-07-13 08:20:58.994968] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:07.435 [2024-07-13 08:20:58.994983] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:07.435 [2024-07-13 08:20:58.994997] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fec000b90 00:34:07.435 [2024-07-13 08:20:58.995028] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:07.435 qpair failed and we were unable to recover it. 00:34:07.435 [2024-07-13 08:20:59.004832] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:07.435 [2024-07-13 08:20:59.004964] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:07.435 [2024-07-13 08:20:59.004991] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:07.435 [2024-07-13 08:20:59.005006] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:07.435 [2024-07-13 08:20:59.005019] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fec000b90 00:34:07.435 [2024-07-13 08:20:59.005050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:07.435 qpair failed and we were unable to recover it. 
00:34:07.435 [2024-07-13 08:20:59.014875] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:07.435 [2024-07-13 08:20:59.014995] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:07.435 [2024-07-13 08:20:59.015022] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:07.435 [2024-07-13 08:20:59.015037] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:07.435 [2024-07-13 08:20:59.015050] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fec000b90 00:34:07.435 [2024-07-13 08:20:59.015093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:07.435 qpair failed and we were unable to recover it. 00:34:07.435 [2024-07-13 08:20:59.024958] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:07.435 [2024-07-13 08:20:59.025076] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:07.435 [2024-07-13 08:20:59.025102] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:07.435 [2024-07-13 08:20:59.025117] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:07.435 [2024-07-13 08:20:59.025130] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fec000b90 00:34:07.435 [2024-07-13 08:20:59.025167] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:07.435 qpair failed and we were unable to recover it. 00:34:07.435 [2024-07-13 08:20:59.034937] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:07.435 [2024-07-13 08:20:59.035066] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:07.435 [2024-07-13 08:20:59.035092] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:07.435 [2024-07-13 08:20:59.035107] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:07.435 [2024-07-13 08:20:59.035120] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fec000b90 00:34:07.435 [2024-07-13 08:20:59.035151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:07.435 qpair failed and we were unable to recover it. 
00:34:07.435 [2024-07-13 08:20:59.044943] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:07.435 [2024-07-13 08:20:59.045067] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:07.435 [2024-07-13 08:20:59.045093] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:07.435 [2024-07-13 08:20:59.045108] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:07.435 [2024-07-13 08:20:59.045121] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fec000b90 00:34:07.435 [2024-07-13 08:20:59.045152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:07.435 qpair failed and we were unable to recover it. 00:34:07.435 [2024-07-13 08:20:59.054990] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:07.435 [2024-07-13 08:20:59.055110] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:07.435 [2024-07-13 08:20:59.055136] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:07.435 [2024-07-13 08:20:59.055151] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:07.435 [2024-07-13 08:20:59.055164] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fec000b90 00:34:07.435 [2024-07-13 08:20:59.055195] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:07.435 qpair failed and we were unable to recover it. 00:34:07.435 [2024-07-13 08:20:59.064993] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:07.435 [2024-07-13 08:20:59.065115] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:07.435 [2024-07-13 08:20:59.065141] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:07.435 [2024-07-13 08:20:59.065156] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:07.435 [2024-07-13 08:20:59.065169] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fec000b90 00:34:07.435 [2024-07-13 08:20:59.065200] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:07.435 qpair failed and we were unable to recover it. 
00:34:07.435 [2024-07-13 08:20:59.075060] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:07.435 [2024-07-13 08:20:59.075240] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:07.435 [2024-07-13 08:20:59.075267] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:07.435 [2024-07-13 08:20:59.075282] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:07.435 [2024-07-13 08:20:59.075296] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fec000b90 00:34:07.435 [2024-07-13 08:20:59.075327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:07.435 qpair failed and we were unable to recover it. 00:34:07.435 [2024-07-13 08:20:59.085107] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:07.435 [2024-07-13 08:20:59.085283] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:07.435 [2024-07-13 08:20:59.085323] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:07.435 [2024-07-13 08:20:59.085341] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:07.435 [2024-07-13 08:20:59.085355] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fec000b90 00:34:07.435 [2024-07-13 08:20:59.085400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:07.435 qpair failed and we were unable to recover it. 00:34:07.435 [2024-07-13 08:20:59.095128] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:07.436 [2024-07-13 08:20:59.095255] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:07.436 [2024-07-13 08:20:59.095283] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:07.436 [2024-07-13 08:20:59.095302] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:07.436 [2024-07-13 08:20:59.095332] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fec000b90 00:34:07.436 [2024-07-13 08:20:59.095362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:07.436 qpair failed and we were unable to recover it. 
00:34:07.436 [2024-07-13 08:20:59.105121] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:07.436 [2024-07-13 08:20:59.105244] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:07.436 [2024-07-13 08:20:59.105271] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:07.436 [2024-07-13 08:20:59.105286] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:07.436 [2024-07-13 08:20:59.105299] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fec000b90 00:34:07.436 [2024-07-13 08:20:59.105331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:07.436 qpair failed and we were unable to recover it. 00:34:07.436 [2024-07-13 08:20:59.115164] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:07.436 [2024-07-13 08:20:59.115303] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:07.436 [2024-07-13 08:20:59.115330] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:07.436 [2024-07-13 08:20:59.115346] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:07.436 [2024-07-13 08:20:59.115369] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fec000b90 00:34:07.436 [2024-07-13 08:20:59.115411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:07.436 qpair failed and we were unable to recover it. 00:34:07.436 [2024-07-13 08:20:59.125222] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:07.436 [2024-07-13 08:20:59.125350] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:07.436 [2024-07-13 08:20:59.125376] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:07.436 [2024-07-13 08:20:59.125391] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:07.436 [2024-07-13 08:20:59.125403] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fec000b90 00:34:07.436 [2024-07-13 08:20:59.125434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:07.436 qpair failed and we were unable to recover it. 
00:34:07.436 [2024-07-13 08:20:59.135202] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:07.436 [2024-07-13 08:20:59.135320] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:07.436 [2024-07-13 08:20:59.135346] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:07.436 [2024-07-13 08:20:59.135362] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:07.436 [2024-07-13 08:20:59.135375] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fec000b90 00:34:07.436 [2024-07-13 08:20:59.135406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:07.436 qpair failed and we were unable to recover it. 00:34:07.436 [2024-07-13 08:20:59.145259] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:07.436 [2024-07-13 08:20:59.145399] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:07.436 [2024-07-13 08:20:59.145426] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:07.436 [2024-07-13 08:20:59.145442] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:07.436 [2024-07-13 08:20:59.145455] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fec000b90 00:34:07.436 [2024-07-13 08:20:59.145486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:07.436 qpair failed and we were unable to recover it. 00:34:07.436 [2024-07-13 08:20:59.155301] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:07.436 [2024-07-13 08:20:59.155446] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:07.436 [2024-07-13 08:20:59.155473] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:07.436 [2024-07-13 08:20:59.155488] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:07.436 [2024-07-13 08:20:59.155501] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fec000b90 00:34:07.436 [2024-07-13 08:20:59.155546] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:07.436 qpair failed and we were unable to recover it. 
00:34:08.221 [2024-07-13 08:20:59.827187] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:08.221 [2024-07-13 08:20:59.827336] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:08.221 [2024-07-13 08:20:59.827362] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:08.221 [2024-07-13 08:20:59.827376] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:08.221 [2024-07-13 08:20:59.827389] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fec000b90 00:34:08.221 [2024-07-13 08:20:59.827441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:08.221 qpair failed and we were unable to recover it. 00:34:08.221 [2024-07-13 08:20:59.837301] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:08.221 [2024-07-13 08:20:59.837462] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:08.221 [2024-07-13 08:20:59.837488] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:08.221 [2024-07-13 08:20:59.837504] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:08.221 [2024-07-13 08:20:59.837517] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fec000b90 00:34:08.221 [2024-07-13 08:20:59.837546] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:08.221 qpair failed and we were unable to recover it. 00:34:08.221 [2024-07-13 08:20:59.847233] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:08.222 [2024-07-13 08:20:59.847364] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:08.222 [2024-07-13 08:20:59.847391] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:08.222 [2024-07-13 08:20:59.847406] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:08.222 [2024-07-13 08:20:59.847420] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fec000b90 00:34:08.222 [2024-07-13 08:20:59.847450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:08.222 qpair failed and we were unable to recover it. 
00:34:08.222 [2024-07-13 08:20:59.857300] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:08.222 [2024-07-13 08:20:59.857431] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:08.222 [2024-07-13 08:20:59.857457] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:08.222 [2024-07-13 08:20:59.857473] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:08.222 [2024-07-13 08:20:59.857486] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fec000b90 00:34:08.222 [2024-07-13 08:20:59.857517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:08.222 qpair failed and we were unable to recover it. 00:34:08.222 [2024-07-13 08:20:59.867289] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:08.222 [2024-07-13 08:20:59.867408] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:08.222 [2024-07-13 08:20:59.867444] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:08.222 [2024-07-13 08:20:59.867460] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:08.222 [2024-07-13 08:20:59.867473] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fec000b90 00:34:08.222 [2024-07-13 08:20:59.867503] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:08.222 qpair failed and we were unable to recover it. 00:34:08.222 [2024-07-13 08:20:59.877335] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:08.222 [2024-07-13 08:20:59.877490] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:08.222 [2024-07-13 08:20:59.877525] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:08.222 [2024-07-13 08:20:59.877542] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:08.222 [2024-07-13 08:20:59.877555] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fec000b90 00:34:08.222 [2024-07-13 08:20:59.877585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:08.222 qpair failed and we were unable to recover it. 
00:34:08.222 [2024-07-13 08:20:59.887387] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:08.222 [2024-07-13 08:20:59.887536] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:08.222 [2024-07-13 08:20:59.887567] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:08.222 [2024-07-13 08:20:59.887584] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:08.222 [2024-07-13 08:20:59.887598] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fec000b90 00:34:08.222 [2024-07-13 08:20:59.887643] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:08.222 qpair failed and we were unable to recover it. 00:34:08.222 [2024-07-13 08:20:59.897496] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:08.222 [2024-07-13 08:20:59.897649] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:08.222 [2024-07-13 08:20:59.897677] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:08.222 [2024-07-13 08:20:59.897692] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:08.222 [2024-07-13 08:20:59.897705] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fec000b90 00:34:08.222 [2024-07-13 08:20:59.897750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:08.222 qpair failed and we were unable to recover it. 00:34:08.222 [2024-07-13 08:20:59.907451] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:08.222 [2024-07-13 08:20:59.907588] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:08.222 [2024-07-13 08:20:59.907614] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:08.222 [2024-07-13 08:20:59.907629] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:08.222 [2024-07-13 08:20:59.907642] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fec000b90 00:34:08.222 [2024-07-13 08:20:59.907673] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:08.222 qpair failed and we were unable to recover it. 
00:34:08.222 [2024-07-13 08:20:59.917451] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:08.222 [2024-07-13 08:20:59.917577] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:08.222 [2024-07-13 08:20:59.917604] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:08.222 [2024-07-13 08:20:59.917619] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:08.222 [2024-07-13 08:20:59.917638] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fec000b90 00:34:08.222 [2024-07-13 08:20:59.917669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:08.222 qpair failed and we were unable to recover it. 00:34:08.222 [2024-07-13 08:20:59.927502] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:08.222 [2024-07-13 08:20:59.927627] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:08.222 [2024-07-13 08:20:59.927653] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:08.222 [2024-07-13 08:20:59.927669] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:08.222 [2024-07-13 08:20:59.927682] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fec000b90 00:34:08.222 [2024-07-13 08:20:59.927727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:08.222 qpair failed and we were unable to recover it. 00:34:08.222 [2024-07-13 08:20:59.937487] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:08.222 [2024-07-13 08:20:59.937624] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:08.222 [2024-07-13 08:20:59.937650] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:08.222 [2024-07-13 08:20:59.937665] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:08.222 [2024-07-13 08:20:59.937678] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fec000b90 00:34:08.222 [2024-07-13 08:20:59.937708] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:08.222 qpair failed and we were unable to recover it. 
00:34:08.222 [2024-07-13 08:20:59.947527] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:08.222 [2024-07-13 08:20:59.947695] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:08.222 [2024-07-13 08:20:59.947721] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:08.222 [2024-07-13 08:20:59.947736] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:08.222 [2024-07-13 08:20:59.947749] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fec000b90 00:34:08.222 [2024-07-13 08:20:59.947778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:08.223 qpair failed and we were unable to recover it. 00:34:08.483 [2024-07-13 08:20:59.957581] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:08.483 [2024-07-13 08:20:59.957731] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:08.483 [2024-07-13 08:20:59.957757] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:08.483 [2024-07-13 08:20:59.957772] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:08.483 [2024-07-13 08:20:59.957785] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fec000b90 00:34:08.483 [2024-07-13 08:20:59.957815] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:08.483 qpair failed and we were unable to recover it. 00:34:08.483 [2024-07-13 08:20:59.967570] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:08.483 [2024-07-13 08:20:59.967710] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:08.483 [2024-07-13 08:20:59.967737] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:08.483 [2024-07-13 08:20:59.967753] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:08.483 [2024-07-13 08:20:59.967766] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fec000b90 00:34:08.483 [2024-07-13 08:20:59.967796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:08.483 qpair failed and we were unable to recover it. 
00:34:08.483 [2024-07-13 08:20:59.977604] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:08.483 [2024-07-13 08:20:59.977723] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:08.483 [2024-07-13 08:20:59.977750] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:08.483 [2024-07-13 08:20:59.977765] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:08.483 [2024-07-13 08:20:59.977778] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fec000b90 00:34:08.483 [2024-07-13 08:20:59.977808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:08.483 qpair failed and we were unable to recover it. 00:34:08.483 [2024-07-13 08:20:59.987638] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:08.483 [2024-07-13 08:20:59.987781] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:08.483 [2024-07-13 08:20:59.987807] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:08.483 [2024-07-13 08:20:59.987822] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:08.483 [2024-07-13 08:20:59.987835] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fec000b90 00:34:08.483 [2024-07-13 08:20:59.987871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:08.483 qpair failed and we were unable to recover it. 00:34:08.483 [2024-07-13 08:20:59.997688] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:08.483 [2024-07-13 08:20:59.997811] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:08.483 [2024-07-13 08:20:59.997837] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:08.483 [2024-07-13 08:20:59.997852] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:08.483 [2024-07-13 08:20:59.997873] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fec000b90 00:34:08.483 [2024-07-13 08:20:59.997906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:08.483 qpair failed and we were unable to recover it. 
00:34:08.483 [2024-07-13 08:21:00.007784] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:08.483 [2024-07-13 08:21:00.007918] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:08.483 [2024-07-13 08:21:00.007946] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:08.483 [2024-07-13 08:21:00.007962] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:08.484 [2024-07-13 08:21:00.007980] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fec000b90 00:34:08.484 [2024-07-13 08:21:00.008011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:08.484 qpair failed and we were unable to recover it. 00:34:08.484 [2024-07-13 08:21:00.017760] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:08.484 [2024-07-13 08:21:00.017907] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:08.484 [2024-07-13 08:21:00.017938] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:08.484 [2024-07-13 08:21:00.017954] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:08.484 [2024-07-13 08:21:00.017967] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fec000b90 00:34:08.484 [2024-07-13 08:21:00.017999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:08.484 qpair failed and we were unable to recover it. 00:34:08.484 [2024-07-13 08:21:00.027878] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:08.484 [2024-07-13 08:21:00.028038] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:08.484 [2024-07-13 08:21:00.028066] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:08.484 [2024-07-13 08:21:00.028081] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:08.484 [2024-07-13 08:21:00.028094] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fec000b90 00:34:08.484 [2024-07-13 08:21:00.028125] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:08.484 qpair failed and we were unable to recover it. 
00:34:08.484 [2024-07-13 08:21:00.037856] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:08.484 [2024-07-13 08:21:00.038027] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:08.484 [2024-07-13 08:21:00.038055] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:08.484 [2024-07-13 08:21:00.038070] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:08.484 [2024-07-13 08:21:00.038083] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fec000b90 00:34:08.484 [2024-07-13 08:21:00.038115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:08.484 qpair failed and we were unable to recover it. 00:34:08.484 [2024-07-13 08:21:00.047905] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:08.484 [2024-07-13 08:21:00.048046] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:08.484 [2024-07-13 08:21:00.048073] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:08.484 [2024-07-13 08:21:00.048089] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:08.484 [2024-07-13 08:21:00.048102] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fec000b90 00:34:08.484 [2024-07-13 08:21:00.048133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:08.484 qpair failed and we were unable to recover it. 00:34:08.484 [2024-07-13 08:21:00.057857] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:08.484 [2024-07-13 08:21:00.057990] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:08.484 [2024-07-13 08:21:00.058025] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:08.484 [2024-07-13 08:21:00.058042] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:08.484 [2024-07-13 08:21:00.058057] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:08.484 [2024-07-13 08:21:00.058089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:08.484 qpair failed and we were unable to recover it. 
00:34:08.484 [2024-07-13 08:21:00.067904] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:08.484 [2024-07-13 08:21:00.068028] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:08.484 [2024-07-13 08:21:00.068056] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:08.484 [2024-07-13 08:21:00.068072] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:08.484 [2024-07-13 08:21:00.068086] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:08.484 [2024-07-13 08:21:00.068118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:08.484 qpair failed and we were unable to recover it. 00:34:08.484 [2024-07-13 08:21:00.077967] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:08.484 [2024-07-13 08:21:00.078147] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:08.484 [2024-07-13 08:21:00.078176] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:08.484 [2024-07-13 08:21:00.078193] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:08.484 [2024-07-13 08:21:00.078206] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:08.484 [2024-07-13 08:21:00.078237] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:08.484 qpair failed and we were unable to recover it. 00:34:08.484 [2024-07-13 08:21:00.087970] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:08.484 [2024-07-13 08:21:00.088095] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:08.484 [2024-07-13 08:21:00.088122] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:08.484 [2024-07-13 08:21:00.088137] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:08.484 [2024-07-13 08:21:00.088157] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:08.484 [2024-07-13 08:21:00.088202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:08.484 qpair failed and we were unable to recover it. 
00:34:08.484 [2024-07-13 08:21:00.098008] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:08.484 [2024-07-13 08:21:00.098188] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:08.484 [2024-07-13 08:21:00.098230] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:08.484 [2024-07-13 08:21:00.098252] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:08.484 [2024-07-13 08:21:00.098266] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:08.484 [2024-07-13 08:21:00.098295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:08.484 qpair failed and we were unable to recover it. 00:34:08.484 [2024-07-13 08:21:00.108010] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:08.484 [2024-07-13 08:21:00.108141] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:08.484 [2024-07-13 08:21:00.108169] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:08.484 [2024-07-13 08:21:00.108185] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:08.484 [2024-07-13 08:21:00.108203] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:08.484 [2024-07-13 08:21:00.108248] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:08.484 qpair failed and we were unable to recover it. 00:34:08.484 [2024-07-13 08:21:00.118042] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:08.484 [2024-07-13 08:21:00.118178] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:08.484 [2024-07-13 08:21:00.118205] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:08.484 [2024-07-13 08:21:00.118220] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:08.484 [2024-07-13 08:21:00.118233] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:08.484 [2024-07-13 08:21:00.118263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:08.485 qpair failed and we were unable to recover it. 
00:34:08.485 [2024-07-13 08:21:00.128049] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:08.485 [2024-07-13 08:21:00.128186] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:08.485 [2024-07-13 08:21:00.128212] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:08.485 [2024-07-13 08:21:00.128228] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:08.485 [2024-07-13 08:21:00.128241] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:08.485 [2024-07-13 08:21:00.128272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:08.485 qpair failed and we were unable to recover it. 00:34:08.485 [2024-07-13 08:21:00.138077] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:08.485 [2024-07-13 08:21:00.138200] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:08.485 [2024-07-13 08:21:00.138226] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:08.485 [2024-07-13 08:21:00.138241] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:08.485 [2024-07-13 08:21:00.138255] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:08.485 [2024-07-13 08:21:00.138286] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:08.485 qpair failed and we were unable to recover it. 00:34:08.485 [2024-07-13 08:21:00.148179] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:08.485 [2024-07-13 08:21:00.148301] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:08.485 [2024-07-13 08:21:00.148328] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:08.485 [2024-07-13 08:21:00.148343] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:08.485 [2024-07-13 08:21:00.148358] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:08.485 [2024-07-13 08:21:00.148388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:08.485 qpair failed and we were unable to recover it. 
00:34:08.485 [2024-07-13 08:21:00.158187] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:08.485 [2024-07-13 08:21:00.158329] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:08.485 [2024-07-13 08:21:00.158356] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:08.485 [2024-07-13 08:21:00.158372] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:08.485 [2024-07-13 08:21:00.158399] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:08.485 [2024-07-13 08:21:00.158429] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:08.485 qpair failed and we were unable to recover it. 00:34:08.485 [2024-07-13 08:21:00.168191] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:08.485 [2024-07-13 08:21:00.168321] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:08.485 [2024-07-13 08:21:00.168349] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:08.485 [2024-07-13 08:21:00.168364] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:08.485 [2024-07-13 08:21:00.168378] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:08.485 [2024-07-13 08:21:00.168408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:08.485 qpair failed and we were unable to recover it. 00:34:08.485 [2024-07-13 08:21:00.178207] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:08.485 [2024-07-13 08:21:00.178337] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:08.485 [2024-07-13 08:21:00.178363] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:08.485 [2024-07-13 08:21:00.178379] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:08.485 [2024-07-13 08:21:00.178393] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:08.485 [2024-07-13 08:21:00.178422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:08.485 qpair failed and we were unable to recover it. 
00:34:08.485 [2024-07-13 08:21:00.188216] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:08.485 [2024-07-13 08:21:00.188338] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:08.485 [2024-07-13 08:21:00.188369] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:08.485 [2024-07-13 08:21:00.188384] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:08.485 [2024-07-13 08:21:00.188397] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:08.485 [2024-07-13 08:21:00.188428] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:08.485 qpair failed and we were unable to recover it. 00:34:08.485 [2024-07-13 08:21:00.198227] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:08.485 [2024-07-13 08:21:00.198361] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:08.485 [2024-07-13 08:21:00.198387] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:08.485 [2024-07-13 08:21:00.198403] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:08.485 [2024-07-13 08:21:00.198416] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:08.485 [2024-07-13 08:21:00.198446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:08.485 qpair failed and we were unable to recover it. 00:34:08.485 [2024-07-13 08:21:00.208321] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:08.485 [2024-07-13 08:21:00.208447] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:08.485 [2024-07-13 08:21:00.208474] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:08.485 [2024-07-13 08:21:00.208489] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:08.485 [2024-07-13 08:21:00.208502] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:08.485 [2024-07-13 08:21:00.208533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:08.485 qpair failed and we were unable to recover it. 
00:34:08.747 [2024-07-13 08:21:00.218283] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:08.747 [2024-07-13 08:21:00.218409] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:08.747 [2024-07-13 08:21:00.218436] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:08.747 [2024-07-13 08:21:00.218451] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:08.747 [2024-07-13 08:21:00.218466] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:08.747 [2024-07-13 08:21:00.218497] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:08.747 qpair failed and we were unable to recover it. 00:34:08.747 [2024-07-13 08:21:00.228424] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:08.747 [2024-07-13 08:21:00.228547] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:08.747 [2024-07-13 08:21:00.228573] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:08.747 [2024-07-13 08:21:00.228589] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:08.747 [2024-07-13 08:21:00.228604] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:08.747 [2024-07-13 08:21:00.228656] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:08.747 qpair failed and we were unable to recover it. 00:34:08.747 [2024-07-13 08:21:00.238435] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:08.747 [2024-07-13 08:21:00.238566] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:08.747 [2024-07-13 08:21:00.238594] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:08.747 [2024-07-13 08:21:00.238610] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:08.747 [2024-07-13 08:21:00.238624] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:08.747 [2024-07-13 08:21:00.238654] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:08.747 qpair failed and we were unable to recover it. 
00:34:08.747 [2024-07-13 08:21:00.248410] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:08.747 [2024-07-13 08:21:00.248541] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:08.747 [2024-07-13 08:21:00.248568] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:08.747 [2024-07-13 08:21:00.248583] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:08.747 [2024-07-13 08:21:00.248596] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:08.747 [2024-07-13 08:21:00.248628] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:08.747 qpair failed and we were unable to recover it. 00:34:08.747 [2024-07-13 08:21:00.258401] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:08.747 [2024-07-13 08:21:00.258555] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:08.747 [2024-07-13 08:21:00.258582] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:08.747 [2024-07-13 08:21:00.258597] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:08.747 [2024-07-13 08:21:00.258611] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:08.747 [2024-07-13 08:21:00.258641] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:08.747 qpair failed and we were unable to recover it. 00:34:08.747 [2024-07-13 08:21:00.268477] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:08.747 [2024-07-13 08:21:00.268644] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:08.747 [2024-07-13 08:21:00.268671] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:08.747 [2024-07-13 08:21:00.268686] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:08.747 [2024-07-13 08:21:00.268699] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:08.747 [2024-07-13 08:21:00.268728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:08.747 qpair failed and we were unable to recover it. 
00:34:08.747 [2024-07-13 08:21:00.278456] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:08.747 [2024-07-13 08:21:00.278583] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:08.747 [2024-07-13 08:21:00.278615] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:08.747 [2024-07-13 08:21:00.278632] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:08.747 [2024-07-13 08:21:00.278646] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:08.747 [2024-07-13 08:21:00.278677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:08.747 qpair failed and we were unable to recover it. 00:34:08.747 [2024-07-13 08:21:00.288551] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:08.747 [2024-07-13 08:21:00.288691] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:08.747 [2024-07-13 08:21:00.288718] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:08.747 [2024-07-13 08:21:00.288733] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:08.747 [2024-07-13 08:21:00.288747] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:08.747 [2024-07-13 08:21:00.288778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:08.747 qpair failed and we were unable to recover it. 00:34:08.747 [2024-07-13 08:21:00.298515] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:08.747 [2024-07-13 08:21:00.298687] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:08.747 [2024-07-13 08:21:00.298714] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:08.747 [2024-07-13 08:21:00.298729] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:08.747 [2024-07-13 08:21:00.298744] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:08.747 [2024-07-13 08:21:00.298774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:08.748 qpair failed and we were unable to recover it. 
00:34:08.748 [2024-07-13 08:21:00.308544] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:08.748 [2024-07-13 08:21:00.308671] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:08.748 [2024-07-13 08:21:00.308697] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:08.748 [2024-07-13 08:21:00.308713] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:08.748 [2024-07-13 08:21:00.308728] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:08.748 [2024-07-13 08:21:00.308759] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:08.748 qpair failed and we were unable to recover it. 00:34:08.748 [2024-07-13 08:21:00.318580] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:08.748 [2024-07-13 08:21:00.318710] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:08.748 [2024-07-13 08:21:00.318737] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:08.748 [2024-07-13 08:21:00.318753] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:08.748 [2024-07-13 08:21:00.318773] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:08.748 [2024-07-13 08:21:00.318817] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:08.748 qpair failed and we were unable to recover it. 00:34:08.748 [2024-07-13 08:21:00.328600] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:08.748 [2024-07-13 08:21:00.328731] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:08.748 [2024-07-13 08:21:00.328758] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:08.748 [2024-07-13 08:21:00.328774] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:08.748 [2024-07-13 08:21:00.328787] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:08.748 [2024-07-13 08:21:00.328818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:08.748 qpair failed and we were unable to recover it. 
00:34:08.748 [2024-07-13 08:21:00.338631] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:08.748 [2024-07-13 08:21:00.338753] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:08.748 [2024-07-13 08:21:00.338779] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:08.748 [2024-07-13 08:21:00.338795] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:08.748 [2024-07-13 08:21:00.338808] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:08.748 [2024-07-13 08:21:00.338837] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:08.748 qpair failed and we were unable to recover it. 00:34:08.748 [2024-07-13 08:21:00.348651] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:08.748 [2024-07-13 08:21:00.348788] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:08.748 [2024-07-13 08:21:00.348814] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:08.748 [2024-07-13 08:21:00.348830] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:08.748 [2024-07-13 08:21:00.348843] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:08.748 [2024-07-13 08:21:00.348881] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:08.748 qpair failed and we were unable to recover it. 00:34:08.748 [2024-07-13 08:21:00.358692] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:08.748 [2024-07-13 08:21:00.358820] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:08.748 [2024-07-13 08:21:00.358847] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:08.748 [2024-07-13 08:21:00.358863] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:08.748 [2024-07-13 08:21:00.358886] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:08.748 [2024-07-13 08:21:00.358929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:08.748 qpair failed and we were unable to recover it. 
00:34:08.748 [2024-07-13 08:21:00.368694] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:08.748 [2024-07-13 08:21:00.368827] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:08.748 [2024-07-13 08:21:00.368853] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:08.748 [2024-07-13 08:21:00.368877] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:08.748 [2024-07-13 08:21:00.368894] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:08.748 [2024-07-13 08:21:00.368924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:08.748 qpair failed and we were unable to recover it. 00:34:08.748 [2024-07-13 08:21:00.378830] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:08.748 [2024-07-13 08:21:00.378958] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:08.748 [2024-07-13 08:21:00.378985] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:08.748 [2024-07-13 08:21:00.379000] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:08.748 [2024-07-13 08:21:00.379015] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:08.748 [2024-07-13 08:21:00.379046] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:08.748 qpair failed and we were unable to recover it. 00:34:08.748 [2024-07-13 08:21:00.388798] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:08.748 [2024-07-13 08:21:00.388929] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:08.748 [2024-07-13 08:21:00.388956] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:08.748 [2024-07-13 08:21:00.388971] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:08.748 [2024-07-13 08:21:00.388986] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:08.748 [2024-07-13 08:21:00.389029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:08.748 qpair failed and we were unable to recover it. 
00:34:08.748 [2024-07-13 08:21:00.398833] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:08.748 [2024-07-13 08:21:00.399025] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:08.748 [2024-07-13 08:21:00.399051] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:08.748 [2024-07-13 08:21:00.399067] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:08.748 [2024-07-13 08:21:00.399080] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:08.748 [2024-07-13 08:21:00.399111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:08.748 qpair failed and we were unable to recover it. 00:34:08.748 [2024-07-13 08:21:00.408851] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:08.748 [2024-07-13 08:21:00.409020] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:08.749 [2024-07-13 08:21:00.409046] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:08.749 [2024-07-13 08:21:00.409062] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:08.749 [2024-07-13 08:21:00.409081] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:08.749 [2024-07-13 08:21:00.409111] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:08.749 qpair failed and we were unable to recover it. 00:34:08.749 [2024-07-13 08:21:00.418884] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:08.749 [2024-07-13 08:21:00.419009] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:08.749 [2024-07-13 08:21:00.419036] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:08.749 [2024-07-13 08:21:00.419051] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:08.749 [2024-07-13 08:21:00.419064] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:08.749 [2024-07-13 08:21:00.419094] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:08.749 qpair failed and we were unable to recover it. 
00:34:08.749 [2024-07-13 08:21:00.428882] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:08.749 [2024-07-13 08:21:00.429049] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:08.749 [2024-07-13 08:21:00.429076] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:08.749 [2024-07-13 08:21:00.429092] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:08.749 [2024-07-13 08:21:00.429105] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:08.749 [2024-07-13 08:21:00.429135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:08.749 qpair failed and we were unable to recover it. 00:34:08.749 [2024-07-13 08:21:00.438956] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:08.749 [2024-07-13 08:21:00.439084] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:08.749 [2024-07-13 08:21:00.439111] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:08.749 [2024-07-13 08:21:00.439126] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:08.749 [2024-07-13 08:21:00.439139] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:08.749 [2024-07-13 08:21:00.439169] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:08.749 qpair failed and we were unable to recover it. 00:34:08.749 [2024-07-13 08:21:00.448954] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:08.749 [2024-07-13 08:21:00.449128] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:08.749 [2024-07-13 08:21:00.449155] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:08.749 [2024-07-13 08:21:00.449170] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:08.749 [2024-07-13 08:21:00.449183] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:08.749 [2024-07-13 08:21:00.449212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:08.749 qpair failed and we were unable to recover it. 
00:34:08.749 [2024-07-13 08:21:00.458992] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:08.749 [2024-07-13 08:21:00.459118] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:08.749 [2024-07-13 08:21:00.459145] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:08.749 [2024-07-13 08:21:00.459161] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:08.749 [2024-07-13 08:21:00.459174] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:08.749 [2024-07-13 08:21:00.459204] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:08.749 qpair failed and we were unable to recover it. 00:34:08.749 [2024-07-13 08:21:00.469038] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:08.749 [2024-07-13 08:21:00.469177] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:08.749 [2024-07-13 08:21:00.469204] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:08.749 [2024-07-13 08:21:00.469220] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:08.749 [2024-07-13 08:21:00.469234] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:08.749 [2024-07-13 08:21:00.469263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:08.749 qpair failed and we were unable to recover it. 00:34:09.009 [2024-07-13 08:21:00.479052] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:09.009 [2024-07-13 08:21:00.479223] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:09.009 [2024-07-13 08:21:00.479249] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:09.009 [2024-07-13 08:21:00.479264] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:09.009 [2024-07-13 08:21:00.479277] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:09.009 [2024-07-13 08:21:00.479307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:09.009 qpair failed and we were unable to recover it. 
00:34:09.009 [2024-07-13 08:21:00.489103] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:09.009 [2024-07-13 08:21:00.489267] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:09.009 [2024-07-13 08:21:00.489293] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:09.009 [2024-07-13 08:21:00.489309] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:09.009 [2024-07-13 08:21:00.489323] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:09.009 [2024-07-13 08:21:00.489352] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:09.009 qpair failed and we were unable to recover it. 00:34:09.009 [2024-07-13 08:21:00.499092] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:09.009 [2024-07-13 08:21:00.499213] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:09.009 [2024-07-13 08:21:00.499240] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:09.009 [2024-07-13 08:21:00.499261] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:09.009 [2024-07-13 08:21:00.499276] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:09.009 [2024-07-13 08:21:00.499307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:09.009 qpair failed and we were unable to recover it. 00:34:09.009 [2024-07-13 08:21:00.509146] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:09.009 [2024-07-13 08:21:00.509267] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:09.009 [2024-07-13 08:21:00.509295] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:09.009 [2024-07-13 08:21:00.509310] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:09.009 [2024-07-13 08:21:00.509323] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:09.009 [2024-07-13 08:21:00.509354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:09.009 qpair failed and we were unable to recover it. 
00:34:09.009 [2024-07-13 08:21:00.519200] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:09.009 [2024-07-13 08:21:00.519340] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:09.009 [2024-07-13 08:21:00.519366] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:09.009 [2024-07-13 08:21:00.519381] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:09.009 [2024-07-13 08:21:00.519394] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:09.009 [2024-07-13 08:21:00.519426] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:09.009 qpair failed and we were unable to recover it. 00:34:09.009 [2024-07-13 08:21:00.529196] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:09.009 [2024-07-13 08:21:00.529326] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:09.009 [2024-07-13 08:21:00.529353] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:09.009 [2024-07-13 08:21:00.529369] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:09.009 [2024-07-13 08:21:00.529382] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:09.010 [2024-07-13 08:21:00.529412] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:09.010 qpair failed and we were unable to recover it. 00:34:09.010 [2024-07-13 08:21:00.539243] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:09.010 [2024-07-13 08:21:00.539368] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:09.010 [2024-07-13 08:21:00.539395] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:09.010 [2024-07-13 08:21:00.539411] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:09.010 [2024-07-13 08:21:00.539425] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:09.010 [2024-07-13 08:21:00.539454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:09.010 qpair failed and we were unable to recover it. 
00:34:09.010 [2024-07-13 08:21:00.549332] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:09.010 [2024-07-13 08:21:00.549448] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:09.010 [2024-07-13 08:21:00.549475] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:09.010 [2024-07-13 08:21:00.549490] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:09.010 [2024-07-13 08:21:00.549504] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:09.010 [2024-07-13 08:21:00.549534] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:09.010 qpair failed and we were unable to recover it. 00:34:09.010 [2024-07-13 08:21:00.559310] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:09.010 [2024-07-13 08:21:00.559439] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:09.010 [2024-07-13 08:21:00.559465] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:09.010 [2024-07-13 08:21:00.559481] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:09.010 [2024-07-13 08:21:00.559494] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:09.010 [2024-07-13 08:21:00.559524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:09.010 qpair failed and we were unable to recover it. 00:34:09.010 [2024-07-13 08:21:00.569298] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:09.010 [2024-07-13 08:21:00.569419] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:09.010 [2024-07-13 08:21:00.569445] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:09.010 [2024-07-13 08:21:00.569460] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:09.010 [2024-07-13 08:21:00.569474] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:09.010 [2024-07-13 08:21:00.569506] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:09.010 qpair failed and we were unable to recover it. 
00:34:09.010 [2024-07-13 08:21:00.579367] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:09.010 [2024-07-13 08:21:00.579494] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:09.010 [2024-07-13 08:21:00.579519] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:09.010 [2024-07-13 08:21:00.579533] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:09.010 [2024-07-13 08:21:00.579546] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:09.010 [2024-07-13 08:21:00.579575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:09.010 qpair failed and we were unable to recover it. 00:34:09.010 [2024-07-13 08:21:00.589399] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:09.010 [2024-07-13 08:21:00.589538] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:09.010 [2024-07-13 08:21:00.589573] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:09.010 [2024-07-13 08:21:00.589590] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:09.010 [2024-07-13 08:21:00.589604] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:09.010 [2024-07-13 08:21:00.589635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:09.010 qpair failed and we were unable to recover it. 00:34:09.010 [2024-07-13 08:21:00.599401] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:09.010 [2024-07-13 08:21:00.599538] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:09.010 [2024-07-13 08:21:00.599565] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:09.010 [2024-07-13 08:21:00.599580] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:09.010 [2024-07-13 08:21:00.599594] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:09.010 [2024-07-13 08:21:00.599624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:09.010 qpair failed and we were unable to recover it. 
00:34:09.010 [2024-07-13 08:21:00.609457] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:09.010 [2024-07-13 08:21:00.609578] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:09.010 [2024-07-13 08:21:00.609605] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:09.010 [2024-07-13 08:21:00.609620] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:09.010 [2024-07-13 08:21:00.609634] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:09.010 [2024-07-13 08:21:00.609664] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:09.010 qpair failed and we were unable to recover it. 00:34:09.010 [2024-07-13 08:21:00.619465] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:09.010 [2024-07-13 08:21:00.619596] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:09.010 [2024-07-13 08:21:00.619624] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:09.010 [2024-07-13 08:21:00.619639] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:09.010 [2024-07-13 08:21:00.619653] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:09.010 [2024-07-13 08:21:00.619695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:09.010 qpair failed and we were unable to recover it. 00:34:09.010 [2024-07-13 08:21:00.629495] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:09.010 [2024-07-13 08:21:00.629613] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:09.010 [2024-07-13 08:21:00.629643] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:09.010 [2024-07-13 08:21:00.629659] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:09.010 [2024-07-13 08:21:00.629673] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:09.010 [2024-07-13 08:21:00.629721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:09.010 qpair failed and we were unable to recover it. 
00:34:09.010 [2024-07-13 08:21:00.639531] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:09.010 [2024-07-13 08:21:00.639665] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:09.010 [2024-07-13 08:21:00.639693] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:09.010 [2024-07-13 08:21:00.639708] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:09.010 [2024-07-13 08:21:00.639722] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:09.010 [2024-07-13 08:21:00.639752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:09.010 qpair failed and we were unable to recover it. 00:34:09.010 [2024-07-13 08:21:00.649552] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:09.010 [2024-07-13 08:21:00.649680] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:09.011 [2024-07-13 08:21:00.649708] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:09.011 [2024-07-13 08:21:00.649723] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:09.011 [2024-07-13 08:21:00.649736] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:09.011 [2024-07-13 08:21:00.649768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:09.011 qpair failed and we were unable to recover it. 00:34:09.011 [2024-07-13 08:21:00.659614] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:09.011 [2024-07-13 08:21:00.659746] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:09.011 [2024-07-13 08:21:00.659773] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:09.011 [2024-07-13 08:21:00.659788] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:09.011 [2024-07-13 08:21:00.659803] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:09.011 [2024-07-13 08:21:00.659849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:09.011 qpair failed and we were unable to recover it. 
00:34:09.011 [2024-07-13 08:21:00.669603] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:09.011 [2024-07-13 08:21:00.669759] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:09.011 [2024-07-13 08:21:00.669788] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:09.011 [2024-07-13 08:21:00.669805] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:09.011 [2024-07-13 08:21:00.669820] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:09.011 [2024-07-13 08:21:00.669850] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:09.011 qpair failed and we were unable to recover it. 00:34:09.011 [2024-07-13 08:21:00.679683] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:09.011 [2024-07-13 08:21:00.679834] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:09.011 [2024-07-13 08:21:00.679877] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:09.011 [2024-07-13 08:21:00.679897] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:09.011 [2024-07-13 08:21:00.679915] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:09.011 [2024-07-13 08:21:00.679946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:09.011 qpair failed and we were unable to recover it. 00:34:09.011 [2024-07-13 08:21:00.689662] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:09.011 [2024-07-13 08:21:00.689792] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:09.011 [2024-07-13 08:21:00.689818] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:09.011 [2024-07-13 08:21:00.689834] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:09.011 [2024-07-13 08:21:00.689848] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:09.011 [2024-07-13 08:21:00.689887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:09.011 qpair failed and we were unable to recover it. 
00:34:09.011 [2024-07-13 08:21:00.699687] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:09.011 [2024-07-13 08:21:00.699803] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:09.011 [2024-07-13 08:21:00.699830] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:09.011 [2024-07-13 08:21:00.699846] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:09.011 [2024-07-13 08:21:00.699860] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:09.011 [2024-07-13 08:21:00.699912] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:09.011 qpair failed and we were unable to recover it. 00:34:09.011 [2024-07-13 08:21:00.709712] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:09.011 [2024-07-13 08:21:00.709839] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:09.011 [2024-07-13 08:21:00.709875] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:09.011 [2024-07-13 08:21:00.709894] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:09.011 [2024-07-13 08:21:00.709907] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:09.011 [2024-07-13 08:21:00.709938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:09.011 qpair failed and we were unable to recover it. 00:34:09.011 [2024-07-13 08:21:00.719745] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:09.011 [2024-07-13 08:21:00.719902] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:09.011 [2024-07-13 08:21:00.719929] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:09.011 [2024-07-13 08:21:00.719944] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:09.011 [2024-07-13 08:21:00.719957] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:09.011 [2024-07-13 08:21:00.719993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:09.011 qpair failed and we were unable to recover it. 
00:34:09.011 [2024-07-13 08:21:00.729857] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:09.011 [2024-07-13 08:21:00.729991] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:09.011 [2024-07-13 08:21:00.730017] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:09.011 [2024-07-13 08:21:00.730032] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:09.011 [2024-07-13 08:21:00.730046] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:09.011 [2024-07-13 08:21:00.730076] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:09.011 qpair failed and we were unable to recover it. 00:34:09.011 [2024-07-13 08:21:00.739802] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:09.011 [2024-07-13 08:21:00.739932] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:09.011 [2024-07-13 08:21:00.739957] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:09.011 [2024-07-13 08:21:00.739972] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:09.011 [2024-07-13 08:21:00.739986] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:09.011 [2024-07-13 08:21:00.740016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:09.011 qpair failed and we were unable to recover it. 00:34:09.271 [2024-07-13 08:21:00.749833] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:09.271 [2024-07-13 08:21:00.749983] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:09.271 [2024-07-13 08:21:00.750010] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:09.271 [2024-07-13 08:21:00.750025] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:09.271 [2024-07-13 08:21:00.750040] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:09.271 [2024-07-13 08:21:00.750069] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:09.271 qpair failed and we were unable to recover it. 
00:34:09.271 [2024-07-13 08:21:00.759876] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:09.271 [2024-07-13 08:21:00.760009] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:09.271 [2024-07-13 08:21:00.760035] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:09.271 [2024-07-13 08:21:00.760051] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:09.271 [2024-07-13 08:21:00.760065] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:09.271 [2024-07-13 08:21:00.760095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:09.271 qpair failed and we were unable to recover it. 00:34:09.271 [2024-07-13 08:21:00.769906] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:09.271 [2024-07-13 08:21:00.770043] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:09.271 [2024-07-13 08:21:00.770070] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:09.271 [2024-07-13 08:21:00.770085] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:09.271 [2024-07-13 08:21:00.770099] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:09.271 [2024-07-13 08:21:00.770141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:09.271 qpair failed and we were unable to recover it. 00:34:09.271 [2024-07-13 08:21:00.780002] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:09.271 [2024-07-13 08:21:00.780129] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:09.271 [2024-07-13 08:21:00.780167] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:09.271 [2024-07-13 08:21:00.780182] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:09.271 [2024-07-13 08:21:00.780196] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:09.271 [2024-07-13 08:21:00.780226] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:09.271 qpair failed and we were unable to recover it. 
00:34:09.271 [2024-07-13 08:21:00.789956] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:09.271 [2024-07-13 08:21:00.790086] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:09.271 [2024-07-13 08:21:00.790112] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:09.271 [2024-07-13 08:21:00.790128] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:09.271 [2024-07-13 08:21:00.790142] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:09.271 [2024-07-13 08:21:00.790179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:09.271 qpair failed and we were unable to recover it. 00:34:09.271 [2024-07-13 08:21:00.800003] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:09.271 [2024-07-13 08:21:00.800144] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:09.271 [2024-07-13 08:21:00.800170] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:09.271 [2024-07-13 08:21:00.800185] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:09.271 [2024-07-13 08:21:00.800199] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:09.271 [2024-07-13 08:21:00.800231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:09.271 qpair failed and we were unable to recover it. 00:34:09.271 [2024-07-13 08:21:00.810024] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:09.271 [2024-07-13 08:21:00.810160] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:09.271 [2024-07-13 08:21:00.810188] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:09.271 [2024-07-13 08:21:00.810203] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:09.271 [2024-07-13 08:21:00.810222] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:09.271 [2024-07-13 08:21:00.810280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:09.271 qpair failed and we were unable to recover it. 
00:34:09.271 [2024-07-13 08:21:00.820076] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:09.271 [2024-07-13 08:21:00.820215] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:09.271 [2024-07-13 08:21:00.820244] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:09.272 [2024-07-13 08:21:00.820261] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:09.272 [2024-07-13 08:21:00.820276] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:09.272 [2024-07-13 08:21:00.820307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:09.272 qpair failed and we were unable to recover it. 00:34:09.272 [2024-07-13 08:21:00.830102] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:09.272 [2024-07-13 08:21:00.830225] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:09.272 [2024-07-13 08:21:00.830252] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:09.272 [2024-07-13 08:21:00.830268] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:09.272 [2024-07-13 08:21:00.830282] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:09.272 [2024-07-13 08:21:00.830312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:09.272 qpair failed and we were unable to recover it. 00:34:09.272 [2024-07-13 08:21:00.840140] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:09.272 [2024-07-13 08:21:00.840269] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:09.272 [2024-07-13 08:21:00.840296] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:09.272 [2024-07-13 08:21:00.840311] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:09.272 [2024-07-13 08:21:00.840325] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:09.272 [2024-07-13 08:21:00.840369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:09.272 qpair failed and we were unable to recover it. 
00:34:09.272 [2024-07-13 08:21:00.850131] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:09.272 [2024-07-13 08:21:00.850306] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:09.272 [2024-07-13 08:21:00.850332] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:09.272 [2024-07-13 08:21:00.850347] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:09.272 [2024-07-13 08:21:00.850361] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90
00:34:09.272 [2024-07-13 08:21:00.850391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:09.272 qpair failed and we were unable to recover it.
[The identical seven-line CONNECT failure sequence repeats 68 more times at roughly 10 ms intervals, from 2024-07-13 08:21:00.860 through 08:21:01.532 (elapsed 00:34:09.272–00:34:10.058), always against tqpair=0x7f8fe4000b90 on qpair id 2 and always ending "qpair failed and we were unable to recover it."]
00:34:10.058 [2024-07-13 08:21:01.542132] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:10.058 [2024-07-13 08:21:01.542311] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:10.058 [2024-07-13 08:21:01.542338] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:10.058 [2024-07-13 08:21:01.542353] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:10.058 [2024-07-13 08:21:01.542366] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:10.058 [2024-07-13 08:21:01.542395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:10.058 qpair failed and we were unable to recover it. 00:34:10.058 [2024-07-13 08:21:01.552229] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:10.058 [2024-07-13 08:21:01.552356] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:10.058 [2024-07-13 08:21:01.552382] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:10.058 [2024-07-13 08:21:01.552397] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:10.058 [2024-07-13 08:21:01.552411] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:10.058 [2024-07-13 08:21:01.552440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:10.058 qpair failed and we were unable to recover it. 00:34:10.058 [2024-07-13 08:21:01.562305] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:10.058 [2024-07-13 08:21:01.562490] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:10.058 [2024-07-13 08:21:01.562516] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:10.058 [2024-07-13 08:21:01.562532] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:10.058 [2024-07-13 08:21:01.562546] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:10.058 [2024-07-13 08:21:01.562576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:10.058 qpair failed and we were unable to recover it. 
00:34:10.058 [2024-07-13 08:21:01.572213] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:10.058 [2024-07-13 08:21:01.572356] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:10.058 [2024-07-13 08:21:01.572390] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:10.058 [2024-07-13 08:21:01.572409] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:10.058 [2024-07-13 08:21:01.572423] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:10.058 [2024-07-13 08:21:01.572454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:10.058 qpair failed and we were unable to recover it. 00:34:10.058 [2024-07-13 08:21:01.582277] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:10.058 [2024-07-13 08:21:01.582441] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:10.058 [2024-07-13 08:21:01.582466] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:10.058 [2024-07-13 08:21:01.582481] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:10.058 [2024-07-13 08:21:01.582494] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:10.058 [2024-07-13 08:21:01.582524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:10.058 qpair failed and we were unable to recover it. 00:34:10.058 [2024-07-13 08:21:01.592332] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:10.058 [2024-07-13 08:21:01.592521] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:10.058 [2024-07-13 08:21:01.592549] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:10.058 [2024-07-13 08:21:01.592564] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:10.058 [2024-07-13 08:21:01.592578] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:10.059 [2024-07-13 08:21:01.592609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:10.059 qpair failed and we were unable to recover it. 
00:34:10.059 [2024-07-13 08:21:01.602327] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:10.059 [2024-07-13 08:21:01.602462] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:10.059 [2024-07-13 08:21:01.602488] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:10.059 [2024-07-13 08:21:01.602503] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:10.059 [2024-07-13 08:21:01.602518] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:10.059 [2024-07-13 08:21:01.602549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:10.059 qpair failed and we were unable to recover it. 00:34:10.059 [2024-07-13 08:21:01.612365] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:10.059 [2024-07-13 08:21:01.612496] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:10.059 [2024-07-13 08:21:01.612522] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:10.059 [2024-07-13 08:21:01.612537] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:10.059 [2024-07-13 08:21:01.612557] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:10.059 [2024-07-13 08:21:01.612589] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:10.059 qpair failed and we were unable to recover it. 00:34:10.059 [2024-07-13 08:21:01.622375] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:10.059 [2024-07-13 08:21:01.622505] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:10.059 [2024-07-13 08:21:01.622531] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:10.059 [2024-07-13 08:21:01.622545] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:10.059 [2024-07-13 08:21:01.622559] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:10.059 [2024-07-13 08:21:01.622589] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:10.059 qpair failed and we were unable to recover it. 
00:34:10.059 [2024-07-13 08:21:01.632377] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:10.059 [2024-07-13 08:21:01.632507] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:10.059 [2024-07-13 08:21:01.632533] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:10.059 [2024-07-13 08:21:01.632549] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:10.059 [2024-07-13 08:21:01.632563] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:10.059 [2024-07-13 08:21:01.632593] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:10.059 qpair failed and we were unable to recover it. 00:34:10.059 [2024-07-13 08:21:01.642432] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:10.059 [2024-07-13 08:21:01.642590] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:10.059 [2024-07-13 08:21:01.642617] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:10.059 [2024-07-13 08:21:01.642632] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:10.059 [2024-07-13 08:21:01.642647] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:10.059 [2024-07-13 08:21:01.642677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:10.059 qpair failed and we were unable to recover it. 00:34:10.059 [2024-07-13 08:21:01.652465] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:10.059 [2024-07-13 08:21:01.652600] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:10.059 [2024-07-13 08:21:01.652627] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:10.059 [2024-07-13 08:21:01.652643] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:10.059 [2024-07-13 08:21:01.652657] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:10.059 [2024-07-13 08:21:01.652687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:10.059 qpair failed and we were unable to recover it. 
00:34:10.059 [2024-07-13 08:21:01.662546] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:10.059 [2024-07-13 08:21:01.662676] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:10.059 [2024-07-13 08:21:01.662703] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:10.059 [2024-07-13 08:21:01.662718] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:10.059 [2024-07-13 08:21:01.662732] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:10.059 [2024-07-13 08:21:01.662764] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:10.059 qpair failed and we were unable to recover it. 00:34:10.059 [2024-07-13 08:21:01.672489] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:10.059 [2024-07-13 08:21:01.672661] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:10.059 [2024-07-13 08:21:01.672686] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:10.059 [2024-07-13 08:21:01.672702] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:10.059 [2024-07-13 08:21:01.672716] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:10.059 [2024-07-13 08:21:01.672746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:10.059 qpair failed and we were unable to recover it. 00:34:10.059 [2024-07-13 08:21:01.682502] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:10.059 [2024-07-13 08:21:01.682634] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:10.059 [2024-07-13 08:21:01.682660] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:10.059 [2024-07-13 08:21:01.682675] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:10.059 [2024-07-13 08:21:01.682688] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:10.059 [2024-07-13 08:21:01.682720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:10.059 qpair failed and we were unable to recover it. 
00:34:10.059 [2024-07-13 08:21:01.692537] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:10.059 [2024-07-13 08:21:01.692718] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:10.059 [2024-07-13 08:21:01.692745] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:10.059 [2024-07-13 08:21:01.692760] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:10.059 [2024-07-13 08:21:01.692774] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:10.059 [2024-07-13 08:21:01.692803] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:10.059 qpair failed and we were unable to recover it. 00:34:10.059 [2024-07-13 08:21:01.702597] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:10.059 [2024-07-13 08:21:01.702733] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:10.059 [2024-07-13 08:21:01.702759] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:10.059 [2024-07-13 08:21:01.702773] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:10.059 [2024-07-13 08:21:01.702792] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:10.059 [2024-07-13 08:21:01.702823] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:10.059 qpair failed and we were unable to recover it. 00:34:10.059 [2024-07-13 08:21:01.712598] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:10.059 [2024-07-13 08:21:01.712727] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:10.059 [2024-07-13 08:21:01.712753] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:10.059 [2024-07-13 08:21:01.712769] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:10.059 [2024-07-13 08:21:01.712782] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:10.059 [2024-07-13 08:21:01.712812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:10.059 qpair failed and we were unable to recover it. 
00:34:10.059 [2024-07-13 08:21:01.722640] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:10.059 [2024-07-13 08:21:01.722831] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:10.059 [2024-07-13 08:21:01.722872] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:10.059 [2024-07-13 08:21:01.722890] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:10.059 [2024-07-13 08:21:01.722905] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:10.059 [2024-07-13 08:21:01.722936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:10.059 qpair failed and we were unable to recover it. 00:34:10.059 [2024-07-13 08:21:01.732654] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:10.059 [2024-07-13 08:21:01.732788] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:10.059 [2024-07-13 08:21:01.732814] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:10.059 [2024-07-13 08:21:01.732829] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:10.059 [2024-07-13 08:21:01.732843] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:10.059 [2024-07-13 08:21:01.732891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:10.059 qpair failed and we were unable to recover it. 00:34:10.059 [2024-07-13 08:21:01.742671] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:10.060 [2024-07-13 08:21:01.742794] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:10.060 [2024-07-13 08:21:01.742820] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:10.060 [2024-07-13 08:21:01.742835] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:10.060 [2024-07-13 08:21:01.742860] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:10.060 [2024-07-13 08:21:01.742897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:10.060 qpair failed and we were unable to recover it. 
00:34:10.060 [2024-07-13 08:21:01.752701] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:10.060 [2024-07-13 08:21:01.752825] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:10.060 [2024-07-13 08:21:01.752851] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:10.060 [2024-07-13 08:21:01.752873] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:10.060 [2024-07-13 08:21:01.752889] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:10.060 [2024-07-13 08:21:01.752920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:10.060 qpair failed and we were unable to recover it. 00:34:10.060 [2024-07-13 08:21:01.762733] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:10.060 [2024-07-13 08:21:01.762860] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:10.060 [2024-07-13 08:21:01.762902] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:10.060 [2024-07-13 08:21:01.762917] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:10.060 [2024-07-13 08:21:01.762931] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:10.060 [2024-07-13 08:21:01.762973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:10.060 qpair failed and we were unable to recover it. 00:34:10.060 [2024-07-13 08:21:01.772760] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:10.060 [2024-07-13 08:21:01.772888] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:10.060 [2024-07-13 08:21:01.772915] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:10.060 [2024-07-13 08:21:01.772930] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:10.060 [2024-07-13 08:21:01.772943] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:10.060 [2024-07-13 08:21:01.772974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:10.060 qpair failed and we were unable to recover it. 
00:34:10.060 [2024-07-13 08:21:01.782756] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:10.060 [2024-07-13 08:21:01.782887] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:10.060 [2024-07-13 08:21:01.782913] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:10.060 [2024-07-13 08:21:01.782928] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:10.060 [2024-07-13 08:21:01.782941] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:10.060 [2024-07-13 08:21:01.782971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:10.060 qpair failed and we were unable to recover it. 00:34:10.321 [2024-07-13 08:21:01.792806] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:10.321 [2024-07-13 08:21:01.792942] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:10.321 [2024-07-13 08:21:01.792969] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:10.321 [2024-07-13 08:21:01.792991] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:10.321 [2024-07-13 08:21:01.793007] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:10.321 [2024-07-13 08:21:01.793051] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:10.321 qpair failed and we were unable to recover it. 00:34:10.321 [2024-07-13 08:21:01.802843] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:10.321 [2024-07-13 08:21:01.802982] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:10.321 [2024-07-13 08:21:01.803009] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:10.321 [2024-07-13 08:21:01.803025] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:10.321 [2024-07-13 08:21:01.803039] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:10.321 [2024-07-13 08:21:01.803069] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:10.321 qpair failed and we were unable to recover it. 
00:34:10.321 [2024-07-13 08:21:01.812850] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:10.321 [2024-07-13 08:21:01.812979] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:10.321 [2024-07-13 08:21:01.813006] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:10.321 [2024-07-13 08:21:01.813021] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:10.321 [2024-07-13 08:21:01.813035] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:10.321 [2024-07-13 08:21:01.813065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:10.321 qpair failed and we were unable to recover it. 00:34:10.321 [2024-07-13 08:21:01.822915] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:10.321 [2024-07-13 08:21:01.823064] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:10.321 [2024-07-13 08:21:01.823090] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:10.321 [2024-07-13 08:21:01.823105] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:10.321 [2024-07-13 08:21:01.823119] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:10.321 [2024-07-13 08:21:01.823153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:10.321 qpair failed and we were unable to recover it. 00:34:10.321 [2024-07-13 08:21:01.832913] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:10.321 [2024-07-13 08:21:01.833052] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:10.321 [2024-07-13 08:21:01.833078] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:10.321 [2024-07-13 08:21:01.833094] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:10.321 [2024-07-13 08:21:01.833107] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:10.321 [2024-07-13 08:21:01.833137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:10.321 qpair failed and we were unable to recover it. 
00:34:10.321 [2024-07-13 08:21:01.842953] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:10.321 [2024-07-13 08:21:01.843085] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:10.321 [2024-07-13 08:21:01.843113] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:10.321 [2024-07-13 08:21:01.843128] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:10.321 [2024-07-13 08:21:01.843142] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:10.321 [2024-07-13 08:21:01.843171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:10.321 qpair failed and we were unable to recover it. 00:34:10.321 [2024-07-13 08:21:01.853065] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:10.322 [2024-07-13 08:21:01.853229] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:10.322 [2024-07-13 08:21:01.853257] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:10.322 [2024-07-13 08:21:01.853272] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:10.322 [2024-07-13 08:21:01.853300] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:10.322 [2024-07-13 08:21:01.853331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:10.322 qpair failed and we were unable to recover it. 00:34:10.322 [2024-07-13 08:21:01.863049] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:10.322 [2024-07-13 08:21:01.863176] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:10.322 [2024-07-13 08:21:01.863212] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:10.322 [2024-07-13 08:21:01.863227] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:10.322 [2024-07-13 08:21:01.863256] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:10.322 [2024-07-13 08:21:01.863286] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:10.322 qpair failed and we were unable to recover it. 
00:34:10.322 [2024-07-13 08:21:01.873043] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:10.322 [2024-07-13 08:21:01.873164] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:10.322 [2024-07-13 08:21:01.873192] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:10.322 [2024-07-13 08:21:01.873207] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:10.322 [2024-07-13 08:21:01.873220] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:10.322 [2024-07-13 08:21:01.873261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:10.322 qpair failed and we were unable to recover it. 00:34:10.322 [2024-07-13 08:21:01.883106] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:10.322 [2024-07-13 08:21:01.883257] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:10.322 [2024-07-13 08:21:01.883289] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:10.322 [2024-07-13 08:21:01.883306] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:10.322 [2024-07-13 08:21:01.883319] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:10.322 [2024-07-13 08:21:01.883349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:10.322 qpair failed and we were unable to recover it. 00:34:10.322 [2024-07-13 08:21:01.893104] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:10.322 [2024-07-13 08:21:01.893241] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:10.322 [2024-07-13 08:21:01.893267] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:10.322 [2024-07-13 08:21:01.893283] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:10.322 [2024-07-13 08:21:01.893296] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:10.322 [2024-07-13 08:21:01.893326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:10.322 qpair failed and we were unable to recover it. 
00:34:10.322 [2024-07-13 08:21:01.903150] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:10.322 [2024-07-13 08:21:01.903310] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:10.322 [2024-07-13 08:21:01.903337] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:10.322 [2024-07-13 08:21:01.903353] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:10.322 [2024-07-13 08:21:01.903366] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:10.322 [2024-07-13 08:21:01.903395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:10.322 qpair failed and we were unable to recover it. 00:34:10.322 [2024-07-13 08:21:01.913226] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:10.322 [2024-07-13 08:21:01.913348] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:10.322 [2024-07-13 08:21:01.913374] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:10.322 [2024-07-13 08:21:01.913390] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:10.322 [2024-07-13 08:21:01.913404] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:10.322 [2024-07-13 08:21:01.913434] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:10.322 qpair failed and we were unable to recover it. 00:34:10.322 [2024-07-13 08:21:01.923219] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:10.322 [2024-07-13 08:21:01.923392] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:10.322 [2024-07-13 08:21:01.923418] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:10.322 [2024-07-13 08:21:01.923433] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:10.322 [2024-07-13 08:21:01.923446] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:10.322 [2024-07-13 08:21:01.923483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:10.322 qpair failed and we were unable to recover it. 
00:34:10.322 [2024-07-13 08:21:01.933291] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:10.322 [2024-07-13 08:21:01.933417] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:10.322 [2024-07-13 08:21:01.933443] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:10.322 [2024-07-13 08:21:01.933458] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:10.322 [2024-07-13 08:21:01.933473] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:10.322 [2024-07-13 08:21:01.933503] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:10.322 qpair failed and we were unable to recover it. 00:34:10.322 [2024-07-13 08:21:01.943218] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:10.322 [2024-07-13 08:21:01.943337] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:10.322 [2024-07-13 08:21:01.943363] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:10.322 [2024-07-13 08:21:01.943378] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:10.322 [2024-07-13 08:21:01.943392] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:10.322 [2024-07-13 08:21:01.943422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:10.322 qpair failed and we were unable to recover it. 00:34:10.322 [2024-07-13 08:21:01.953241] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:10.322 [2024-07-13 08:21:01.953364] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:10.322 [2024-07-13 08:21:01.953390] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:10.322 [2024-07-13 08:21:01.953405] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:10.322 [2024-07-13 08:21:01.953419] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:10.322 [2024-07-13 08:21:01.953449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:10.322 qpair failed and we were unable to recover it. 
00:34:10.322 [2024-07-13 08:21:01.963345] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:10.322 [2024-07-13 08:21:01.963495] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:10.322 [2024-07-13 08:21:01.963521] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:10.322 [2024-07-13 08:21:01.963536] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:10.322 [2024-07-13 08:21:01.963550] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:10.322 [2024-07-13 08:21:01.963580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:10.322 qpair failed and we were unable to recover it. 00:34:10.322 [2024-07-13 08:21:01.973404] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:10.322 [2024-07-13 08:21:01.973529] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:10.322 [2024-07-13 08:21:01.973561] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:10.322 [2024-07-13 08:21:01.973577] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:10.322 [2024-07-13 08:21:01.973591] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:10.322 [2024-07-13 08:21:01.973621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:10.322 qpair failed and we were unable to recover it. 00:34:10.322 [2024-07-13 08:21:01.983361] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:10.322 [2024-07-13 08:21:01.983492] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:10.322 [2024-07-13 08:21:01.983519] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:10.322 [2024-07-13 08:21:01.983535] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:10.322 [2024-07-13 08:21:01.983549] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:10.322 [2024-07-13 08:21:01.983579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:10.322 qpair failed and we were unable to recover it. 
00:34:10.322 [2024-07-13 08:21:01.993378] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:10.322 [2024-07-13 08:21:01.993505] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:10.322 [2024-07-13 08:21:01.993532] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:10.323 [2024-07-13 08:21:01.993548] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:10.323 [2024-07-13 08:21:01.993562] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:10.323 [2024-07-13 08:21:01.993591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:10.323 qpair failed and we were unable to recover it. 00:34:10.323 [2024-07-13 08:21:02.003452] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:10.323 [2024-07-13 08:21:02.003582] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:10.323 [2024-07-13 08:21:02.003609] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:10.323 [2024-07-13 08:21:02.003625] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:10.323 [2024-07-13 08:21:02.003639] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:10.323 [2024-07-13 08:21:02.003668] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:10.323 qpair failed and we were unable to recover it. 00:34:10.323 [2024-07-13 08:21:02.013472] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:10.323 [2024-07-13 08:21:02.013600] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:10.323 [2024-07-13 08:21:02.013627] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:10.323 [2024-07-13 08:21:02.013642] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:10.323 [2024-07-13 08:21:02.013661] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:10.323 [2024-07-13 08:21:02.013705] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:10.323 qpair failed and we were unable to recover it. 
00:34:10.323 [2024-07-13 08:21:02.023450] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:10.323 [2024-07-13 08:21:02.023572] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:10.323 [2024-07-13 08:21:02.023598] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:10.323 [2024-07-13 08:21:02.023614] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:10.323 [2024-07-13 08:21:02.023627] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:10.323 [2024-07-13 08:21:02.023657] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:10.323 qpair failed and we were unable to recover it. 00:34:10.323 [2024-07-13 08:21:02.033593] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:10.323 [2024-07-13 08:21:02.033735] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:10.323 [2024-07-13 08:21:02.033764] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:10.323 [2024-07-13 08:21:02.033783] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:10.323 [2024-07-13 08:21:02.033797] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:10.323 [2024-07-13 08:21:02.033829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:10.323 qpair failed and we were unable to recover it. 00:34:10.323 [2024-07-13 08:21:02.043520] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:10.323 [2024-07-13 08:21:02.043662] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:10.323 [2024-07-13 08:21:02.043690] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:10.323 [2024-07-13 08:21:02.043706] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:10.323 [2024-07-13 08:21:02.043720] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:10.323 [2024-07-13 08:21:02.043750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:10.323 qpair failed and we were unable to recover it. 
00:34:10.583 [2024-07-13 08:21:02.053548] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:10.583 [2024-07-13 08:21:02.053681] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:10.583 [2024-07-13 08:21:02.053708] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:10.583 [2024-07-13 08:21:02.053725] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:10.583 [2024-07-13 08:21:02.053739] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:10.583 [2024-07-13 08:21:02.053782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:10.583 qpair failed and we were unable to recover it. 00:34:10.583 [2024-07-13 08:21:02.063575] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:10.583 [2024-07-13 08:21:02.063727] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:10.583 [2024-07-13 08:21:02.063755] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:10.583 [2024-07-13 08:21:02.063770] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:10.583 [2024-07-13 08:21:02.063784] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:10.583 [2024-07-13 08:21:02.063814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:10.583 qpair failed and we were unable to recover it. 00:34:10.583 [2024-07-13 08:21:02.073592] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:10.583 [2024-07-13 08:21:02.073717] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:10.583 [2024-07-13 08:21:02.073744] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:10.583 [2024-07-13 08:21:02.073759] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:10.583 [2024-07-13 08:21:02.073773] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:10.583 [2024-07-13 08:21:02.073803] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:10.583 qpair failed and we were unable to recover it. 
00:34:10.583 [2024-07-13 08:21:02.083657] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:10.583 [2024-07-13 08:21:02.083814] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:10.583 [2024-07-13 08:21:02.083842] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:10.583 [2024-07-13 08:21:02.083857] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:10.583 [2024-07-13 08:21:02.083878] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:10.583 [2024-07-13 08:21:02.083910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:10.583 qpair failed and we were unable to recover it. 00:34:10.583 [2024-07-13 08:21:02.093682] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:10.583 [2024-07-13 08:21:02.093844] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:10.583 [2024-07-13 08:21:02.093878] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:10.583 [2024-07-13 08:21:02.093896] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:10.583 [2024-07-13 08:21:02.093910] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:10.583 [2024-07-13 08:21:02.093941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:10.583 qpair failed and we were unable to recover it. 00:34:10.583 [2024-07-13 08:21:02.103675] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:10.583 [2024-07-13 08:21:02.103797] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:10.583 [2024-07-13 08:21:02.103823] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:10.583 [2024-07-13 08:21:02.103839] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:10.583 [2024-07-13 08:21:02.103862] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:10.583 [2024-07-13 08:21:02.103901] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:10.583 qpair failed and we were unable to recover it. 
00:34:10.583 [2024-07-13 08:21:02.113702] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:10.583 [2024-07-13 08:21:02.113829] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:10.584 [2024-07-13 08:21:02.113856] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:10.584 [2024-07-13 08:21:02.113880] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:10.584 [2024-07-13 08:21:02.113896] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:10.584 [2024-07-13 08:21:02.113939] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:10.584 qpair failed and we were unable to recover it. 00:34:10.584 [2024-07-13 08:21:02.123733] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:10.584 [2024-07-13 08:21:02.123859] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:10.584 [2024-07-13 08:21:02.123893] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:10.584 [2024-07-13 08:21:02.123909] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:10.584 [2024-07-13 08:21:02.123923] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:10.584 [2024-07-13 08:21:02.123953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:10.584 qpair failed and we were unable to recover it. 00:34:10.584 [2024-07-13 08:21:02.133899] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:10.584 [2024-07-13 08:21:02.134036] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:10.584 [2024-07-13 08:21:02.134062] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:10.584 [2024-07-13 08:21:02.134077] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:10.584 [2024-07-13 08:21:02.134090] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:10.584 [2024-07-13 08:21:02.134120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:10.584 qpair failed and we were unable to recover it. 
00:34:10.584 [2024-07-13 08:21:02.143819] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:10.584 [2024-07-13 08:21:02.143951] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:10.584 [2024-07-13 08:21:02.143978] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:10.584 [2024-07-13 08:21:02.143993] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:10.584 [2024-07-13 08:21:02.144008] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:10.584 [2024-07-13 08:21:02.144038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:10.584 qpair failed and we were unable to recover it. 00:34:10.584 [2024-07-13 08:21:02.153909] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:10.584 [2024-07-13 08:21:02.154051] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:10.584 [2024-07-13 08:21:02.154077] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:10.584 [2024-07-13 08:21:02.154092] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:10.584 [2024-07-13 08:21:02.154105] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:10.584 [2024-07-13 08:21:02.154136] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:10.584 qpair failed and we were unable to recover it. 00:34:10.584 [2024-07-13 08:21:02.163891] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:10.584 [2024-07-13 08:21:02.164019] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:10.584 [2024-07-13 08:21:02.164046] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:10.584 [2024-07-13 08:21:02.164061] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:10.584 [2024-07-13 08:21:02.164074] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:10.584 [2024-07-13 08:21:02.164104] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:10.584 qpair failed and we were unable to recover it. 
00:34:10.584 [2024-07-13 08:21:02.174008] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:10.584 [2024-07-13 08:21:02.174136] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:10.584 [2024-07-13 08:21:02.174164] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:10.584 [2024-07-13 08:21:02.174182] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:10.584 [2024-07-13 08:21:02.174196] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:10.584 [2024-07-13 08:21:02.174226] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:10.584 qpair failed and we were unable to recover it. 00:34:10.584 [2024-07-13 08:21:02.183899] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:10.584 [2024-07-13 08:21:02.184021] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:10.584 [2024-07-13 08:21:02.184049] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:10.584 [2024-07-13 08:21:02.184064] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:10.584 [2024-07-13 08:21:02.184079] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:10.584 [2024-07-13 08:21:02.184110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:10.584 qpair failed and we were unable to recover it. 00:34:10.584 [2024-07-13 08:21:02.193936] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:10.584 [2024-07-13 08:21:02.194089] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:10.584 [2024-07-13 08:21:02.194117] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:10.584 [2024-07-13 08:21:02.194140] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:10.584 [2024-07-13 08:21:02.194154] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:10.584 [2024-07-13 08:21:02.194186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:10.584 qpair failed and we were unable to recover it. 
00:34:10.584 [2024-07-13 08:21:02.203993] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:10.584 [2024-07-13 08:21:02.204125] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:10.584 [2024-07-13 08:21:02.204151] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:10.584 [2024-07-13 08:21:02.204167] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:10.584 [2024-07-13 08:21:02.204180] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:10.584 [2024-07-13 08:21:02.204211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:10.584 qpair failed and we were unable to recover it. 00:34:10.584 [2024-07-13 08:21:02.214023] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:10.584 [2024-07-13 08:21:02.214159] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:10.584 [2024-07-13 08:21:02.214186] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:10.584 [2024-07-13 08:21:02.214201] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:10.584 [2024-07-13 08:21:02.214214] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:10.584 [2024-07-13 08:21:02.214246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:10.584 qpair failed and we were unable to recover it. 00:34:10.584 [2024-07-13 08:21:02.224023] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:10.584 [2024-07-13 08:21:02.224150] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:10.584 [2024-07-13 08:21:02.224176] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:10.584 [2024-07-13 08:21:02.224191] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:10.584 [2024-07-13 08:21:02.224204] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:10.584 [2024-07-13 08:21:02.224236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:10.584 qpair failed and we were unable to recover it. 
00:34:10.584 [2024-07-13 08:21:02.234058] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:10.584 [2024-07-13 08:21:02.234178] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:10.584 [2024-07-13 08:21:02.234205] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:10.584 [2024-07-13 08:21:02.234221] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:10.584 [2024-07-13 08:21:02.234235] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:10.584 [2024-07-13 08:21:02.234279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:10.584 qpair failed and we were unable to recover it. 00:34:10.584 [2024-07-13 08:21:02.244089] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:10.584 [2024-07-13 08:21:02.244214] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:10.584 [2024-07-13 08:21:02.244241] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:10.584 [2024-07-13 08:21:02.244256] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:10.584 [2024-07-13 08:21:02.244269] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:10.584 [2024-07-13 08:21:02.244301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:10.584 qpair failed and we were unable to recover it. 00:34:10.584 [2024-07-13 08:21:02.254151] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:10.584 [2024-07-13 08:21:02.254281] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:10.584 [2024-07-13 08:21:02.254308] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:10.584 [2024-07-13 08:21:02.254327] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:10.584 [2024-07-13 08:21:02.254341] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:10.585 [2024-07-13 08:21:02.254388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:10.585 qpair failed and we were unable to recover it. 
00:34:10.585 [2024-07-13 08:21:02.264155] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:10.585 [2024-07-13 08:21:02.264273] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:10.585 [2024-07-13 08:21:02.264300] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:10.585 [2024-07-13 08:21:02.264316] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:10.585 [2024-07-13 08:21:02.264331] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:10.585 [2024-07-13 08:21:02.264362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:10.585 qpair failed and we were unable to recover it. 00:34:10.585 [2024-07-13 08:21:02.274173] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:10.585 [2024-07-13 08:21:02.274296] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:10.585 [2024-07-13 08:21:02.274322] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:10.585 [2024-07-13 08:21:02.274337] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:10.585 [2024-07-13 08:21:02.274350] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:10.585 [2024-07-13 08:21:02.274394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:10.585 qpair failed and we were unable to recover it. 00:34:10.585 [2024-07-13 08:21:02.284223] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:10.585 [2024-07-13 08:21:02.284347] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:10.585 [2024-07-13 08:21:02.284378] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:10.585 [2024-07-13 08:21:02.284395] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:10.585 [2024-07-13 08:21:02.284408] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:10.585 [2024-07-13 08:21:02.284439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:10.585 qpair failed and we were unable to recover it. 
00:34:10.585 [2024-07-13 08:21:02.294251] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:10.585 [2024-07-13 08:21:02.294420] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:10.585 [2024-07-13 08:21:02.294447] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:10.585 [2024-07-13 08:21:02.294463] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:10.585 [2024-07-13 08:21:02.294476] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:10.585 [2024-07-13 08:21:02.294519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:10.585 qpair failed and we were unable to recover it. 00:34:10.585 [2024-07-13 08:21:02.304245] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:10.585 [2024-07-13 08:21:02.304368] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:10.585 [2024-07-13 08:21:02.304394] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:10.585 [2024-07-13 08:21:02.304410] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:10.585 [2024-07-13 08:21:02.304425] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:10.585 [2024-07-13 08:21:02.304456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:10.585 qpair failed and we were unable to recover it. 00:34:10.585 [2024-07-13 08:21:02.314284] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:10.585 [2024-07-13 08:21:02.314421] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:10.585 [2024-07-13 08:21:02.314447] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:10.585 [2024-07-13 08:21:02.314463] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:10.585 [2024-07-13 08:21:02.314478] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:10.585 [2024-07-13 08:21:02.314509] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:10.585 qpair failed and we were unable to recover it. 
00:34:10.847 [2024-07-13 08:21:02.324375] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:10.847 [2024-07-13 08:21:02.324505] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:10.847 [2024-07-13 08:21:02.324532] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:10.847 [2024-07-13 08:21:02.324548] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:10.847 [2024-07-13 08:21:02.324562] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:10.847 [2024-07-13 08:21:02.324612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:10.847 qpair failed and we were unable to recover it. 00:34:10.847 [2024-07-13 08:21:02.334342] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:10.847 [2024-07-13 08:21:02.334468] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:10.847 [2024-07-13 08:21:02.334495] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:10.847 [2024-07-13 08:21:02.334510] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:10.847 [2024-07-13 08:21:02.334524] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:10.847 [2024-07-13 08:21:02.334553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:10.847 qpair failed and we were unable to recover it. 00:34:10.847 [2024-07-13 08:21:02.344493] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:10.847 [2024-07-13 08:21:02.344630] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:10.847 [2024-07-13 08:21:02.344657] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:10.847 [2024-07-13 08:21:02.344673] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:10.847 [2024-07-13 08:21:02.344686] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:10.847 [2024-07-13 08:21:02.344717] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:10.847 qpair failed and we were unable to recover it. 
00:34:10.847 [2024-07-13 08:21:02.354422] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:10.847 [2024-07-13 08:21:02.354549] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:10.847 [2024-07-13 08:21:02.354579] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:10.847 [2024-07-13 08:21:02.354598] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:10.847 [2024-07-13 08:21:02.354611] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:10.847 [2024-07-13 08:21:02.354659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:10.847 qpair failed and we were unable to recover it. 00:34:10.847 [2024-07-13 08:21:02.364564] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:10.847 [2024-07-13 08:21:02.364696] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:10.847 [2024-07-13 08:21:02.364723] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:10.847 [2024-07-13 08:21:02.364739] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:10.847 [2024-07-13 08:21:02.364753] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:10.847 [2024-07-13 08:21:02.364799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:10.847 qpair failed and we were unable to recover it. 00:34:10.847 [2024-07-13 08:21:02.374467] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:10.847 [2024-07-13 08:21:02.374602] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:10.847 [2024-07-13 08:21:02.374635] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:10.847 [2024-07-13 08:21:02.374655] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:10.847 [2024-07-13 08:21:02.374671] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:10.847 [2024-07-13 08:21:02.374717] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:10.847 qpair failed and we were unable to recover it. 
00:34:10.847 [2024-07-13 08:21:02.384516] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:10.847 [2024-07-13 08:21:02.384643] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:10.847 [2024-07-13 08:21:02.384670] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:10.847 [2024-07-13 08:21:02.384686] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:10.847 [2024-07-13 08:21:02.384699] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:10.847 [2024-07-13 08:21:02.384729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:10.847 qpair failed and we were unable to recover it. 00:34:10.847 [2024-07-13 08:21:02.394531] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:10.847 [2024-07-13 08:21:02.394654] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:10.847 [2024-07-13 08:21:02.394681] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:10.847 [2024-07-13 08:21:02.394697] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:10.847 [2024-07-13 08:21:02.394711] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:10.847 [2024-07-13 08:21:02.394741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:10.847 qpair failed and we were unable to recover it. 00:34:10.848 [2024-07-13 08:21:02.404624] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:10.848 [2024-07-13 08:21:02.404750] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:10.848 [2024-07-13 08:21:02.404778] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:10.848 [2024-07-13 08:21:02.404793] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:10.848 [2024-07-13 08:21:02.404807] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:10.848 [2024-07-13 08:21:02.404836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:10.848 qpair failed and we were unable to recover it. 
00:34:10.848 [2024-07-13 08:21:02.414567] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:10.848 [2024-07-13 08:21:02.414695] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:10.848 [2024-07-13 08:21:02.414721] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:10.848 [2024-07-13 08:21:02.414737] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:10.848 [2024-07-13 08:21:02.414751] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:10.848 [2024-07-13 08:21:02.414786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:10.848 qpair failed and we were unable to recover it. 00:34:10.848 [2024-07-13 08:21:02.424584] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:10.848 [2024-07-13 08:21:02.424717] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:10.848 [2024-07-13 08:21:02.424744] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:10.848 [2024-07-13 08:21:02.424761] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:10.848 [2024-07-13 08:21:02.424775] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:10.848 [2024-07-13 08:21:02.424819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:10.848 qpair failed and we were unable to recover it. 00:34:10.848 [2024-07-13 08:21:02.434605] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:10.848 [2024-07-13 08:21:02.434730] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:10.848 [2024-07-13 08:21:02.434756] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:10.848 [2024-07-13 08:21:02.434771] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:10.848 [2024-07-13 08:21:02.434785] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:10.848 [2024-07-13 08:21:02.434814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:10.848 qpair failed and we were unable to recover it. 
00:34:10.848 [2024-07-13 08:21:02.444656] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:10.848 [2024-07-13 08:21:02.444852] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:10.848 [2024-07-13 08:21:02.444887] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:10.848 [2024-07-13 08:21:02.444903] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:10.848 [2024-07-13 08:21:02.444917] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:10.848 [2024-07-13 08:21:02.444960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:10.848 qpair failed and we were unable to recover it. 00:34:10.848 [2024-07-13 08:21:02.454702] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:10.848 [2024-07-13 08:21:02.454830] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:10.848 [2024-07-13 08:21:02.454857] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:10.848 [2024-07-13 08:21:02.454880] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:10.848 [2024-07-13 08:21:02.454895] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:10.848 [2024-07-13 08:21:02.454925] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:10.848 qpair failed and we were unable to recover it. 00:34:10.848 [2024-07-13 08:21:02.464693] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:10.848 [2024-07-13 08:21:02.464823] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:10.848 [2024-07-13 08:21:02.464850] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:10.848 [2024-07-13 08:21:02.464874] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:10.848 [2024-07-13 08:21:02.464890] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:10.848 [2024-07-13 08:21:02.464920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:10.848 qpair failed and we were unable to recover it. 
00:34:10.848 [2024-07-13 08:21:02.474740] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:10.848 [2024-07-13 08:21:02.474893] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:10.848 [2024-07-13 08:21:02.474921] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:10.848 [2024-07-13 08:21:02.474936] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:10.848 [2024-07-13 08:21:02.474950] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:10.848 [2024-07-13 08:21:02.474980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:10.848 qpair failed and we were unable to recover it. 00:34:10.848 [2024-07-13 08:21:02.484787] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:10.848 [2024-07-13 08:21:02.484923] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:10.848 [2024-07-13 08:21:02.484950] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:10.848 [2024-07-13 08:21:02.484966] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:10.848 [2024-07-13 08:21:02.484979] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:10.848 [2024-07-13 08:21:02.485009] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:10.848 qpair failed and we were unable to recover it. 00:34:10.848 [2024-07-13 08:21:02.494810] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:10.848 [2024-07-13 08:21:02.494986] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:10.848 [2024-07-13 08:21:02.495013] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:10.848 [2024-07-13 08:21:02.495029] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:10.848 [2024-07-13 08:21:02.495042] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:10.848 [2024-07-13 08:21:02.495073] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:10.848 qpair failed and we were unable to recover it. 
00:34:10.848 [2024-07-13 08:21:02.504801] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:10.848 [2024-07-13 08:21:02.504939] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:10.848 [2024-07-13 08:21:02.504967] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:10.848 [2024-07-13 08:21:02.504985] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:10.848 [2024-07-13 08:21:02.505005] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:10.848 [2024-07-13 08:21:02.505037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:10.848 qpair failed and we were unable to recover it. 00:34:10.848 [2024-07-13 08:21:02.514874] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:10.848 [2024-07-13 08:21:02.514994] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:10.848 [2024-07-13 08:21:02.515023] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:10.848 [2024-07-13 08:21:02.515039] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:10.848 [2024-07-13 08:21:02.515053] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:10.848 [2024-07-13 08:21:02.515096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:10.848 qpair failed and we were unable to recover it. 00:34:10.848 [2024-07-13 08:21:02.524895] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:10.848 [2024-07-13 08:21:02.525021] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:10.848 [2024-07-13 08:21:02.525048] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:10.848 [2024-07-13 08:21:02.525063] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:10.848 [2024-07-13 08:21:02.525077] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:10.848 [2024-07-13 08:21:02.525108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:10.848 qpair failed and we were unable to recover it. 
00:34:10.848 [2024-07-13 08:21:02.534907] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:10.848 [2024-07-13 08:21:02.535026] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:10.848 [2024-07-13 08:21:02.535054] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:10.848 [2024-07-13 08:21:02.535070] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:10.848 [2024-07-13 08:21:02.535083] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:10.848 [2024-07-13 08:21:02.535113] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:10.848 qpair failed and we were unable to recover it. 00:34:10.848 [2024-07-13 08:21:02.544996] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:10.849 [2024-07-13 08:21:02.545167] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:10.849 [2024-07-13 08:21:02.545194] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:10.849 [2024-07-13 08:21:02.545209] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:10.849 [2024-07-13 08:21:02.545223] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:10.849 [2024-07-13 08:21:02.545252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:10.849 qpair failed and we were unable to recover it. 00:34:10.849 [2024-07-13 08:21:02.554941] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:10.849 [2024-07-13 08:21:02.555067] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:10.849 [2024-07-13 08:21:02.555095] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:10.849 [2024-07-13 08:21:02.555111] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:10.849 [2024-07-13 08:21:02.555124] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:10.849 [2024-07-13 08:21:02.555154] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:10.849 qpair failed and we were unable to recover it. 
00:34:10.849 [2024-07-13 08:21:02.564992] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:10.849 [2024-07-13 08:21:02.565120] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:10.849 [2024-07-13 08:21:02.565147] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:10.849 [2024-07-13 08:21:02.565163] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:10.849 [2024-07-13 08:21:02.565176] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:10.849 [2024-07-13 08:21:02.565206] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:10.849 qpair failed and we were unable to recover it. 00:34:10.849 [2024-07-13 08:21:02.574993] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:10.849 [2024-07-13 08:21:02.575120] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:10.849 [2024-07-13 08:21:02.575148] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:10.849 [2024-07-13 08:21:02.575164] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:10.849 [2024-07-13 08:21:02.575177] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:10.849 [2024-07-13 08:21:02.575207] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:10.849 qpair failed and we were unable to recover it. 00:34:11.109 [2024-07-13 08:21:02.585003] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.109 [2024-07-13 08:21:02.585127] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.109 [2024-07-13 08:21:02.585153] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.109 [2024-07-13 08:21:02.585167] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.109 [2024-07-13 08:21:02.585180] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:11.109 [2024-07-13 08:21:02.585210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.109 qpair failed and we were unable to recover it. 
00:34:11.109 [2024-07-13 08:21:02.595098] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.109 [2024-07-13 08:21:02.595227] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.109 [2024-07-13 08:21:02.595256] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.109 [2024-07-13 08:21:02.595277] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.109 [2024-07-13 08:21:02.595307] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:11.109 [2024-07-13 08:21:02.595336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.109 qpair failed and we were unable to recover it. 00:34:11.109 [2024-07-13 08:21:02.605182] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.109 [2024-07-13 08:21:02.605310] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.109 [2024-07-13 08:21:02.605338] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.109 [2024-07-13 08:21:02.605356] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.109 [2024-07-13 08:21:02.605370] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:11.109 [2024-07-13 08:21:02.605416] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.109 qpair failed and we were unable to recover it. 00:34:11.109 [2024-07-13 08:21:02.615104] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.109 [2024-07-13 08:21:02.615225] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.109 [2024-07-13 08:21:02.615252] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.109 [2024-07-13 08:21:02.615268] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.109 [2024-07-13 08:21:02.615282] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:11.109 [2024-07-13 08:21:02.615312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.109 qpair failed and we were unable to recover it. 
00:34:11.109 [2024-07-13 08:21:02.625173] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.109 [2024-07-13 08:21:02.625311] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.109 [2024-07-13 08:21:02.625341] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.109 [2024-07-13 08:21:02.625373] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.109 [2024-07-13 08:21:02.625387] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:11.109 [2024-07-13 08:21:02.625418] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.109 qpair failed and we were unable to recover it. 00:34:11.109 [2024-07-13 08:21:02.635156] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.109 [2024-07-13 08:21:02.635278] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.109 [2024-07-13 08:21:02.635305] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.109 [2024-07-13 08:21:02.635320] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.109 [2024-07-13 08:21:02.635334] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:11.109 [2024-07-13 08:21:02.635364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.109 qpair failed and we were unable to recover it. 00:34:11.109 [2024-07-13 08:21:02.645216] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.110 [2024-07-13 08:21:02.645378] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.110 [2024-07-13 08:21:02.645406] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.110 [2024-07-13 08:21:02.645421] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.110 [2024-07-13 08:21:02.645449] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:11.110 [2024-07-13 08:21:02.645479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.110 qpair failed and we were unable to recover it. 
00:34:11.110 [2024-07-13 08:21:02.655225] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.110 [2024-07-13 08:21:02.655349] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.110 [2024-07-13 08:21:02.655377] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.110 [2024-07-13 08:21:02.655393] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.110 [2024-07-13 08:21:02.655406] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:11.110 [2024-07-13 08:21:02.655448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.110 qpair failed and we were unable to recover it. 00:34:11.110 [2024-07-13 08:21:02.665242] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.110 [2024-07-13 08:21:02.665367] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.110 [2024-07-13 08:21:02.665394] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.110 [2024-07-13 08:21:02.665409] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.110 [2024-07-13 08:21:02.665423] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:11.110 [2024-07-13 08:21:02.665454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.110 qpair failed and we were unable to recover it. 00:34:11.110 [2024-07-13 08:21:02.675306] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.110 [2024-07-13 08:21:02.675425] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.110 [2024-07-13 08:21:02.675452] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.110 [2024-07-13 08:21:02.675468] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.110 [2024-07-13 08:21:02.675481] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:11.110 [2024-07-13 08:21:02.675512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.110 qpair failed and we were unable to recover it. 
00:34:11.110 [2024-07-13 08:21:02.685373] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.110 [2024-07-13 08:21:02.685536] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.110 [2024-07-13 08:21:02.685564] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.110 [2024-07-13 08:21:02.685585] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.110 [2024-07-13 08:21:02.685615] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:11.110 [2024-07-13 08:21:02.685645] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.110 qpair failed and we were unable to recover it. 00:34:11.110 [2024-07-13 08:21:02.695356] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.110 [2024-07-13 08:21:02.695497] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.110 [2024-07-13 08:21:02.695524] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.110 [2024-07-13 08:21:02.695544] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.110 [2024-07-13 08:21:02.695559] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:11.110 [2024-07-13 08:21:02.695605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.110 qpair failed and we were unable to recover it. 00:34:11.110 [2024-07-13 08:21:02.705385] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.110 [2024-07-13 08:21:02.705559] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.110 [2024-07-13 08:21:02.705585] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.110 [2024-07-13 08:21:02.705617] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.110 [2024-07-13 08:21:02.705631] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:11.110 [2024-07-13 08:21:02.705677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.110 qpair failed and we were unable to recover it. 
00:34:11.110 [2024-07-13 08:21:02.715407] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.110 [2024-07-13 08:21:02.715530] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.110 [2024-07-13 08:21:02.715557] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.110 [2024-07-13 08:21:02.715572] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.110 [2024-07-13 08:21:02.715587] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:11.110 [2024-07-13 08:21:02.715617] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.110 qpair failed and we were unable to recover it. 00:34:11.110 [2024-07-13 08:21:02.725423] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.110 [2024-07-13 08:21:02.725556] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.110 [2024-07-13 08:21:02.725583] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.110 [2024-07-13 08:21:02.725598] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.110 [2024-07-13 08:21:02.725613] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:11.110 [2024-07-13 08:21:02.725643] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.110 qpair failed and we were unable to recover it. 00:34:11.110 [2024-07-13 08:21:02.735465] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.110 [2024-07-13 08:21:02.735589] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.110 [2024-07-13 08:21:02.735615] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.110 [2024-07-13 08:21:02.735630] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.110 [2024-07-13 08:21:02.735645] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:11.110 [2024-07-13 08:21:02.735675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.110 qpair failed and we were unable to recover it. 
00:34:11.110 [2024-07-13 08:21:02.745491] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.110 [2024-07-13 08:21:02.745626] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.110 [2024-07-13 08:21:02.745655] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.110 [2024-07-13 08:21:02.745674] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.110 [2024-07-13 08:21:02.745687] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:11.110 [2024-07-13 08:21:02.745733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.110 qpair failed and we were unable to recover it. 00:34:11.110 [2024-07-13 08:21:02.755528] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.110 [2024-07-13 08:21:02.755699] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.110 [2024-07-13 08:21:02.755728] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.110 [2024-07-13 08:21:02.755745] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.110 [2024-07-13 08:21:02.755773] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:11.110 [2024-07-13 08:21:02.755803] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.110 qpair failed and we were unable to recover it. 00:34:11.110 [2024-07-13 08:21:02.765552] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.110 [2024-07-13 08:21:02.765675] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.110 [2024-07-13 08:21:02.765702] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.110 [2024-07-13 08:21:02.765719] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.110 [2024-07-13 08:21:02.765732] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:11.110 [2024-07-13 08:21:02.765776] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.110 qpair failed and we were unable to recover it. 
00:34:11.110 [2024-07-13 08:21:02.775661] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.110 [2024-07-13 08:21:02.775801] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.110 [2024-07-13 08:21:02.775834] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.110 [2024-07-13 08:21:02.775850] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.110 [2024-07-13 08:21:02.775863] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:11.110 [2024-07-13 08:21:02.775903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.110 qpair failed and we were unable to recover it. 00:34:11.110 [2024-07-13 08:21:02.785613] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.110 [2024-07-13 08:21:02.785743] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.110 [2024-07-13 08:21:02.785774] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.111 [2024-07-13 08:21:02.785793] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.111 [2024-07-13 08:21:02.785807] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:11.111 [2024-07-13 08:21:02.785871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.111 qpair failed and we were unable to recover it. 00:34:11.111 [2024-07-13 08:21:02.795671] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.111 [2024-07-13 08:21:02.795815] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.111 [2024-07-13 08:21:02.795843] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.111 [2024-07-13 08:21:02.795859] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.111 [2024-07-13 08:21:02.795881] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:11.111 [2024-07-13 08:21:02.795916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.111 qpair failed and we were unable to recover it. 
00:34:11.111 [2024-07-13 08:21:02.805664] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.111 [2024-07-13 08:21:02.805793] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.111 [2024-07-13 08:21:02.805820] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.111 [2024-07-13 08:21:02.805835] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.111 [2024-07-13 08:21:02.805848] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:11.111 [2024-07-13 08:21:02.805888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.111 qpair failed and we were unable to recover it. 00:34:11.111 [2024-07-13 08:21:02.815750] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.111 [2024-07-13 08:21:02.815893] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.111 [2024-07-13 08:21:02.815927] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.111 [2024-07-13 08:21:02.815946] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.111 [2024-07-13 08:21:02.815960] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:11.111 [2024-07-13 08:21:02.815999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.111 qpair failed and we were unable to recover it. 00:34:11.111 [2024-07-13 08:21:02.825705] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.111 [2024-07-13 08:21:02.825834] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.111 [2024-07-13 08:21:02.825873] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.111 [2024-07-13 08:21:02.825891] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.111 [2024-07-13 08:21:02.825906] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:11.111 [2024-07-13 08:21:02.825936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.111 qpair failed and we were unable to recover it. 
00:34:11.111 [2024-07-13 08:21:02.835803] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.111 [2024-07-13 08:21:02.835930] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.111 [2024-07-13 08:21:02.835957] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.111 [2024-07-13 08:21:02.835973] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.111 [2024-07-13 08:21:02.835986] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:11.111 [2024-07-13 08:21:02.836017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.111 qpair failed and we were unable to recover it. 00:34:11.368 [2024-07-13 08:21:02.845768] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.368 [2024-07-13 08:21:02.845914] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.368 [2024-07-13 08:21:02.845941] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.368 [2024-07-13 08:21:02.845956] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.368 [2024-07-13 08:21:02.845978] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:11.368 [2024-07-13 08:21:02.846008] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.368 qpair failed and we were unable to recover it. 00:34:11.368 [2024-07-13 08:21:02.855794] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.368 [2024-07-13 08:21:02.855931] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.368 [2024-07-13 08:21:02.855958] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.368 [2024-07-13 08:21:02.855973] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.368 [2024-07-13 08:21:02.855987] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:11.368 [2024-07-13 08:21:02.856017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.368 qpair failed and we were unable to recover it. 
00:34:11.368 [2024-07-13 08:21:02.865856] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.368 [2024-07-13 08:21:02.865986] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.368 [2024-07-13 08:21:02.866020] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.368 [2024-07-13 08:21:02.866036] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.368 [2024-07-13 08:21:02.866050] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:11.368 [2024-07-13 08:21:02.866081] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.368 qpair failed and we were unable to recover it. 00:34:11.368 [2024-07-13 08:21:02.875931] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.368 [2024-07-13 08:21:02.876057] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.368 [2024-07-13 08:21:02.876083] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.368 [2024-07-13 08:21:02.876101] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.368 [2024-07-13 08:21:02.876115] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:11.368 [2024-07-13 08:21:02.876146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.368 qpair failed and we were unable to recover it. 00:34:11.368 [2024-07-13 08:21:02.885891] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.368 [2024-07-13 08:21:02.886026] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.368 [2024-07-13 08:21:02.886052] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.368 [2024-07-13 08:21:02.886067] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.368 [2024-07-13 08:21:02.886082] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:11.368 [2024-07-13 08:21:02.886112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.368 qpair failed and we were unable to recover it. 
00:34:11.368 [2024-07-13 08:21:02.895920] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.368 [2024-07-13 08:21:02.896051] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.368 [2024-07-13 08:21:02.896077] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.368 [2024-07-13 08:21:02.896093] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.368 [2024-07-13 08:21:02.896107] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:11.368 [2024-07-13 08:21:02.896137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.368 qpair failed and we were unable to recover it. 00:34:11.368 [2024-07-13 08:21:02.905946] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.368 [2024-07-13 08:21:02.906068] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.368 [2024-07-13 08:21:02.906093] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.368 [2024-07-13 08:21:02.906109] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.368 [2024-07-13 08:21:02.906128] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:11.369 [2024-07-13 08:21:02.906163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.369 qpair failed and we were unable to recover it. 00:34:11.369 [2024-07-13 08:21:02.915983] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.369 [2024-07-13 08:21:02.916112] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.369 [2024-07-13 08:21:02.916139] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.369 [2024-07-13 08:21:02.916158] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.369 [2024-07-13 08:21:02.916171] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:11.369 [2024-07-13 08:21:02.916216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.369 qpair failed and we were unable to recover it. 
00:34:11.369 [2024-07-13 08:21:02.926020] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.369 [2024-07-13 08:21:02.926158] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.369 [2024-07-13 08:21:02.926185] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.369 [2024-07-13 08:21:02.926205] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.369 [2024-07-13 08:21:02.926219] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:11.369 [2024-07-13 08:21:02.926264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.369 qpair failed and we were unable to recover it. 00:34:11.369 [2024-07-13 08:21:02.936044] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.369 [2024-07-13 08:21:02.936176] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.369 [2024-07-13 08:21:02.936203] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.369 [2024-07-13 08:21:02.936222] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.369 [2024-07-13 08:21:02.936236] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:11.369 [2024-07-13 08:21:02.936282] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.369 qpair failed and we were unable to recover it. 00:34:11.369 [2024-07-13 08:21:02.946075] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.369 [2024-07-13 08:21:02.946202] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.369 [2024-07-13 08:21:02.946228] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.369 [2024-07-13 08:21:02.946243] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.369 [2024-07-13 08:21:02.946257] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:11.369 [2024-07-13 08:21:02.946287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.369 qpair failed and we were unable to recover it. 
00:34:11.369 [2024-07-13 08:21:02.956089] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.369 [2024-07-13 08:21:02.956226] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.369 [2024-07-13 08:21:02.956252] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.369 [2024-07-13 08:21:02.956268] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.369 [2024-07-13 08:21:02.956281] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:11.369 [2024-07-13 08:21:02.956311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.369 qpair failed and we were unable to recover it. 00:34:11.369 [2024-07-13 08:21:02.966130] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.369 [2024-07-13 08:21:02.966259] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.369 [2024-07-13 08:21:02.966285] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.369 [2024-07-13 08:21:02.966299] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.369 [2024-07-13 08:21:02.966314] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:11.369 [2024-07-13 08:21:02.966344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.369 qpair failed and we were unable to recover it. 00:34:11.369 [2024-07-13 08:21:02.976155] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.369 [2024-07-13 08:21:02.976292] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.369 [2024-07-13 08:21:02.976318] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.369 [2024-07-13 08:21:02.976334] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.369 [2024-07-13 08:21:02.976347] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:11.369 [2024-07-13 08:21:02.976390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.369 qpair failed and we were unable to recover it. 
00:34:11.369 [2024-07-13 08:21:02.986157] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.369 [2024-07-13 08:21:02.986284] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.369 [2024-07-13 08:21:02.986310] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.369 [2024-07-13 08:21:02.986326] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.369 [2024-07-13 08:21:02.986340] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:11.369 [2024-07-13 08:21:02.986370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.369 qpair failed and we were unable to recover it. 00:34:11.369 [2024-07-13 08:21:02.996277] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.369 [2024-07-13 08:21:02.996397] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.369 [2024-07-13 08:21:02.996423] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.369 [2024-07-13 08:21:02.996445] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.369 [2024-07-13 08:21:02.996459] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:11.369 [2024-07-13 08:21:02.996490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.369 qpair failed and we were unable to recover it. 00:34:11.369 [2024-07-13 08:21:03.006234] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.369 [2024-07-13 08:21:03.006371] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.369 [2024-07-13 08:21:03.006397] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.369 [2024-07-13 08:21:03.006412] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.369 [2024-07-13 08:21:03.006427] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:11.369 [2024-07-13 08:21:03.006457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.369 qpair failed and we were unable to recover it. 
00:34:11.369 [2024-07-13 08:21:03.016272] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.369 [2024-07-13 08:21:03.016398] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.369 [2024-07-13 08:21:03.016425] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.369 [2024-07-13 08:21:03.016440] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.369 [2024-07-13 08:21:03.016455] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:11.369 [2024-07-13 08:21:03.016485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.369 qpair failed and we were unable to recover it. 00:34:11.369 [2024-07-13 08:21:03.026330] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.369 [2024-07-13 08:21:03.026508] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.369 [2024-07-13 08:21:03.026534] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.369 [2024-07-13 08:21:03.026565] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.369 [2024-07-13 08:21:03.026579] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:11.369 [2024-07-13 08:21:03.026624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.369 qpair failed and we were unable to recover it. 00:34:11.369 [2024-07-13 08:21:03.036353] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.369 [2024-07-13 08:21:03.036491] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.369 [2024-07-13 08:21:03.036519] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.369 [2024-07-13 08:21:03.036539] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.369 [2024-07-13 08:21:03.036553] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:11.369 [2024-07-13 08:21:03.036600] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.369 qpair failed and we were unable to recover it. 
00:34:11.369 [2024-07-13 08:21:03.046324] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.369 [2024-07-13 08:21:03.046476] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.369 [2024-07-13 08:21:03.046502] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.369 [2024-07-13 08:21:03.046517] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.369 [2024-07-13 08:21:03.046531] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:11.369 [2024-07-13 08:21:03.046573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.369 qpair failed and we were unable to recover it. 00:34:11.369 [2024-07-13 08:21:03.056375] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.369 [2024-07-13 08:21:03.056512] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.369 [2024-07-13 08:21:03.056538] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.369 [2024-07-13 08:21:03.056553] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.369 [2024-07-13 08:21:03.056567] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:11.369 [2024-07-13 08:21:03.056611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.369 qpair failed and we were unable to recover it. 00:34:11.369 [2024-07-13 08:21:03.066393] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.369 [2024-07-13 08:21:03.066519] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.369 [2024-07-13 08:21:03.066547] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.369 [2024-07-13 08:21:03.066562] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.369 [2024-07-13 08:21:03.066576] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:11.369 [2024-07-13 08:21:03.066607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.369 qpair failed and we were unable to recover it. 
00:34:11.369 [2024-07-13 08:21:03.076453] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.369 [2024-07-13 08:21:03.076582] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.369 [2024-07-13 08:21:03.076612] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.369 [2024-07-13 08:21:03.076629] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.369 [2024-07-13 08:21:03.076643] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:11.369 [2024-07-13 08:21:03.076687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.369 qpair failed and we were unable to recover it. 00:34:11.369 [2024-07-13 08:21:03.086538] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.369 [2024-07-13 08:21:03.086676] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.369 [2024-07-13 08:21:03.086703] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.369 [2024-07-13 08:21:03.086724] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.369 [2024-07-13 08:21:03.086740] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:11.369 [2024-07-13 08:21:03.086773] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.369 qpair failed and we were unable to recover it. 00:34:11.369 [2024-07-13 08:21:03.096496] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.369 [2024-07-13 08:21:03.096630] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.369 [2024-07-13 08:21:03.096657] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.369 [2024-07-13 08:21:03.096673] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.369 [2024-07-13 08:21:03.096687] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:11.369 [2024-07-13 08:21:03.096731] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.369 qpair failed and we were unable to recover it. 
00:34:11.627 [2024-07-13 08:21:03.106515] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.627 [2024-07-13 08:21:03.106674] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.627 [2024-07-13 08:21:03.106701] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.627 [2024-07-13 08:21:03.106716] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.627 [2024-07-13 08:21:03.106745] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:11.627 [2024-07-13 08:21:03.106775] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.627 qpair failed and we were unable to recover it. 00:34:11.627 [2024-07-13 08:21:03.116593] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.627 [2024-07-13 08:21:03.116723] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.627 [2024-07-13 08:21:03.116749] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.627 [2024-07-13 08:21:03.116764] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.627 [2024-07-13 08:21:03.116779] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:11.627 [2024-07-13 08:21:03.116809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.627 qpair failed and we were unable to recover it. 00:34:11.627 [2024-07-13 08:21:03.126577] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.627 [2024-07-13 08:21:03.126704] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.627 [2024-07-13 08:21:03.126730] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.627 [2024-07-13 08:21:03.126745] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.627 [2024-07-13 08:21:03.126759] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:11.627 [2024-07-13 08:21:03.126791] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.627 qpair failed and we were unable to recover it. 
00:34:11.627 [2024-07-13 08:21:03.136590] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.627 [2024-07-13 08:21:03.136729] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.627 [2024-07-13 08:21:03.136756] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.627 [2024-07-13 08:21:03.136771] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.627 [2024-07-13 08:21:03.136784] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:11.627 [2024-07-13 08:21:03.136815] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.627 qpair failed and we were unable to recover it. 00:34:11.627 [2024-07-13 08:21:03.146619] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.627 [2024-07-13 08:21:03.146760] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.627 [2024-07-13 08:21:03.146786] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.627 [2024-07-13 08:21:03.146801] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.627 [2024-07-13 08:21:03.146815] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:11.627 [2024-07-13 08:21:03.146845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.627 qpair failed and we were unable to recover it. 00:34:11.627 [2024-07-13 08:21:03.156657] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:11.627 [2024-07-13 08:21:03.156803] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:11.627 [2024-07-13 08:21:03.156830] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:11.627 [2024-07-13 08:21:03.156845] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:11.627 [2024-07-13 08:21:03.156861] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fe4000b90 00:34:11.627 [2024-07-13 08:21:03.156913] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:11.627 qpair failed and we were unable to recover it. 
[... dozens of identical CONNECT retry failures (rc -5, sct 1, sc 130, tqpair=0x7f8fe4000b90, qpair id 2) from 08:21:03.166 through 08:21:03.698 omitted; each ended with "qpair failed and we were unable to recover it." ...]
00:34:12.147 [2024-07-13 08:21:03.708245] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:12.147 [2024-07-13 08:21:03.708395] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:12.147 [2024-07-13 08:21:03.708426] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:12.147 [2024-07-13 08:21:03.708442] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:12.147 [2024-07-13 08:21:03.708461] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xacd600 00:34:12.147 [2024-07-13 08:21:03.708506] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:12.147 qpair failed and we were unable to recover it. 00:34:12.147 [2024-07-13 08:21:03.718375] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:12.147 [2024-07-13 08:21:03.718506] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:12.147 [2024-07-13 08:21:03.718534] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:12.147 [2024-07-13 08:21:03.718549] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:12.147 [2024-07-13 08:21:03.718563] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xacd600 00:34:12.147 [2024-07-13 08:21:03.718593] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:12.147 qpair failed and we were unable to recover it. 00:34:12.147 [2024-07-13 08:21:03.728300] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:12.147 [2024-07-13 08:21:03.728485] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:12.147 [2024-07-13 08:21:03.728518] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:12.147 [2024-07-13 08:21:03.728535] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:12.147 [2024-07-13 08:21:03.728549] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fec000b90 00:34:12.147 [2024-07-13 08:21:03.728581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:12.147 qpair failed and we were unable to recover it. 
00:34:12.147 [2024-07-13 08:21:03.738370] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:12.147 [2024-07-13 08:21:03.738525] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:12.147 [2024-07-13 08:21:03.738553] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:12.147 [2024-07-13 08:21:03.738568] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:12.147 [2024-07-13 08:21:03.738582] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f8fec000b90 00:34:12.147 [2024-07-13 08:21:03.738613] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:12.147 qpair failed and we were unable to recover it. 00:34:12.147 [2024-07-13 08:21:03.738736] nvme_ctrlr.c:4476:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:34:12.147 A controller has encountered a failure and is being reset. 00:34:12.147 Controller properly reset. 00:34:12.147 Initializing NVMe Controllers 00:34:12.147 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:34:12.147 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:34:12.147 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:34:12.147 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:34:12.147 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:34:12.147 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:34:12.147 Initialization complete. Launching workers. 
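The recovery path logged above (the poll fails, the qpair reports CQ transport error -6, i.e. -ENXIO, keep-alive submission fails, and the host resets and re-attaches the controller) follows SPDK's usual host-side polling pattern. A minimal sketch, assuming an already-connected controller/qpair pair; spdk_nvme_qpair_process_completions is the call named in the log, while the helper name and recovery policy are illustrative:

```c
#include <stdio.h>
#include "spdk/nvme.h"

/* Hedged sketch of the host-side poll loop implied by the log: a negative
 * return from spdk_nvme_qpair_process_completions signals a transport-level
 * failure (-6 here, "No such device or address"), after which the
 * application resets the controller, as "Controller properly reset." shows. */
static void poll_and_recover(struct spdk_nvme_ctrlr *ctrlr,
			     struct spdk_nvme_qpair *qpair)
{
	int32_t rc = spdk_nvme_qpair_process_completions(qpair, 0 /* 0 = no limit */);

	if (rc < 0) {
		fprintf(stderr, "qpair failed (rc=%d), resetting controller\n", rc);
		if (spdk_nvme_ctrlr_reset(ctrlr) != 0) {
			fprintf(stderr, "controller reset failed\n");
		}
	}
}
```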
00:34:12.147 Starting thread on core 1 00:34:12.147 Starting thread on core 2 00:34:12.147 Starting thread on core 3 00:34:12.147 Starting thread on core 0 00:34:12.147 08:21:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:34:12.147 00:34:12.147 real 0m10.727s 00:34:12.147 user 0m17.928s 00:34:12.147 sys 0m5.623s 00:34:12.147 08:21:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:12.147 08:21:03 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:12.147 ************************************ 00:34:12.147 END TEST nvmf_target_disconnect_tc2 00:34:12.147 ************************************ 00:34:12.147 08:21:03 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:34:12.147 08:21:03 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:34:12.147 08:21:03 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:34:12.147 08:21:03 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:34:12.147 08:21:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:12.147 08:21:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:34:12.148 08:21:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:12.148 08:21:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:34:12.148 08:21:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:12.148 08:21:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:12.148 rmmod nvme_tcp 00:34:12.148 rmmod nvme_fabrics 00:34:12.148 rmmod nvme_keyring 00:34:12.406 08:21:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:12.406 08:21:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:34:12.406 08:21:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:34:12.406 08:21:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 2114569 ']' 00:34:12.406 08:21:03 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 2114569 00:34:12.406 08:21:03 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@948 -- # '[' -z 2114569 ']' 00:34:12.406 08:21:03 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # kill -0 2114569 00:34:12.406 08:21:03 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # uname 00:34:12.406 08:21:03 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:12.406 08:21:03 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2114569 00:34:12.406 08:21:03 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_4 00:34:12.406 08:21:03 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_4 = sudo ']' 00:34:12.406 08:21:03 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2114569' 00:34:12.406 killing process with pid 2114569 00:34:12.406 08:21:03 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@967 -- # kill 2114569 00:34:12.406 08:21:03 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # wait 2114569 00:34:12.666 
08:21:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:12.666 08:21:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:12.666 08:21:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:12.666 08:21:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:12.666 08:21:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:12.666 08:21:04 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:12.666 08:21:04 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:12.666 08:21:04 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:14.578 08:21:06 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:14.578 00:34:14.578 real 0m15.377s 00:34:14.578 user 0m43.809s 00:34:14.578 sys 0m7.516s 00:34:14.578 08:21:06 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:14.578 08:21:06 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:14.578 ************************************ 00:34:14.578 END TEST nvmf_target_disconnect 00:34:14.578 ************************************ 00:34:14.578 08:21:06 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:34:14.578 08:21:06 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host 00:34:14.578 08:21:06 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:14.578 08:21:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:14.578 08:21:06 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:34:14.578 00:34:14.578 real 27m6.351s 00:34:14.578 user 73m48.161s 00:34:14.578 sys 6m26.018s 00:34:14.578 08:21:06 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:14.578 08:21:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:14.578 ************************************ 00:34:14.578 END TEST nvmf_tcp 00:34:14.578 ************************************ 00:34:14.578 08:21:06 -- common/autotest_common.sh@1142 -- # return 0 00:34:14.578 08:21:06 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:34:14.578 08:21:06 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:34:14.578 08:21:06 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:34:14.578 08:21:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:14.578 08:21:06 -- common/autotest_common.sh@10 -- # set +x 00:34:14.578 ************************************ 00:34:14.578 START TEST spdkcli_nvmf_tcp 00:34:14.578 ************************************ 00:34:14.578 08:21:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:34:14.837 * Looking for test storage... 
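The nvmftestfini trace above reduces to a fixed teardown order: unload the kernel initiator modules, kill the target process, then dismantle the test network. A condensed sketch of the commands behind it (the pid and device names are the ones from this run; the netns deletion is an assumption about what _remove_spdk_ns does, since its output is redirected away in the trace):

    modprobe -v -r nvme-tcp              # rmmod nvme_tcp, nvme_fabrics, nvme_keyring
    modprobe -v -r nvme-fabrics          # retried with set +e until both modules are gone
    kill -9 2114569                      # the nvmf_tgt reactor process for this test
    _remove_spdk_ns                      # assumed: ip netns del cvl_0_0_ns_spdk, output discarded
    ip -4 addr flush cvl_0_1             # drop the initiator-side 10.0.0.1/24 address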
00:34:14.837 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:34:14.837 08:21:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:34:14.837 08:21:06 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:34:14.837 08:21:06 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:34:14.837 08:21:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:14.837 08:21:06 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:34:14.837 08:21:06 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:14.837 08:21:06 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:14.837 08:21:06 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:14.837 08:21:06 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:14.837 08:21:06 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:14.837 08:21:06 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:14.837 08:21:06 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:14.837 08:21:06 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:14.837 08:21:06 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:14.837 08:21:06 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:14.838 08:21:06 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:14.838 08:21:06 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:14.838 08:21:06 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:14.838 08:21:06 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:14.838 08:21:06 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:14.838 08:21:06 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:14.838 08:21:06 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:14.838 08:21:06 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:14.838 08:21:06 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:14.838 08:21:06 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:14.838 08:21:06 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:14.838 08:21:06 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:14.838 08:21:06 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:14.838 08:21:06 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:34:14.838 08:21:06 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:14.838 08:21:06 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:34:14.838 08:21:06 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:14.838 08:21:06 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:14.838 08:21:06 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:14.838 08:21:06 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:14.838 08:21:06 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:14.838 08:21:06 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:14.838 08:21:06 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:14.838 08:21:06 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:14.838 08:21:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:34:14.838 08:21:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:34:14.838 08:21:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:34:14.838 08:21:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:34:14.838 08:21:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:14.838 08:21:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:14.838 08:21:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:34:14.838 08:21:06 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=2115876 00:34:14.838 08:21:06 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:34:14.838 08:21:06 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 2115876 00:34:14.838 08:21:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@829 -- # '[' -z 2115876 ']' 00:34:14.838 08:21:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:14.838 08:21:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:14.838 08:21:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:14.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:14.838 08:21:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:14.838 08:21:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:14.838 [2024-07-13 08:21:06.421489] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:34:14.838 [2024-07-13 08:21:06.421571] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2115876 ] 00:34:14.838 EAL: No free 2048 kB hugepages reported on node 1 00:34:14.838 [2024-07-13 08:21:06.478512] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:14.838 [2024-07-13 08:21:06.563836] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:14.838 [2024-07-13 08:21:06.563839] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:15.096 08:21:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:15.097 08:21:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # return 0 00:34:15.097 08:21:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:34:15.097 08:21:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:15.097 08:21:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:15.097 08:21:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:34:15.097 08:21:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:34:15.097 08:21:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:34:15.097 08:21:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:15.097 08:21:06 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:15.097 08:21:06 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:34:15.097 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:34:15.097 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:34:15.097 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:34:15.097 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:34:15.097 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:34:15.097 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:34:15.097 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:34:15.097 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:34:15.097 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:34:15.097 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:15.097 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:15.097 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:34:15.097 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:15.097 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:15.097 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:34:15.097 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:15.097 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:34:15.097 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:34:15.097 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:15.097 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:34:15.097 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:34:15.097 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:34:15.097 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:34:15.097 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:15.097 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:34:15.097 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:34:15.097 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:34:15.097 ' 00:34:17.630 [2024-07-13 08:21:09.267026] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:19.004 [2024-07-13 08:21:10.507432] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:34:21.537 [2024-07-13 08:21:12.794607] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:34:23.436 [2024-07-13 08:21:14.769067] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:34:24.811 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:34:24.811 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:34:24.811 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:34:24.811 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:34:24.811 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:34:24.811 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:34:24.811 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:34:24.811 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:34:24.811 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:34:24.811 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:34:24.811 Executing 
command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:24.811 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:24.811 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:34:24.811 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:24.811 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:24.811 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:34:24.811 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:24.811 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:34:24.811 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:34:24.811 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:24.811 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:34:24.811 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:34:24.811 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:34:24.811 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:34:24.811 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:24.811 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:34:24.811 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:34:24.811 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:34:24.811 08:21:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:34:24.812 08:21:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:24.812 08:21:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:24.812 08:21:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:34:24.812 08:21:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:24.812 08:21:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:24.812 08:21:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:34:24.812 08:21:16 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:34:25.070 08:21:16 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:34:25.329 08:21:16 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:34:25.329 08:21:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:34:25.329 08:21:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:25.329 08:21:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:25.329 08:21:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:34:25.329 08:21:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:25.329 08:21:16 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:25.329 08:21:16 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:34:25.329 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:34:25.329 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:34:25.329 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:34:25.329 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:34:25.329 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:34:25.329 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:34:25.329 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:34:25.329 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:34:25.329 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:34:25.329 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:34:25.329 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:34:25.329 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:34:25.329 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:34:25.329 ' 00:34:30.603 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:34:30.603 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:34:30.603 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:34:30.603 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:34:30.603 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:34:30.603 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:34:30.603 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:34:30.603 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:34:30.603 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:34:30.603 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:34:30.603 Executing command: ['/bdevs/malloc delete Malloc4', 
'Malloc4', False] 00:34:30.603 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:34:30.603 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:34:30.603 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:34:30.603 08:21:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:34:30.603 08:21:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:30.603 08:21:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:30.603 08:21:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 2115876 00:34:30.603 08:21:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 2115876 ']' 00:34:30.603 08:21:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 2115876 00:34:30.603 08:21:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # uname 00:34:30.603 08:21:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:30.603 08:21:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2115876 00:34:30.603 08:21:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:34:30.603 08:21:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:34:30.603 08:21:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2115876' 00:34:30.603 killing process with pid 2115876 00:34:30.603 08:21:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@967 -- # kill 2115876 00:34:30.603 08:21:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # wait 2115876 00:34:30.863 08:21:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:34:30.863 08:21:22 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:34:30.863 08:21:22 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 2115876 ']' 00:34:30.863 08:21:22 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 2115876 00:34:30.863 08:21:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 2115876 ']' 00:34:30.863 08:21:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 2115876 00:34:30.863 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (2115876) - No such process 00:34:30.863 08:21:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@975 -- # echo 'Process with pid 2115876 is not found' 00:34:30.863 Process with pid 2115876 is not found 00:34:30.863 08:21:22 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:34:30.863 08:21:22 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:34:30.863 08:21:22 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:34:30.863 00:34:30.863 real 0m16.069s 00:34:30.863 user 0m34.063s 00:34:30.863 sys 0m0.828s 00:34:30.864 08:21:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:30.864 08:21:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:30.864 ************************************ 00:34:30.864 END TEST spdkcli_nvmf_tcp 00:34:30.864 ************************************ 00:34:30.864 08:21:22 -- common/autotest_common.sh@1142 -- # return 0 00:34:30.864 08:21:22 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:34:30.864 08:21:22 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:34:30.864 08:21:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:30.864 08:21:22 -- common/autotest_common.sh@10 -- # set +x 00:34:30.864 ************************************ 00:34:30.864 START TEST nvmf_identify_passthru 00:34:30.864 ************************************ 00:34:30.864 08:21:22 nvmf_identify_passthru -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:34:30.864 * Looking for test storage... 00:34:30.864 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:30.864 08:21:22 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:30.864 08:21:22 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:34:30.864 08:21:22 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:30.864 08:21:22 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:30.864 08:21:22 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:30.864 08:21:22 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:30.864 08:21:22 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:30.864 08:21:22 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:30.864 08:21:22 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:30.864 08:21:22 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:30.864 08:21:22 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:30.864 08:21:22 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:30.864 08:21:22 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:30.864 08:21:22 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:30.864 08:21:22 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:30.864 08:21:22 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:30.864 08:21:22 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:30.864 08:21:22 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:30.864 08:21:22 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:30.864 08:21:22 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:30.864 08:21:22 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:30.864 08:21:22 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:30.864 08:21:22 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:30.864 08:21:22 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:30.864 08:21:22 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:30.864 08:21:22 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:34:30.864 08:21:22 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:30.864 08:21:22 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:34:30.864 08:21:22 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:30.864 08:21:22 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:30.864 08:21:22 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:30.864 08:21:22 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:30.864 08:21:22 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:30.864 08:21:22 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:30.864 08:21:22 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:30.864 08:21:22 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:30.864 08:21:22 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:30.864 08:21:22 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:30.864 08:21:22 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:30.864 08:21:22 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:30.864 08:21:22 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:30.864 08:21:22 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:30.864 08:21:22 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:30.864 08:21:22 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:34:30.864 08:21:22 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:30.864 08:21:22 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:34:30.864 08:21:22 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:30.864 08:21:22 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:30.864 08:21:22 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:30.864 08:21:22 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:30.864 08:21:22 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:30.864 08:21:22 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:30.864 08:21:22 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:30.864 08:21:22 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:30.864 08:21:22 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:30.864 08:21:22 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:30.864 08:21:22 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:34:30.864 08:21:22 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:32.767 08:21:24 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:32.767 08:21:24 
nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:34:32.767 08:21:24 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:32.767 08:21:24 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:32.767 08:21:24 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:32.767 08:21:24 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:32.767 08:21:24 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:32.767 08:21:24 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:34:32.767 08:21:24 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:32.767 08:21:24 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:34:32.767 08:21:24 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:34:32.767 08:21:24 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:34:32.767 08:21:24 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:34:32.767 08:21:24 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:34:32.767 08:21:24 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:34:32.767 08:21:24 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:32.767 08:21:24 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:32.767 08:21:24 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:32.767 08:21:24 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:32.767 08:21:24 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:32.767 08:21:24 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:32.767 08:21:24 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:32.767 08:21:24 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:32.767 08:21:24 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:32.767 08:21:24 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:32.767 08:21:24 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:32.767 08:21:24 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:32.767 08:21:24 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:32.767 08:21:24 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:32.767 08:21:24 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:32.767 08:21:24 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:32.767 08:21:24 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:32.767 08:21:24 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:32.767 08:21:24 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:34:32.767 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:34:32.767 08:21:24 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:32.767 08:21:24 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:32.767 08:21:24 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:34:32.767 08:21:24 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:32.767 08:21:24 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:32.767 08:21:24 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:32.767 08:21:24 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:34:32.767 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:34:32.767 08:21:24 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:32.767 08:21:24 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:32.767 08:21:24 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:32.767 08:21:24 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:32.767 08:21:24 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:32.767 08:21:24 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:32.767 08:21:24 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:32.767 08:21:24 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:32.767 08:21:24 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:32.767 08:21:24 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:32.767 08:21:24 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:32.767 08:21:24 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:32.767 08:21:24 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:32.767 08:21:24 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:32.767 08:21:24 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:32.768 08:21:24 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:34:32.768 Found net devices under 0000:0a:00.0: cvl_0_0 00:34:32.768 08:21:24 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:32.768 08:21:24 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:32.768 08:21:24 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:32.768 08:21:24 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:32.768 08:21:24 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:32.768 08:21:24 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:32.768 08:21:24 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:32.768 08:21:24 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:32.768 08:21:24 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:34:32.768 Found net devices under 0000:0a:00.1: cvl_0_1 00:34:32.768 08:21:24 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:32.768 08:21:24 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:32.768 08:21:24 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:34:32.768 08:21:24 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:32.768 08:21:24 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
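The nvmf_tcp_init sequence traced out below condenses to the following: put the target-side E810 port into its own network namespace, address both ends of the 10.0.0.0/24 link, open the NVMe/TCP port in the firewall, and ping in both directions to prove connectivity. Interface and namespace names are the ones this run detected:

    ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                 # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator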
00:34:32.768 08:21:24 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:32.768 08:21:24 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:32.768 08:21:24 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:32.768 08:21:24 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:32.768 08:21:24 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:32.768 08:21:24 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:32.768 08:21:24 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:32.768 08:21:24 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:32.768 08:21:24 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:32.768 08:21:24 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:32.768 08:21:24 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:32.768 08:21:24 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:32.768 08:21:24 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:32.768 08:21:24 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:33.027 08:21:24 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:33.027 08:21:24 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:33.027 08:21:24 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:33.027 08:21:24 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:33.027 08:21:24 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:33.027 08:21:24 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:33.027 08:21:24 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:33.027 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:33.027 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.168 ms 00:34:33.027 00:34:33.027 --- 10.0.0.2 ping statistics --- 00:34:33.027 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:33.027 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:34:33.027 08:21:24 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:33.027 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:33.027 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.091 ms 00:34:33.027 00:34:33.027 --- 10.0.0.1 ping statistics --- 00:34:33.027 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:33.027 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:34:33.027 08:21:24 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:33.027 08:21:24 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:34:33.027 08:21:24 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:33.027 08:21:24 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:33.027 08:21:24 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:33.027 08:21:24 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:33.027 08:21:24 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:33.027 08:21:24 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:33.027 08:21:24 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:33.027 08:21:24 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:34:33.027 08:21:24 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:33.027 08:21:24 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:33.027 08:21:24 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:34:33.027 08:21:24 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:34:33.027 08:21:24 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:34:33.027 08:21:24 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:34:33.027 08:21:24 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:34:33.027 08:21:24 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=() 00:34:33.027 08:21:24 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:34:33.027 08:21:24 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:34:33.027 08:21:24 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:34:33.027 08:21:24 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:34:33.027 08:21:24 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:34:33.027 08:21:24 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:34:33.027 08:21:24 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:88:00.0 00:34:33.027 08:21:24 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:88:00.0 00:34:33.027 08:21:24 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:88:00.0 ']' 00:34:33.027 08:21:24 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:34:33.027 08:21:24 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:34:33.027 08:21:24 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:34:33.027 EAL: No free 2048 kB hugepages reported on node 1 00:34:37.215 
08:21:28 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=PHLJ916004901P0FGN 00:34:37.215 08:21:28 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:34:37.215 08:21:28 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:34:37.215 08:21:28 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:34:37.484 EAL: No free 2048 kB hugepages reported on node 1 00:34:41.670 08:21:33 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:34:41.670 08:21:33 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:34:41.670 08:21:33 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:41.670 08:21:33 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:41.670 08:21:33 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:34:41.670 08:21:33 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:41.670 08:21:33 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:41.670 08:21:33 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=2120993 00:34:41.670 08:21:33 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:34:41.670 08:21:33 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:41.670 08:21:33 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 2120993 00:34:41.670 08:21:33 nvmf_identify_passthru -- common/autotest_common.sh@829 -- # '[' -z 2120993 ']' 00:34:41.670 08:21:33 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:41.670 08:21:33 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:41.670 08:21:33 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:41.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:41.670 08:21:33 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:41.670 08:21:33 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:41.670 [2024-07-13 08:21:33.215382] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:34:41.671 [2024-07-13 08:21:33.215465] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:41.671 EAL: No free 2048 kB hugepages reported on node 1 00:34:41.671 [2024-07-13 08:21:33.280019] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:41.671 [2024-07-13 08:21:33.370307] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:41.671 [2024-07-13 08:21:33.370360] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:34:41.671 [2024-07-13 08:21:33.370374] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:41.671 [2024-07-13 08:21:33.370384] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:41.671 [2024-07-13 08:21:33.370394] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:41.671 [2024-07-13 08:21:33.370516] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:41.671 [2024-07-13 08:21:33.370971] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:34:41.671 [2024-07-13 08:21:33.371033] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:34:41.671 [2024-07-13 08:21:33.371036] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:41.927 08:21:33 nvmf_identify_passthru -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:41.927 08:21:33 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # return 0 00:34:41.927 08:21:33 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:34:41.927 08:21:33 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:41.927 08:21:33 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:41.927 INFO: Log level set to 20 00:34:41.927 INFO: Requests: 00:34:41.927 { 00:34:41.927 "jsonrpc": "2.0", 00:34:41.927 "method": "nvmf_set_config", 00:34:41.927 "id": 1, 00:34:41.927 "params": { 00:34:41.927 "admin_cmd_passthru": { 00:34:41.927 "identify_ctrlr": true 00:34:41.927 } 00:34:41.927 } 00:34:41.927 } 00:34:41.927 00:34:41.927 INFO: response: 00:34:41.927 { 00:34:41.927 "jsonrpc": "2.0", 00:34:41.927 "id": 1, 00:34:41.927 "result": true 00:34:41.927 } 00:34:41.927 00:34:41.927 08:21:33 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:41.927 08:21:33 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:34:41.927 08:21:33 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:41.927 08:21:33 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:41.927 INFO: Setting log level to 20 00:34:41.927 INFO: Setting log level to 20 00:34:41.927 INFO: Log level set to 20 00:34:41.927 INFO: Log level set to 20 00:34:41.927 INFO: Requests: 00:34:41.927 { 00:34:41.927 "jsonrpc": "2.0", 00:34:41.927 "method": "framework_start_init", 00:34:41.927 "id": 1 00:34:41.927 } 00:34:41.927 00:34:41.927 INFO: Requests: 00:34:41.927 { 00:34:41.927 "jsonrpc": "2.0", 00:34:41.927 "method": "framework_start_init", 00:34:41.927 "id": 1 00:34:41.927 } 00:34:41.927 00:34:41.927 [2024-07-13 08:21:33.538130] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:34:41.927 INFO: response: 00:34:41.927 { 00:34:41.927 "jsonrpc": "2.0", 00:34:41.927 "id": 1, 00:34:41.927 "result": true 00:34:41.927 } 00:34:41.927 00:34:41.927 INFO: response: 00:34:41.927 { 00:34:41.927 "jsonrpc": "2.0", 00:34:41.927 "id": 1, 00:34:41.927 "result": true 00:34:41.927 } 00:34:41.927 00:34:41.927 08:21:33 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:41.927 08:21:33 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:41.927 08:21:33 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:41.927 08:21:33 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:34:41.927 INFO: Setting log level to 40 00:34:41.927 INFO: Setting log level to 40 00:34:41.927 INFO: Setting log level to 40 00:34:41.927 [2024-07-13 08:21:33.548048] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:41.927 08:21:33 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:41.927 08:21:33 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:34:41.927 08:21:33 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:41.927 08:21:33 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:41.927 08:21:33 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 00:34:41.927 08:21:33 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:41.927 08:21:33 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:45.214 Nvme0n1 00:34:45.214 08:21:36 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:45.214 08:21:36 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:34:45.214 08:21:36 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:45.214 08:21:36 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:45.214 08:21:36 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:45.214 08:21:36 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:34:45.214 08:21:36 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:45.214 08:21:36 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:45.214 08:21:36 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:45.214 08:21:36 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:45.214 08:21:36 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:45.214 08:21:36 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:45.214 [2024-07-13 08:21:36.432129] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:45.214 08:21:36 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:45.214 08:21:36 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:34:45.214 08:21:36 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:45.214 08:21:36 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:45.214 [ 00:34:45.214 { 00:34:45.214 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:34:45.214 "subtype": "Discovery", 00:34:45.214 "listen_addresses": [], 00:34:45.214 "allow_any_host": true, 00:34:45.214 "hosts": [] 00:34:45.214 }, 00:34:45.214 { 00:34:45.214 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:34:45.214 "subtype": "NVMe", 00:34:45.214 "listen_addresses": [ 00:34:45.214 { 00:34:45.214 "trtype": "TCP", 00:34:45.214 "adrfam": "IPv4", 00:34:45.214 "traddr": "10.0.0.2", 00:34:45.214 "trsvcid": "4420" 00:34:45.214 } 00:34:45.214 ], 00:34:45.214 "allow_any_host": true, 00:34:45.214 "hosts": [], 00:34:45.214 "serial_number": 
"SPDK00000000000001", 00:34:45.214 "model_number": "SPDK bdev Controller", 00:34:45.214 "max_namespaces": 1, 00:34:45.214 "min_cntlid": 1, 00:34:45.214 "max_cntlid": 65519, 00:34:45.214 "namespaces": [ 00:34:45.214 { 00:34:45.214 "nsid": 1, 00:34:45.214 "bdev_name": "Nvme0n1", 00:34:45.214 "name": "Nvme0n1", 00:34:45.214 "nguid": "B1DA5604AEE6480B8503221ADE5C717F", 00:34:45.214 "uuid": "b1da5604-aee6-480b-8503-221ade5c717f" 00:34:45.214 } 00:34:45.214 ] 00:34:45.214 } 00:34:45.214 ] 00:34:45.214 08:21:36 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:45.214 08:21:36 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:34:45.214 08:21:36 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:34:45.214 08:21:36 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:34:45.214 EAL: No free 2048 kB hugepages reported on node 1 00:34:45.214 08:21:36 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ916004901P0FGN 00:34:45.214 08:21:36 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:34:45.214 08:21:36 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:34:45.214 08:21:36 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:34:45.214 EAL: No free 2048 kB hugepages reported on node 1 00:34:45.214 08:21:36 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:34:45.214 08:21:36 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLJ916004901P0FGN '!=' PHLJ916004901P0FGN ']' 00:34:45.214 08:21:36 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:34:45.215 08:21:36 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:45.215 08:21:36 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:45.215 08:21:36 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:45.215 08:21:36 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:45.215 08:21:36 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:34:45.215 08:21:36 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:34:45.215 08:21:36 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:45.215 08:21:36 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:34:45.215 08:21:36 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:45.215 08:21:36 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:34:45.215 08:21:36 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:45.215 08:21:36 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:45.215 rmmod nvme_tcp 00:34:45.215 rmmod nvme_fabrics 00:34:45.215 rmmod nvme_keyring 00:34:45.215 08:21:36 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:45.215 08:21:36 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:34:45.215 08:21:36 
nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:34:45.215 08:21:36 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 2120993 ']' 00:34:45.215 08:21:36 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 2120993 00:34:45.215 08:21:36 nvmf_identify_passthru -- common/autotest_common.sh@948 -- # '[' -z 2120993 ']' 00:34:45.215 08:21:36 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # kill -0 2120993 00:34:45.215 08:21:36 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # uname 00:34:45.215 08:21:36 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:45.215 08:21:36 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2120993 00:34:45.215 08:21:36 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:34:45.215 08:21:36 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:34:45.215 08:21:36 nvmf_identify_passthru -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2120993' 00:34:45.215 killing process with pid 2120993 00:34:45.215 08:21:36 nvmf_identify_passthru -- common/autotest_common.sh@967 -- # kill 2120993 00:34:45.215 08:21:36 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # wait 2120993 00:34:47.120 08:21:38 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:47.120 08:21:38 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:47.120 08:21:38 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:47.120 08:21:38 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:47.120 08:21:38 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:47.120 08:21:38 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:47.120 08:21:38 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:47.120 08:21:38 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:49.027 08:21:40 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:49.027 00:34:49.027 real 0m18.025s 00:34:49.027 user 0m26.608s 00:34:49.027 sys 0m2.344s 00:34:49.027 08:21:40 nvmf_identify_passthru -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:49.027 08:21:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:49.027 ************************************ 00:34:49.027 END TEST nvmf_identify_passthru 00:34:49.027 ************************************ 00:34:49.027 08:21:40 -- common/autotest_common.sh@1142 -- # return 0 00:34:49.027 08:21:40 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:34:49.027 08:21:40 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:34:49.027 08:21:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:49.027 08:21:40 -- common/autotest_common.sh@10 -- # set +x 00:34:49.027 ************************************ 00:34:49.027 START TEST nvmf_dif 00:34:49.027 ************************************ 00:34:49.027 08:21:40 nvmf_dif -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:34:49.027 * Looking for test storage... 
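The END TEST line above reports success because two string comparisons held: the serial and (first word of the) model read back over NVMe/TCP, with --passthru-identify-ctrlr enabled, equalled the values read locally over PCIe at the start of the test. A self-contained sketch of the over-fabric half of that check, with connection parameters taken from this run ($nvme_serial_number is assumed to hold the PCIe-side value, PHLJ916004901P0FGN here):

    # query the passthru target over TCP and compare against the locally captured serial
    IDENTIFY=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify
    TGT=' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
    nvmf_serial_number=$("$IDENTIFY" -r "$TGT" | grep 'Serial Number:' | awk '{print $3}')
    [ "$nvmf_serial_number" != "$nvme_serial_number" ] && { echo 'passthru identify mismatch'; exit 1; }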
00:34:49.027 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:49.027 08:21:40 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:49.027 08:21:40 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:34:49.027 08:21:40 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:49.027 08:21:40 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:49.027 08:21:40 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:49.027 08:21:40 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:49.027 08:21:40 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:49.027 08:21:40 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:49.027 08:21:40 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:49.027 08:21:40 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:49.027 08:21:40 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:49.027 08:21:40 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:49.027 08:21:40 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:49.027 08:21:40 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:49.027 08:21:40 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:49.027 08:21:40 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:49.027 08:21:40 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:49.027 08:21:40 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:49.027 08:21:40 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:49.027 08:21:40 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:49.027 08:21:40 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:49.027 08:21:40 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:49.027 08:21:40 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:49.027 08:21:40 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:49.027 08:21:40 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:49.027 08:21:40 nvmf_dif -- paths/export.sh@5 -- # 
export PATH 00:34:49.027 08:21:40 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:49.027 08:21:40 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:34:49.027 08:21:40 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:49.027 08:21:40 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:49.027 08:21:40 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:49.027 08:21:40 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:49.027 08:21:40 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:49.027 08:21:40 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:49.027 08:21:40 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:49.027 08:21:40 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:49.027 08:21:40 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:34:49.027 08:21:40 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:34:49.027 08:21:40 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:34:49.027 08:21:40 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:34:49.027 08:21:40 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:34:49.027 08:21:40 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:49.027 08:21:40 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:49.027 08:21:40 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:49.027 08:21:40 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:49.027 08:21:40 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:49.027 08:21:40 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:49.027 08:21:40 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:49.027 08:21:40 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:49.027 08:21:40 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:49.027 08:21:40 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:49.027 08:21:40 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:34:49.027 08:21:40 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:50.930 08:21:42 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:50.930 08:21:42 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:34:50.930 08:21:42 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:50.930 08:21:42 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:50.930 08:21:42 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:50.930 08:21:42 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:50.930 08:21:42 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:50.930 08:21:42 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:34:50.930 08:21:42 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:50.930 08:21:42 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:34:50.931 08:21:42 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:34:50.931 08:21:42 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:34:50.931 08:21:42 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:34:50.931 08:21:42 nvmf_dif -- nvmf/common.sh@298 
-- # mlx=() 00:34:50.931 08:21:42 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:34:50.931 08:21:42 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:50.931 08:21:42 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:50.931 08:21:42 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:50.931 08:21:42 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:50.931 08:21:42 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:50.931 08:21:42 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:50.931 08:21:42 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:50.931 08:21:42 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:50.931 08:21:42 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:50.931 08:21:42 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:50.931 08:21:42 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:50.931 08:21:42 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:50.931 08:21:42 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:50.931 08:21:42 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:50.931 08:21:42 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:50.931 08:21:42 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:50.931 08:21:42 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:50.931 08:21:42 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:50.931 08:21:42 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:34:50.931 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:34:50.931 08:21:42 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:50.931 08:21:42 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:50.931 08:21:42 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:50.931 08:21:42 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:50.931 08:21:42 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:50.931 08:21:42 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:50.931 08:21:42 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:34:50.931 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:34:50.931 08:21:42 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:50.931 08:21:42 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:50.931 08:21:42 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:50.931 08:21:42 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:50.931 08:21:42 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:50.931 08:21:42 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:50.931 08:21:42 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:50.931 08:21:42 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:50.931 08:21:42 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:50.931 08:21:42 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:50.931 08:21:42 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:50.931 08:21:42 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
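The scan in progress here matches PCI devices against known NIC IDs: Intel E810 (0x1592, 0x159b), X722 (0x37d2), and a set of Mellanox parts, then resolves each hit to its net device under /sys/bus/pci/devices/$pci/net. The script walks a cached PCI table, but a rough interactive equivalent for the E810 match is:

    # list Intel E810-family NICs by vendor:device ID (-D prints full DDDD:BB:DD.F addresses)
    for dev_id in 1592 159b; do
        lspci -D -d 8086:$dev_id
    done

On this host that corresponds to the two 0x159b ports found above at 0000:0a:00.0 and 0000:0a:00.1; the loop continuing below resolves each to its renamed net device (cvl_0_0, cvl_0_1).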
00:34:50.931 08:21:42 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:50.931 08:21:42 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:50.931 08:21:42 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:50.931 08:21:42 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:34:50.931 Found net devices under 0000:0a:00.0: cvl_0_0 00:34:50.931 08:21:42 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:50.931 08:21:42 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:50.931 08:21:42 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:50.931 08:21:42 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:50.931 08:21:42 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:50.931 08:21:42 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:50.931 08:21:42 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:50.931 08:21:42 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:50.931 08:21:42 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:34:50.931 Found net devices under 0000:0a:00.1: cvl_0_1 00:34:50.931 08:21:42 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:50.931 08:21:42 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:50.931 08:21:42 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:34:50.931 08:21:42 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:50.931 08:21:42 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:34:50.931 08:21:42 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:50.931 08:21:42 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:50.931 08:21:42 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:50.931 08:21:42 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:50.931 08:21:42 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:50.931 08:21:42 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:50.931 08:21:42 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:50.931 08:21:42 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:50.931 08:21:42 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:50.931 08:21:42 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:50.931 08:21:42 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:50.931 08:21:42 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:50.931 08:21:42 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:50.931 08:21:42 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:50.931 08:21:42 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:50.931 08:21:42 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:50.931 08:21:42 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:50.931 08:21:42 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:50.931 08:21:42 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:50.931 08:21:42 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:50.931 08:21:42 
nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:50.931 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:50.931 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.194 ms 00:34:50.931 00:34:50.931 --- 10.0.0.2 ping statistics --- 00:34:50.931 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:50.931 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:34:50.931 08:21:42 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:50.931 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:50.931 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.088 ms 00:34:50.931 00:34:50.931 --- 10.0.0.1 ping statistics --- 00:34:50.931 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:50.931 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:34:50.931 08:21:42 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:50.931 08:21:42 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:34:50.931 08:21:42 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:34:50.931 08:21:42 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:52.305 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:34:52.305 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:34:52.305 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:34:52.305 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:34:52.305 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:34:52.305 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:34:52.305 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:34:52.305 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:34:52.305 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:34:52.305 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:34:52.305 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:34:52.305 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:34:52.305 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:34:52.305 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:34:52.305 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:34:52.305 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:34:52.305 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:34:52.305 08:21:43 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:52.305 08:21:43 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:52.305 08:21:43 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:52.305 08:21:43 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:52.305 08:21:43 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:52.305 08:21:43 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:52.305 08:21:43 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:34:52.305 08:21:43 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:34:52.305 08:21:43 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:52.305 08:21:43 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:52.305 08:21:43 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:52.305 08:21:43 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=2124135 00:34:52.305 08:21:43 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:34:52.305 08:21:43 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 2124135 00:34:52.305 08:21:43 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 2124135 ']' 00:34:52.305 08:21:43 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:52.305 08:21:43 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:52.305 08:21:43 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:52.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:52.305 08:21:43 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:52.305 08:21:43 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:52.305 [2024-07-13 08:21:43.890313] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:34:52.305 [2024-07-13 08:21:43.890380] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:52.305 EAL: No free 2048 kB hugepages reported on node 1 00:34:52.305 [2024-07-13 08:21:43.956393] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:52.562 [2024-07-13 08:21:44.046247] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:52.562 [2024-07-13 08:21:44.046311] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:52.562 [2024-07-13 08:21:44.046327] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:52.562 [2024-07-13 08:21:44.046341] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:52.562 [2024-07-13 08:21:44.046352] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
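As in the passthru test, the target runs inside the cvl_0_0_ns_spdk namespace created earlier, and waitforlisten blocks until the app's RPC socket answers. A simplified sketch of that launch pattern, with the polling loop standing in for the real waitforlisten helper:

    # launch nvmf_tgt in the prepared namespace, then wait for its RPC socket
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF &
    nvmfpid=$!
    until scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done

Once the loop exits, rpc_cmd calls such as the nvmf_create_transport below can be issued against /var/tmp/spdk.sock.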
00:34:52.562 [2024-07-13 08:21:44.046383] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:52.562 08:21:44 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:52.562 08:21:44 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:34:52.562 08:21:44 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:52.563 08:21:44 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:52.563 08:21:44 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:52.563 08:21:44 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:52.563 08:21:44 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:34:52.563 08:21:44 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:34:52.563 08:21:44 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:52.563 08:21:44 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:52.563 [2024-07-13 08:21:44.190099] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:52.563 08:21:44 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:52.563 08:21:44 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:34:52.563 08:21:44 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:34:52.563 08:21:44 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:52.563 08:21:44 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:52.563 ************************************ 00:34:52.563 START TEST fio_dif_1_default 00:34:52.563 ************************************ 00:34:52.563 08:21:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:34:52.563 08:21:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:34:52.563 08:21:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:34:52.563 08:21:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:34:52.563 08:21:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:34:52.563 08:21:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:34:52.563 08:21:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:34:52.563 08:21:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:52.563 08:21:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:52.563 bdev_null0 00:34:52.563 08:21:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:52.563 08:21:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:52.563 08:21:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:52.563 08:21:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:52.563 08:21:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:52.563 08:21:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:52.563 08:21:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:52.563 08:21:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:52.563 08:21:44 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:52.563 08:21:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:52.563 08:21:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:52.563 08:21:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:52.563 [2024-07-13 08:21:44.250408] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:52.563 08:21:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:52.563 08:21:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:34:52.563 08:21:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:34:52.563 08:21:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:34:52.563 08:21:44 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:34:52.563 08:21:44 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:34:52.563 08:21:44 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:34:52.563 08:21:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:52.563 08:21:44 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:34:52.563 { 00:34:52.563 "params": { 00:34:52.563 "name": "Nvme$subsystem", 00:34:52.563 "trtype": "$TEST_TRANSPORT", 00:34:52.563 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:52.563 "adrfam": "ipv4", 00:34:52.563 "trsvcid": "$NVMF_PORT", 00:34:52.563 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:52.563 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:52.563 "hdgst": ${hdgst:-false}, 00:34:52.563 "ddgst": ${ddgst:-false} 00:34:52.563 }, 00:34:52.563 "method": "bdev_nvme_attach_controller" 00:34:52.563 } 00:34:52.563 EOF 00:34:52.563 )") 00:34:52.563 08:21:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:52.563 08:21:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:34:52.563 08:21:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:34:52.563 08:21:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:34:52.563 08:21:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:52.563 08:21:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:34:52.563 08:21:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:34:52.563 08:21:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:52.563 08:21:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:34:52.563 08:21:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:34:52.563 08:21:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:52.563 08:21:44 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:34:52.563 08:21:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:52.563 08:21:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:34:52.563 08:21:44 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:34:52.563 08:21:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:34:52.563 08:21:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:52.563 08:21:44 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:34:52.563 08:21:44 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:34:52.563 08:21:44 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:34:52.563 "params": { 00:34:52.563 "name": "Nvme0", 00:34:52.563 "trtype": "tcp", 00:34:52.563 "traddr": "10.0.0.2", 00:34:52.563 "adrfam": "ipv4", 00:34:52.563 "trsvcid": "4420", 00:34:52.563 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:52.563 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:52.563 "hdgst": false, 00:34:52.563 "ddgst": false 00:34:52.563 }, 00:34:52.563 "method": "bdev_nvme_attach_controller" 00:34:52.563 }' 00:34:52.563 08:21:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:52.563 08:21:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:52.563 08:21:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:34:52.563 08:21:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:52.563 08:21:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:34:52.563 08:21:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:34:52.821 08:21:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:34:52.821 08:21:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:34:52.821 08:21:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:52.821 08:21:44 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:52.821 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:52.821 fio-3.35 00:34:52.821 Starting 1 thread 00:34:52.821 EAL: No free 2048 kB hugepages reported on node 1 00:35:05.029 00:35:05.030 filename0: (groupid=0, jobs=1): err= 0: pid=2124363: Sat Jul 13 08:21:55 2024 00:35:05.030 read: IOPS=189, BW=758KiB/s (776kB/s)(7584KiB/10003msec) 00:35:05.030 slat (nsec): min=6867, max=61204, avg=8697.50, stdev=2814.01 00:35:05.030 clat (usec): min=769, max=46370, avg=21075.29, stdev=20122.01 00:35:05.030 lat (usec): min=778, max=46406, avg=21083.98, stdev=20121.85 00:35:05.030 clat percentiles (usec): 00:35:05.030 | 1.00th=[ 824], 5.00th=[ 848], 10.00th=[ 857], 20.00th=[ 865], 00:35:05.030 | 30.00th=[ 873], 40.00th=[ 889], 50.00th=[40633], 60.00th=[41157], 00:35:05.030 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:35:05.030 | 99.00th=[41157], 99.50th=[42206], 99.90th=[46400], 99.95th=[46400], 00:35:05.030 | 99.99th=[46400] 00:35:05.030 bw ( KiB/s): min= 672, max= 768, per=100.00%, avg=759.58, stdev=25.78, samples=19 00:35:05.030 iops : min= 168, max= 192, 
avg=189.89, stdev= 6.45, samples=19 00:35:05.030 lat (usec) : 1000=49.53% 00:35:05.030 lat (msec) : 2=0.26%, 50=50.21% 00:35:05.030 cpu : usr=89.77%, sys=9.96%, ctx=33, majf=0, minf=256 00:35:05.030 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:05.030 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:05.030 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:05.030 issued rwts: total=1896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:05.030 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:05.030 00:35:05.030 Run status group 0 (all jobs): 00:35:05.030 READ: bw=758KiB/s (776kB/s), 758KiB/s-758KiB/s (776kB/s-776kB/s), io=7584KiB (7766kB), run=10003-10003msec 00:35:05.030 08:21:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:35:05.030 08:21:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:35:05.030 08:21:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:35:05.030 08:21:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:05.030 08:21:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:35:05.030 08:21:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:05.030 08:21:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:05.030 08:21:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:05.030 08:21:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:05.030 08:21:55 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:05.030 08:21:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:05.030 08:21:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:05.030 08:21:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:05.030 00:35:05.030 real 0m11.199s 00:35:05.030 user 0m10.158s 00:35:05.030 sys 0m1.257s 00:35:05.030 08:21:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:05.030 08:21:55 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:05.030 ************************************ 00:35:05.030 END TEST fio_dif_1_default 00:35:05.030 ************************************ 00:35:05.030 08:21:55 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:35:05.030 08:21:55 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:35:05.030 08:21:55 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:35:05.030 08:21:55 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:05.030 08:21:55 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:05.030 ************************************ 00:35:05.030 START TEST fio_dif_1_multi_subsystems 00:35:05.030 ************************************ 00:35:05.030 08:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # fio_dif_1_multi_subsystems 00:35:05.030 08:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:35:05.030 08:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:35:05.030 08:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:35:05.030 08:21:55 nvmf_dif.fio_dif_1_multi_subsystems 
-- target/dif.sh@30 -- # for sub in "$@" 00:35:05.030 08:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:35:05.030 08:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:35:05.030 08:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:05.030 08:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:05.030 08:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:05.030 bdev_null0 00:35:05.030 08:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:05.030 08:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:05.030 08:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:05.030 08:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:05.030 08:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:05.030 08:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:05.030 08:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:05.030 08:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:05.030 08:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:05.030 08:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:05.030 08:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:05.030 08:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:05.030 [2024-07-13 08:21:55.502528] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:05.030 08:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:05.030 08:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:35:05.030 08:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:35:05.030 08:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:35:05.030 08:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:35:05.030 08:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:05.030 08:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:05.030 bdev_null1 00:35:05.030 08:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:05.030 08:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:05.030 08:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:05.030 08:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 
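The setup unfolding here is the fio_dif_1_default recipe applied once per index: a DIF-type-1 null bdev, a subsystem, a namespace, and a TCP listener, all on the same 10.0.0.2:4420 portal. Spelled out with rpc.py rather than the test's rpc_cmd wrapper, the loop amounts to (a sketch, not the literal script):

    for i in 0 1; do
        scripts/rpc.py bdev_null_create bdev_null$i 64 512 --md-size 16 --dif-type 1
        scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i \
            --serial-number 53313233-$i --allow-any-host
        scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i bdev_null$i
        scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i \
            -t tcp -a 10.0.0.2 -s 4420
    done

Two subsystems on one portal let the fio job below drive two filenames, one per controller, through the same --dif-insert-or-strip transport.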
00:35:05.031 08:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:05.031 08:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:05.031 08:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:05.031 08:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:05.031 08:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:05.031 08:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:05.031 08:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:05.031 08:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:05.031 08:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:05.031 08:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:35:05.031 08:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:35:05.031 08:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:35:05.031 08:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:35:05.031 08:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:35:05.031 08:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:05.031 08:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:05.031 08:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:05.031 { 00:35:05.031 "params": { 00:35:05.031 "name": "Nvme$subsystem", 00:35:05.031 "trtype": "$TEST_TRANSPORT", 00:35:05.031 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:05.031 "adrfam": "ipv4", 00:35:05.031 "trsvcid": "$NVMF_PORT", 00:35:05.031 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:05.031 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:05.031 "hdgst": ${hdgst:-false}, 00:35:05.031 "ddgst": ${ddgst:-false} 00:35:05.031 }, 00:35:05.031 "method": "bdev_nvme_attach_controller" 00:35:05.031 } 00:35:05.031 EOF 00:35:05.031 )") 00:35:05.031 08:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:05.031 08:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:35:05.031 08:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:35:05.031 08:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:35:05.031 08:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:05.031 08:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:35:05.031 08:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:35:05.031 08:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:05.031 08:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:35:05.031 08:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:35:05.031 08:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:05.031 08:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:35:05.031 08:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:05.031 08:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:35:05.031 08:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:35:05.031 08:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:35:05.031 08:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:35:05.031 08:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:05.031 08:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:05.031 08:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:05.031 { 00:35:05.031 "params": { 00:35:05.031 "name": "Nvme$subsystem", 00:35:05.031 "trtype": "$TEST_TRANSPORT", 00:35:05.031 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:05.031 "adrfam": "ipv4", 00:35:05.031 "trsvcid": "$NVMF_PORT", 00:35:05.031 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:05.031 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:05.031 "hdgst": ${hdgst:-false}, 00:35:05.031 "ddgst": ${ddgst:-false} 00:35:05.031 }, 00:35:05.031 "method": "bdev_nvme_attach_controller" 00:35:05.031 } 00:35:05.031 EOF 00:35:05.031 )") 00:35:05.031 08:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:35:05.031 08:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:35:05.031 08:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:35:05.031 08:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
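gen_nvmf_target_json emits one bdev_nvme_attach_controller stanza per subsystem index; the stanzas are joined with IFS=',' and passed through jq, producing the JSON document printed just below, which fio receives over an anonymous fd. The invocation pattern used by the fio_bdev wrapper here, with the fds wired up by the harness via process substitution:

    # run fio with the SPDK bdev engine: fd 62 = JSON target config, fd 61 = generated fio job
    LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
        /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61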
00:35:05.031 08:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:35:05.031 08:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:05.031 "params": { 00:35:05.031 "name": "Nvme0", 00:35:05.031 "trtype": "tcp", 00:35:05.031 "traddr": "10.0.0.2", 00:35:05.031 "adrfam": "ipv4", 00:35:05.031 "trsvcid": "4420", 00:35:05.031 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:05.031 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:05.031 "hdgst": false, 00:35:05.031 "ddgst": false 00:35:05.031 }, 00:35:05.031 "method": "bdev_nvme_attach_controller" 00:35:05.031 },{ 00:35:05.031 "params": { 00:35:05.031 "name": "Nvme1", 00:35:05.031 "trtype": "tcp", 00:35:05.031 "traddr": "10.0.0.2", 00:35:05.031 "adrfam": "ipv4", 00:35:05.031 "trsvcid": "4420", 00:35:05.031 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:05.031 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:05.031 "hdgst": false, 00:35:05.031 "ddgst": false 00:35:05.031 }, 00:35:05.031 "method": "bdev_nvme_attach_controller" 00:35:05.031 }' 00:35:05.032 08:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:05.032 08:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:05.032 08:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:05.032 08:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:05.032 08:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:35:05.032 08:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:05.032 08:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:05.032 08:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:05.032 08:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:05.032 08:21:55 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:05.032 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:05.032 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:05.032 fio-3.35 00:35:05.032 Starting 2 threads 00:35:05.032 EAL: No free 2048 kB hugepages reported on node 1 00:35:15.003 00:35:15.003 filename0: (groupid=0, jobs=1): err= 0: pid=2125760: Sat Jul 13 08:22:06 2024 00:35:15.003 read: IOPS=143, BW=572KiB/s (586kB/s)(5728KiB/10008msec) 00:35:15.003 slat (nsec): min=4908, max=93436, avg=10984.08, stdev=6380.88 00:35:15.003 clat (usec): min=717, max=46274, avg=27918.95, stdev=18886.72 00:35:15.003 lat (usec): min=725, max=46290, avg=27929.94, stdev=18887.15 00:35:15.003 clat percentiles (usec): 00:35:15.003 | 1.00th=[ 734], 5.00th=[ 742], 10.00th=[ 750], 20.00th=[ 807], 00:35:15.003 | 30.00th=[ 963], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:35:15.003 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:35:15.003 | 99.00th=[41681], 99.50th=[41681], 99.90th=[46400], 99.95th=[46400], 00:35:15.003 | 99.99th=[46400] 
00:35:15.003 bw ( KiB/s): min= 384, max= 768, per=59.46%, avg=571.20, stdev=185.22, samples=20 00:35:15.003 iops : min= 96, max= 192, avg=142.80, stdev=46.31, samples=20 00:35:15.003 lat (usec) : 750=9.08%, 1000=23.32% 00:35:15.003 lat (msec) : 2=0.28%, 50=67.32% 00:35:15.003 cpu : usr=96.56%, sys=3.17%, ctx=17, majf=0, minf=220 00:35:15.003 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:15.003 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:15.003 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:15.003 issued rwts: total=1432,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:15.003 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:15.003 filename1: (groupid=0, jobs=1): err= 0: pid=2125761: Sat Jul 13 08:22:06 2024 00:35:15.003 read: IOPS=97, BW=388KiB/s (398kB/s)(3888KiB/10013msec) 00:35:15.004 slat (nsec): min=4982, max=56168, avg=11779.45, stdev=5990.88 00:35:15.004 clat (usec): min=40805, max=46270, avg=41164.80, stdev=501.83 00:35:15.004 lat (usec): min=40813, max=46284, avg=41176.58, stdev=502.31 00:35:15.004 clat percentiles (usec): 00:35:15.004 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:35:15.004 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:35:15.004 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:35:15.004 | 99.00th=[42206], 99.50th=[42206], 99.90th=[46400], 99.95th=[46400], 00:35:15.004 | 99.99th=[46400] 00:35:15.004 bw ( KiB/s): min= 352, max= 416, per=40.30%, avg=387.20, stdev=14.31, samples=20 00:35:15.004 iops : min= 88, max= 104, avg=96.80, stdev= 3.58, samples=20 00:35:15.004 lat (msec) : 50=100.00% 00:35:15.004 cpu : usr=96.31%, sys=3.29%, ctx=51, majf=0, minf=135 00:35:15.004 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:15.004 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:15.004 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:15.004 issued rwts: total=972,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:15.004 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:15.004 00:35:15.004 Run status group 0 (all jobs): 00:35:15.004 READ: bw=960KiB/s (983kB/s), 388KiB/s-572KiB/s (398kB/s-586kB/s), io=9616KiB (9847kB), run=10008-10013msec 00:35:15.266 08:22:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:35:15.266 08:22:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:35:15.266 08:22:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:35:15.266 08:22:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:15.266 08:22:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:35:15.266 08:22:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:15.266 08:22:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:15.266 08:22:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:15.266 08:22:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:15.266 08:22:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:15.266 08:22:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:35:15.266 08:22:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:15.266 08:22:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:15.266 08:22:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:35:15.266 08:22:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:15.266 08:22:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:35:15.266 08:22:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:15.266 08:22:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:15.266 08:22:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:15.266 08:22:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:15.266 08:22:06 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:15.266 08:22:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:15.266 08:22:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:15.266 08:22:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:15.266 00:35:15.266 real 0m11.413s 00:35:15.266 user 0m20.775s 00:35:15.266 sys 0m0.934s 00:35:15.266 08:22:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:15.266 08:22:06 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:15.266 ************************************ 00:35:15.266 END TEST fio_dif_1_multi_subsystems 00:35:15.266 ************************************ 00:35:15.266 08:22:06 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:35:15.266 08:22:06 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:35:15.266 08:22:06 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:35:15.266 08:22:06 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:15.266 08:22:06 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:15.266 ************************************ 00:35:15.266 START TEST fio_dif_rand_params 00:35:15.266 ************************************ 00:35:15.266 08:22:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:35:15.266 08:22:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:35:15.266 08:22:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:35:15.266 08:22:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:35:15.266 08:22:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:35:15.266 08:22:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:35:15.266 08:22:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:35:15.266 08:22:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:35:15.266 08:22:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:35:15.266 08:22:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:15.266 08:22:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:15.266 08:22:06 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@31 -- # create_subsystem 0 00:35:15.266 08:22:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:35:15.266 08:22:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:35:15.266 08:22:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:15.266 08:22:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:15.266 bdev_null0 00:35:15.266 08:22:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:15.266 08:22:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:15.266 08:22:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:15.266 08:22:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:15.266 08:22:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:15.266 08:22:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:15.266 08:22:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:15.266 08:22:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:15.266 08:22:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:15.266 08:22:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:15.266 08:22:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:15.266 08:22:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:15.266 [2024-07-13 08:22:06.955278] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:15.266 08:22:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:15.266 08:22:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:35:15.266 08:22:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:35:15.266 08:22:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:15.266 08:22:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:35:15.266 08:22:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:35:15.266 08:22:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:15.266 08:22:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:15.266 08:22:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:15.266 { 00:35:15.266 "params": { 00:35:15.266 "name": "Nvme$subsystem", 00:35:15.266 "trtype": "$TEST_TRANSPORT", 00:35:15.266 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:15.266 "adrfam": "ipv4", 00:35:15.266 "trsvcid": "$NVMF_PORT", 00:35:15.266 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:15.266 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:15.266 "hdgst": ${hdgst:-false}, 00:35:15.266 "ddgst": ${ddgst:-false} 00:35:15.266 }, 00:35:15.266 "method": "bdev_nvme_attach_controller" 00:35:15.266 } 00:35:15.266 EOF 00:35:15.266 )") 00:35:15.266 08:22:06 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:35:15.266 08:22:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:15.266 08:22:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:15.266 08:22:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:35:15.266 08:22:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:15.266 08:22:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:15.266 08:22:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:35:15.266 08:22:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:15.266 08:22:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:35:15.266 08:22:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:35:15.266 08:22:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:15.266 08:22:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:15.266 08:22:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:15.266 08:22:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:15.266 08:22:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:15.266 08:22:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:35:15.266 08:22:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:15.266 08:22:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
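The fio_plugin wrapper traced above probes the external ioengine with ldd, greps for a libasan (or libclang_rt.asan) runtime in column 3 of the "lib => /path" output, and LD_PRELOADs whatever it finds ahead of the plugin before exec'ing fio. A simplified by-hand launch under the same NULL_DIF=3 parameters (bs=128k, iodepth=3, numjobs=3, runtime=5, rw=randread) might look like the sketch below; the plugin path is this workspace's, and the Nvme0n1 filename assumes the bdev name SPDK derives from the attached "Nvme0" controller's first namespace.

  plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
  # Pick up the ASan runtime only if the plugin was built against it;
  # otherwise asan_lib stays empty and only the plugin is preloaded.
  asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
  LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
    --name=filename0 --filename=Nvme0n1 \
    --ioengine=spdk_bdev --spdk_json_conf=nvme.json \
    --rw=randread --bs=128k --iodepth=3 --numjobs=3 --runtime=5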
00:35:15.266 08:22:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:35:15.266 08:22:06 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:15.266 "params": { 00:35:15.266 "name": "Nvme0", 00:35:15.266 "trtype": "tcp", 00:35:15.266 "traddr": "10.0.0.2", 00:35:15.266 "adrfam": "ipv4", 00:35:15.266 "trsvcid": "4420", 00:35:15.266 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:15.266 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:15.266 "hdgst": false, 00:35:15.266 "ddgst": false 00:35:15.266 }, 00:35:15.266 "method": "bdev_nvme_attach_controller" 00:35:15.266 }' 00:35:15.266 08:22:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:15.266 08:22:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:15.266 08:22:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:15.266 08:22:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:15.266 08:22:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:35:15.266 08:22:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:15.524 08:22:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:15.524 08:22:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:15.524 08:22:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:15.524 08:22:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:15.524 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:35:15.525 ... 
00:35:15.525 fio-3.35 00:35:15.525 Starting 3 threads 00:35:15.525 EAL: No free 2048 kB hugepages reported on node 1 00:35:22.082 00:35:22.082 filename0: (groupid=0, jobs=1): err= 0: pid=2127158: Sat Jul 13 08:22:12 2024 00:35:22.082 read: IOPS=189, BW=23.7MiB/s (24.8MB/s)(118MiB/5003msec) 00:35:22.082 slat (nsec): min=7076, max=43358, avg=14303.36, stdev=4993.69 00:35:22.082 clat (usec): min=5279, max=56694, avg=15829.18, stdev=14450.05 00:35:22.082 lat (usec): min=5291, max=56730, avg=15843.49, stdev=14450.11 00:35:22.082 clat percentiles (usec): 00:35:22.082 | 1.00th=[ 5735], 5.00th=[ 6325], 10.00th=[ 7177], 20.00th=[ 8356], 00:35:22.082 | 30.00th=[ 8979], 40.00th=[ 9634], 50.00th=[10814], 60.00th=[11731], 00:35:22.082 | 70.00th=[12387], 80.00th=[13042], 90.00th=[50070], 95.00th=[52167], 00:35:22.082 | 99.00th=[53740], 99.50th=[55313], 99.90th=[56886], 99.95th=[56886], 00:35:22.082 | 99.99th=[56886] 00:35:22.082 bw ( KiB/s): min=15616, max=35584, per=30.78%, avg=24166.40, stdev=6496.78, samples=10 00:35:22.082 iops : min= 122, max= 278, avg=188.80, stdev=50.76, samples=10 00:35:22.082 lat (msec) : 10=42.45%, 20=43.61%, 50=3.48%, 100=10.45% 00:35:22.082 cpu : usr=92.58%, sys=6.86%, ctx=7, majf=0, minf=109 00:35:22.082 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:22.082 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:22.082 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:22.082 issued rwts: total=947,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:22.082 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:22.082 filename0: (groupid=0, jobs=1): err= 0: pid=2127159: Sat Jul 13 08:22:12 2024 00:35:22.082 read: IOPS=178, BW=22.3MiB/s (23.4MB/s)(113MiB/5045msec) 00:35:22.082 slat (nsec): min=7116, max=84630, avg=14524.16, stdev=5301.21 00:35:22.082 clat (usec): min=5270, max=93985, avg=16730.97, stdev=15017.57 00:35:22.082 lat (usec): min=5282, max=94006, avg=16745.49, stdev=15017.91 00:35:22.082 clat percentiles (usec): 00:35:22.082 | 1.00th=[ 5669], 5.00th=[ 6325], 10.00th=[ 7570], 20.00th=[ 8717], 00:35:22.082 | 30.00th=[ 9372], 40.00th=[10159], 50.00th=[11600], 60.00th=[12649], 00:35:22.082 | 70.00th=[13435], 80.00th=[14615], 90.00th=[50070], 95.00th=[52691], 00:35:22.082 | 99.00th=[54789], 99.50th=[56886], 99.90th=[93848], 99.95th=[93848], 00:35:22.082 | 99.99th=[93848] 00:35:22.082 bw ( KiB/s): min=10752, max=33024, per=29.29%, avg=22994.10, stdev=6215.59, samples=10 00:35:22.082 iops : min= 84, max= 258, avg=179.60, stdev=48.53, samples=10 00:35:22.082 lat (msec) : 10=38.73%, 20=47.06%, 50=3.44%, 100=10.77% 00:35:22.082 cpu : usr=92.47%, sys=7.00%, ctx=15, majf=0, minf=179 00:35:22.083 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:22.083 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:22.083 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:22.083 issued rwts: total=901,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:22.083 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:22.083 filename0: (groupid=0, jobs=1): err= 0: pid=2127160: Sat Jul 13 08:22:12 2024 00:35:22.083 read: IOPS=246, BW=30.9MiB/s (32.4MB/s)(156MiB/5045msec) 00:35:22.083 slat (nsec): min=4912, max=46674, avg=15407.40, stdev=5126.98 00:35:22.083 clat (usec): min=5044, max=91657, avg=12089.59, stdev=10293.60 00:35:22.083 lat (usec): min=5056, max=91667, avg=12104.99, stdev=10294.04 00:35:22.083 clat percentiles (usec): 
00:35:22.083 | 1.00th=[ 5604], 5.00th=[ 5997], 10.00th=[ 6259], 20.00th=[ 7635], 00:35:22.083 | 30.00th=[ 8455], 40.00th=[ 8848], 50.00th=[ 9372], 60.00th=[10290], 00:35:22.083 | 70.00th=[11600], 80.00th=[12649], 90.00th=[14877], 95.00th=[47449], 00:35:22.083 | 99.00th=[53216], 99.50th=[54264], 99.90th=[91751], 99.95th=[91751], 00:35:22.083 | 99.99th=[91751] 00:35:22.083 bw ( KiB/s): min=20992, max=40448, per=40.58%, avg=31853.50, stdev=6537.40, samples=10 00:35:22.083 iops : min= 164, max= 316, avg=248.80, stdev=51.04, samples=10 00:35:22.083 lat (msec) : 10=57.46%, 20=37.08%, 50=2.25%, 100=3.21% 00:35:22.083 cpu : usr=91.99%, sys=7.51%, ctx=20, majf=0, minf=116 00:35:22.083 IO depths : 1=1.7%, 2=98.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:22.083 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:22.083 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:22.083 issued rwts: total=1246,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:22.083 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:22.083 00:35:22.083 Run status group 0 (all jobs): 00:35:22.083 READ: bw=76.7MiB/s (80.4MB/s), 22.3MiB/s-30.9MiB/s (23.4MB/s-32.4MB/s), io=387MiB (406MB), run=5003-5045msec 00:35:22.083 08:22:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:35:22.083 08:22:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:35:22.083 08:22:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:22.083 08:22:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:22.083 08:22:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:35:22.083 08:22:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:22.083 08:22:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:22.083 08:22:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:22.083 08:22:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:22.083 08:22:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:22.083 08:22:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:22.083 08:22:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:22.083 08:22:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:22.083 08:22:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:35:22.083 08:22:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:35:22.083 08:22:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:35:22.083 08:22:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:35:22.083 08:22:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:35:22.083 08:22:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:35:22.083 08:22:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:35:22.083 08:22:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:22.083 08:22:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:22.083 08:22:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:35:22.083 08:22:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 
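Each create_subsystem pass that follows replays the same four RPCs against the running target: create a null bdev with 16-byte metadata and the requested DIF type, create the NVMe-oF subsystem, attach the bdev as a namespace, and open the TCP listener. rpc_cmd in the harness drives scripts/rpc.py, so a by-hand equivalent of the pass traced below (dif-type 2, subsystem 0) is:

  scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
      --serial-number 53313233-0 --allow-any-host
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
      -t tcp -a 10.0.0.2 -s 4420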
00:35:22.083 08:22:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:35:22.083 08:22:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:22.083 08:22:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:22.083 bdev_null0 00:35:22.083 08:22:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:22.083 08:22:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:22.083 08:22:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:22.083 08:22:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:22.083 08:22:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:22.083 08:22:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:22.083 08:22:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:22.083 08:22:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:22.083 08:22:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:22.083 08:22:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:22.083 08:22:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:22.083 08:22:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:22.083 [2024-07-13 08:22:13.129419] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:22.083 08:22:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:22.083 08:22:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:22.083 08:22:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:35:22.083 08:22:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:35:22.083 08:22:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:35:22.083 08:22:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:22.083 08:22:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:22.083 bdev_null1 00:35:22.083 08:22:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:22.083 08:22:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:22.083 08:22:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:22.083 08:22:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:22.083 08:22:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:22.083 08:22:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:22.083 08:22:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:22.083 08:22:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 
-- # set +x 00:35:22.083 08:22:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:22.083 08:22:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:22.083 08:22:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:22.083 08:22:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:22.083 08:22:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:22.083 08:22:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:22.083 08:22:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:35:22.083 08:22:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:35:22.083 08:22:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:35:22.083 08:22:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:22.083 08:22:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:22.083 bdev_null2 00:35:22.083 08:22:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:22.083 08:22:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:35:22.083 08:22:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:22.083 08:22:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:22.083 08:22:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:22.083 08:22:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:35:22.083 08:22:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:22.083 08:22:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:22.083 08:22:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:22.083 08:22:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:35:22.083 08:22:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:22.083 08:22:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:22.083 08:22:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:22.083 08:22:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:35:22.083 08:22:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:35:22.083 08:22:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:35:22.083 08:22:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:35:22.083 08:22:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:35:22.083 08:22:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:22.083 08:22:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:22.083 08:22:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat 
<<-EOF 00:35:22.083 { 00:35:22.083 "params": { 00:35:22.083 "name": "Nvme$subsystem", 00:35:22.083 "trtype": "$TEST_TRANSPORT", 00:35:22.083 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:22.083 "adrfam": "ipv4", 00:35:22.083 "trsvcid": "$NVMF_PORT", 00:35:22.083 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:22.083 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:22.083 "hdgst": ${hdgst:-false}, 00:35:22.083 "ddgst": ${ddgst:-false} 00:35:22.083 }, 00:35:22.083 "method": "bdev_nvme_attach_controller" 00:35:22.083 } 00:35:22.083 EOF 00:35:22.083 )") 00:35:22.083 08:22:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:35:22.083 08:22:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:22.083 08:22:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:22.083 08:22:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:35:22.083 08:22:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:22.083 08:22:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:22.083 08:22:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:35:22.083 08:22:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:22.083 08:22:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:35:22.083 08:22:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:35:22.083 08:22:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:22.083 08:22:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:22.083 08:22:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:22.083 08:22:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:22.083 08:22:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:22.083 08:22:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:35:22.083 08:22:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:35:22.083 08:22:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:22.083 08:22:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:22.083 08:22:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:22.083 { 00:35:22.083 "params": { 00:35:22.083 "name": "Nvme$subsystem", 00:35:22.083 "trtype": "$TEST_TRANSPORT", 00:35:22.083 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:22.083 "adrfam": "ipv4", 00:35:22.083 "trsvcid": "$NVMF_PORT", 00:35:22.083 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:22.083 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:22.083 "hdgst": ${hdgst:-false}, 00:35:22.083 "ddgst": ${ddgst:-false} 00:35:22.083 }, 00:35:22.083 "method": "bdev_nvme_attach_controller" 00:35:22.083 } 00:35:22.083 EOF 00:35:22.083 )") 00:35:22.083 08:22:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:22.083 08:22:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( 
file++ )) 00:35:22.083 08:22:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:22.083 08:22:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:35:22.083 08:22:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:22.083 08:22:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:35:22.083 08:22:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:22.083 08:22:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:22.083 { 00:35:22.083 "params": { 00:35:22.083 "name": "Nvme$subsystem", 00:35:22.083 "trtype": "$TEST_TRANSPORT", 00:35:22.083 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:22.083 "adrfam": "ipv4", 00:35:22.083 "trsvcid": "$NVMF_PORT", 00:35:22.083 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:22.083 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:22.083 "hdgst": ${hdgst:-false}, 00:35:22.083 "ddgst": ${ddgst:-false} 00:35:22.083 }, 00:35:22.083 "method": "bdev_nvme_attach_controller" 00:35:22.083 } 00:35:22.083 EOF 00:35:22.083 )") 00:35:22.083 08:22:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:22.083 08:22:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:35:22.083 08:22:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:35:22.083 08:22:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:22.083 "params": { 00:35:22.083 "name": "Nvme0", 00:35:22.083 "trtype": "tcp", 00:35:22.083 "traddr": "10.0.0.2", 00:35:22.083 "adrfam": "ipv4", 00:35:22.083 "trsvcid": "4420", 00:35:22.083 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:22.083 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:22.083 "hdgst": false, 00:35:22.083 "ddgst": false 00:35:22.083 }, 00:35:22.083 "method": "bdev_nvme_attach_controller" 00:35:22.083 },{ 00:35:22.083 "params": { 00:35:22.083 "name": "Nvme1", 00:35:22.083 "trtype": "tcp", 00:35:22.083 "traddr": "10.0.0.2", 00:35:22.083 "adrfam": "ipv4", 00:35:22.083 "trsvcid": "4420", 00:35:22.083 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:22.083 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:22.083 "hdgst": false, 00:35:22.083 "ddgst": false 00:35:22.083 }, 00:35:22.083 "method": "bdev_nvme_attach_controller" 00:35:22.083 },{ 00:35:22.083 "params": { 00:35:22.083 "name": "Nvme2", 00:35:22.083 "trtype": "tcp", 00:35:22.083 "traddr": "10.0.0.2", 00:35:22.083 "adrfam": "ipv4", 00:35:22.083 "trsvcid": "4420", 00:35:22.083 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:35:22.083 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:35:22.083 "hdgst": false, 00:35:22.084 "ddgst": false 00:35:22.084 }, 00:35:22.084 "method": "bdev_nvme_attach_controller" 00:35:22.084 }' 00:35:22.084 08:22:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:22.084 08:22:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:22.084 08:22:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:22.084 08:22:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:22.084 08:22:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:35:22.084 08:22:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:22.084 08:22:13 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1345 -- # asan_lib= 00:35:22.084 08:22:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:22.084 08:22:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:22.084 08:22:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:22.084 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:22.084 ... 00:35:22.084 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:22.084 ... 00:35:22.084 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:22.084 ... 00:35:22.084 fio-3.35 00:35:22.084 Starting 24 threads 00:35:22.084 EAL: No free 2048 kB hugepages reported on node 1 00:35:34.281 00:35:34.281 filename0: (groupid=0, jobs=1): err= 0: pid=2128024: Sat Jul 13 08:22:24 2024 00:35:34.281 read: IOPS=307, BW=1229KiB/s (1258kB/s)(12.1MiB/10052msec) 00:35:34.281 slat (usec): min=8, max=100, avg=20.78, stdev=17.21 00:35:34.281 clat (msec): min=10, max=276, avg=51.90, stdev=59.03 00:35:34.281 lat (msec): min=10, max=276, avg=51.92, stdev=59.03 00:35:34.281 clat percentiles (msec): 00:35:34.281 | 1.00th=[ 16], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:35:34.281 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:35:34.281 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 36], 95.00th=[ 239], 00:35:34.281 | 99.00th=[ 264], 99.50th=[ 275], 99.90th=[ 275], 99.95th=[ 275], 00:35:34.281 | 99.99th=[ 275] 00:35:34.281 bw ( KiB/s): min= 256, max= 2048, per=4.29%, avg=1228.80, stdev=815.27, samples=20 00:35:34.281 iops : min= 64, max= 512, avg=307.20, stdev=203.82, samples=20 00:35:34.281 lat (msec) : 20=1.04%, 50=89.64%, 250=6.22%, 500=3.11% 00:35:34.281 cpu : usr=97.58%, sys=1.94%, ctx=20, majf=0, minf=9 00:35:34.281 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:34.281 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:34.281 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:34.281 issued rwts: total=3088,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:34.281 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:34.281 filename0: (groupid=0, jobs=1): err= 0: pid=2128025: Sat Jul 13 08:22:24 2024 00:35:34.281 read: IOPS=294, BW=1177KiB/s (1205kB/s)(11.6MiB/10110msec) 00:35:34.281 slat (usec): min=8, max=194, avg=26.50, stdev=21.76 00:35:34.281 clat (msec): min=20, max=441, avg=54.12, stdev=78.20 00:35:34.281 lat (msec): min=20, max=441, avg=54.15, stdev=78.20 00:35:34.281 clat percentiles (msec): 00:35:34.281 | 1.00th=[ 32], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:35:34.281 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:35:34.281 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 300], 00:35:34.281 | 99.00th=[ 397], 99.50th=[ 409], 99.90th=[ 443], 99.95th=[ 443], 00:35:34.281 | 99.99th=[ 443] 00:35:34.281 bw ( KiB/s): min= 128, max= 1920, per=4.13%, avg=1183.20, stdev=844.54, samples=20 00:35:34.281 iops : min= 32, max= 480, avg=295.80, stdev=211.14, samples=20 00:35:34.281 lat (msec) : 50=93.01%, 250=0.54%, 500=6.46% 00:35:34.281 cpu : usr=98.17%, sys=1.42%, ctx=17, majf=0, minf=9 00:35:34.281 IO depths : 1=1.0%, 
2=7.3%, 4=25.0%, 8=55.2%, 16=11.5%, 32=0.0%, >=64=0.0% 00:35:34.281 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:34.281 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:34.281 issued rwts: total=2974,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:34.281 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:34.281 filename0: (groupid=0, jobs=1): err= 0: pid=2128026: Sat Jul 13 08:22:24 2024 00:35:34.281 read: IOPS=300, BW=1204KiB/s (1232kB/s)(11.9MiB/10103msec) 00:35:34.281 slat (usec): min=8, max=157, avg=38.93, stdev=29.92 00:35:34.281 clat (msec): min=30, max=387, avg=52.68, stdev=63.20 00:35:34.281 lat (msec): min=30, max=387, avg=52.71, stdev=63.20 00:35:34.281 clat percentiles (msec): 00:35:34.281 | 1.00th=[ 32], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:35:34.281 | 30.00th=[ 33], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:35:34.281 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 37], 95.00th=[ 236], 00:35:34.281 | 99.00th=[ 305], 99.50th=[ 359], 99.90th=[ 388], 99.95th=[ 388], 00:35:34.281 | 99.99th=[ 388] 00:35:34.281 bw ( KiB/s): min= 240, max= 1920, per=4.22%, avg=1209.60, stdev=811.13, samples=20 00:35:34.281 iops : min= 60, max= 480, avg=302.40, stdev=202.78, samples=20 00:35:34.281 lat (msec) : 50=91.05%, 250=5.00%, 500=3.95% 00:35:34.281 cpu : usr=98.14%, sys=1.47%, ctx=13, majf=0, minf=9 00:35:34.281 IO depths : 1=5.8%, 2=12.1%, 4=25.0%, 8=50.4%, 16=6.7%, 32=0.0%, >=64=0.0% 00:35:34.281 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:34.281 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:34.281 issued rwts: total=3040,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:34.281 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:34.281 filename0: (groupid=0, jobs=1): err= 0: pid=2128027: Sat Jul 13 08:22:24 2024 00:35:34.281 read: IOPS=290, BW=1164KiB/s (1192kB/s)(11.5MiB/10083msec) 00:35:34.281 slat (usec): min=8, max=106, avg=38.38, stdev=24.70 00:35:34.281 clat (msec): min=25, max=503, avg=54.45, stdev=76.27 00:35:34.281 lat (msec): min=25, max=503, avg=54.49, stdev=76.26 00:35:34.281 clat percentiles (msec): 00:35:34.281 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:35:34.281 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:35:34.281 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 36], 95.00th=[ 300], 00:35:34.281 | 99.00th=[ 397], 99.50th=[ 397], 99.90th=[ 468], 99.95th=[ 506], 00:35:34.281 | 99.99th=[ 506] 00:35:34.281 bw ( KiB/s): min= 128, max= 1968, per=4.09%, avg=1172.80, stdev=828.14, samples=20 00:35:34.281 iops : min= 32, max= 492, avg=293.20, stdev=207.03, samples=20 00:35:34.281 lat (msec) : 50=92.16%, 100=0.55%, 250=1.57%, 500=5.66%, 750=0.07% 00:35:34.281 cpu : usr=98.10%, sys=1.49%, ctx=18, majf=0, minf=9 00:35:34.281 IO depths : 1=0.2%, 2=1.4%, 4=4.6%, 8=76.6%, 16=17.2%, 32=0.0%, >=64=0.0% 00:35:34.281 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:34.281 complete : 0=0.0%, 4=90.3%, 8=8.6%, 16=1.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:34.281 issued rwts: total=2934,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:34.281 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:34.281 filename0: (groupid=0, jobs=1): err= 0: pid=2128028: Sat Jul 13 08:22:24 2024 00:35:34.281 read: IOPS=293, BW=1173KiB/s (1201kB/s)(11.6MiB/10097msec) 00:35:34.281 slat (nsec): min=8374, max=82529, avg=30140.57, stdev=11895.05 00:35:34.281 clat (msec): min=31, max=474, avg=54.32, stdev=75.91 
00:35:34.281 lat (msec): min=31, max=474, avg=54.35, stdev=75.91 00:35:34.281 clat percentiles (msec): 00:35:34.281 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:35:34.281 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:35:34.281 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 305], 00:35:34.281 | 99.00th=[ 393], 99.50th=[ 397], 99.90th=[ 397], 99.95th=[ 477], 00:35:34.281 | 99.99th=[ 477] 00:35:34.281 bw ( KiB/s): min= 128, max= 1920, per=4.11%, avg=1177.60, stdev=839.34, samples=20 00:35:34.281 iops : min= 32, max= 480, avg=294.40, stdev=209.84, samples=20 00:35:34.281 lat (msec) : 50=91.89%, 100=0.54%, 250=1.69%, 500=5.88% 00:35:34.281 cpu : usr=97.75%, sys=1.66%, ctx=70, majf=0, minf=9 00:35:34.281 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:34.281 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:34.281 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:34.281 issued rwts: total=2960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:34.281 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:34.281 filename0: (groupid=0, jobs=1): err= 0: pid=2128029: Sat Jul 13 08:22:24 2024 00:35:34.281 read: IOPS=303, BW=1214KiB/s (1243kB/s)(12.0MiB/10125msec) 00:35:34.281 slat (usec): min=7, max=109, avg=68.31, stdev=20.56 00:35:34.281 clat (msec): min=19, max=330, avg=51.78, stdev=59.02 00:35:34.281 lat (msec): min=19, max=330, avg=51.85, stdev=59.00 00:35:34.281 clat percentiles (msec): 00:35:34.281 | 1.00th=[ 32], 5.00th=[ 32], 10.00th=[ 33], 20.00th=[ 33], 00:35:34.281 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 34], 00:35:34.281 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 37], 95.00th=[ 236], 00:35:34.281 | 99.00th=[ 257], 99.50th=[ 264], 99.90th=[ 264], 99.95th=[ 330], 00:35:34.281 | 99.99th=[ 330] 00:35:34.281 bw ( KiB/s): min= 256, max= 1920, per=4.26%, avg=1222.40, stdev=808.87, samples=20 00:35:34.281 iops : min= 64, max= 480, avg=305.60, stdev=202.22, samples=20 00:35:34.281 lat (msec) : 20=0.07%, 50=90.56%, 250=6.38%, 500=2.99% 00:35:34.281 cpu : usr=98.13%, sys=1.43%, ctx=17, majf=0, minf=9 00:35:34.281 IO depths : 1=5.8%, 2=12.0%, 4=25.0%, 8=50.5%, 16=6.7%, 32=0.0%, >=64=0.0% 00:35:34.281 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:34.281 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:34.282 issued rwts: total=3072,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:34.282 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:34.282 filename0: (groupid=0, jobs=1): err= 0: pid=2128030: Sat Jul 13 08:22:24 2024 00:35:34.282 read: IOPS=294, BW=1177KiB/s (1205kB/s)(11.5MiB/10009msec) 00:35:34.282 slat (nsec): min=8389, max=80953, avg=34875.59, stdev=11516.76 00:35:34.282 clat (msec): min=25, max=504, avg=54.09, stdev=76.78 00:35:34.282 lat (msec): min=25, max=504, avg=54.12, stdev=76.77 00:35:34.282 clat percentiles (msec): 00:35:34.282 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:35:34.282 | 30.00th=[ 33], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:35:34.282 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 305], 00:35:34.282 | 99.00th=[ 397], 99.50th=[ 397], 99.90th=[ 397], 99.95th=[ 506], 00:35:34.282 | 99.99th=[ 506] 00:35:34.282 bw ( KiB/s): min= 112, max= 1920, per=4.09%, avg=1171.20, stdev=847.32, samples=20 00:35:34.282 iops : min= 28, max= 480, avg=292.80, stdev=211.83, samples=20 00:35:34.282 lat (msec) : 50=92.39%, 100=0.54%, 
250=1.15%, 500=5.84%, 750=0.07% 00:35:34.282 cpu : usr=97.70%, sys=1.63%, ctx=123, majf=0, minf=9 00:35:34.282 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:34.282 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:34.282 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:34.282 issued rwts: total=2944,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:34.282 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:34.282 filename0: (groupid=0, jobs=1): err= 0: pid=2128031: Sat Jul 13 08:22:24 2024 00:35:34.282 read: IOPS=307, BW=1229KiB/s (1258kB/s)(12.1MiB/10051msec) 00:35:34.282 slat (usec): min=8, max=125, avg=41.48, stdev=20.90 00:35:34.282 clat (msec): min=11, max=276, avg=51.71, stdev=59.10 00:35:34.282 lat (msec): min=11, max=276, avg=51.75, stdev=59.09 00:35:34.282 clat percentiles (msec): 00:35:34.282 | 1.00th=[ 14], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:35:34.282 | 30.00th=[ 33], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:35:34.282 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 36], 95.00th=[ 236], 00:35:34.282 | 99.00th=[ 264], 99.50th=[ 275], 99.90th=[ 275], 99.95th=[ 275], 00:35:34.282 | 99.99th=[ 275] 00:35:34.282 bw ( KiB/s): min= 256, max= 2052, per=4.29%, avg=1229.00, stdev=815.49, samples=20 00:35:34.282 iops : min= 64, max= 513, avg=307.25, stdev=203.87, samples=20 00:35:34.282 lat (msec) : 20=1.04%, 50=89.64%, 250=6.22%, 500=3.11% 00:35:34.282 cpu : usr=94.50%, sys=3.00%, ctx=264, majf=0, minf=9 00:35:34.282 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:34.282 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:34.282 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:34.282 issued rwts: total=3088,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:34.282 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:34.282 filename1: (groupid=0, jobs=1): err= 0: pid=2128032: Sat Jul 13 08:22:24 2024 00:35:34.282 read: IOPS=292, BW=1171KiB/s (1199kB/s)(11.6MiB/10105msec) 00:35:34.282 slat (usec): min=8, max=110, avg=20.84, stdev=12.86 00:35:34.282 clat (msec): min=24, max=464, avg=54.43, stdev=76.07 00:35:34.282 lat (msec): min=24, max=464, avg=54.45, stdev=76.07 00:35:34.282 clat percentiles (msec): 00:35:34.282 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:35:34.282 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:35:34.282 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 300], 00:35:34.282 | 99.00th=[ 393], 99.50th=[ 397], 99.90th=[ 443], 99.95th=[ 464], 00:35:34.282 | 99.99th=[ 464] 00:35:34.282 bw ( KiB/s): min= 128, max= 1920, per=4.10%, avg=1176.80, stdev=840.39, samples=20 00:35:34.282 iops : min= 32, max= 480, avg=294.20, stdev=210.10, samples=20 00:35:34.282 lat (msec) : 50=91.95%, 100=0.54%, 250=1.62%, 500=5.88% 00:35:34.282 cpu : usr=98.05%, sys=1.56%, ctx=15, majf=0, minf=9 00:35:34.282 IO depths : 1=5.9%, 2=12.1%, 4=24.9%, 8=50.5%, 16=6.6%, 32=0.0%, >=64=0.0% 00:35:34.282 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:34.282 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:34.282 issued rwts: total=2958,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:34.282 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:34.282 filename1: (groupid=0, jobs=1): err= 0: pid=2128033: Sat Jul 13 08:22:24 2024 00:35:34.282 read: IOPS=291, BW=1168KiB/s (1196kB/s)(11.5MiB/10083msec) 
00:35:34.282 slat (nsec): min=8214, max=66407, avg=30368.09, stdev=9268.76 00:35:34.282 clat (msec): min=31, max=484, avg=54.52, stdev=78.24 00:35:34.282 lat (msec): min=32, max=484, avg=54.55, stdev=78.23 00:35:34.282 clat percentiles (msec): 00:35:34.282 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:35:34.282 | 30.00th=[ 33], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:35:34.282 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 300], 00:35:34.282 | 99.00th=[ 405], 99.50th=[ 422], 99.90th=[ 456], 99.95th=[ 485], 00:35:34.282 | 99.99th=[ 485] 00:35:34.282 bw ( KiB/s): min= 128, max= 1920, per=4.09%, avg=1171.20, stdev=846.17, samples=20 00:35:34.282 iops : min= 32, max= 480, avg=292.80, stdev=211.54, samples=20 00:35:34.282 lat (msec) : 50=92.39%, 100=0.54%, 250=0.68%, 500=6.39% 00:35:34.282 cpu : usr=98.20%, sys=1.38%, ctx=32, majf=0, minf=9 00:35:34.282 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:34.282 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:34.282 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:34.282 issued rwts: total=2944,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:34.282 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:34.282 filename1: (groupid=0, jobs=1): err= 0: pid=2128034: Sat Jul 13 08:22:24 2024 00:35:34.282 read: IOPS=294, BW=1177KiB/s (1205kB/s)(11.6MiB/10113msec) 00:35:34.282 slat (usec): min=10, max=115, avg=55.53, stdev=23.94 00:35:34.282 clat (msec): min=21, max=441, avg=53.86, stdev=78.80 00:35:34.282 lat (msec): min=21, max=441, avg=53.91, stdev=78.79 00:35:34.282 clat percentiles (msec): 00:35:34.282 | 1.00th=[ 32], 5.00th=[ 32], 10.00th=[ 33], 20.00th=[ 33], 00:35:34.282 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 34], 60.00th=[ 34], 00:35:34.282 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 300], 00:35:34.282 | 99.00th=[ 397], 99.50th=[ 409], 99.90th=[ 439], 99.95th=[ 443], 00:35:34.282 | 99.99th=[ 443] 00:35:34.282 bw ( KiB/s): min= 128, max= 1920, per=4.13%, avg=1184.00, stdev=848.71, samples=20 00:35:34.282 iops : min= 32, max= 480, avg=296.00, stdev=212.18, samples=20 00:35:34.282 lat (msec) : 50=93.01%, 250=0.54%, 500=6.45% 00:35:34.282 cpu : usr=98.40%, sys=1.19%, ctx=16, majf=0, minf=9 00:35:34.282 IO depths : 1=6.0%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.5%, 32=0.0%, >=64=0.0% 00:35:34.282 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:34.282 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:34.282 issued rwts: total=2976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:34.282 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:34.282 filename1: (groupid=0, jobs=1): err= 0: pid=2128035: Sat Jul 13 08:22:24 2024 00:35:34.282 read: IOPS=293, BW=1173KiB/s (1201kB/s)(11.6MiB/10095msec) 00:35:34.282 slat (usec): min=8, max=127, avg=34.99, stdev=13.36 00:35:34.282 clat (msec): min=31, max=474, avg=54.23, stdev=76.18 00:35:34.282 lat (msec): min=31, max=474, avg=54.27, stdev=76.18 00:35:34.282 clat percentiles (msec): 00:35:34.282 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:35:34.282 | 30.00th=[ 33], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:35:34.282 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 305], 00:35:34.282 | 99.00th=[ 393], 99.50th=[ 397], 99.90th=[ 468], 99.95th=[ 477], 00:35:34.282 | 99.99th=[ 477] 00:35:34.282 bw ( KiB/s): min= 128, max= 1920, per=4.11%, avg=1177.60, stdev=839.45, samples=20 
00:35:34.282 iops : min= 32, max= 480, avg=294.40, stdev=209.86, samples=20 00:35:34.282 lat (msec) : 50=91.89%, 100=0.54%, 250=1.82%, 500=5.74% 00:35:34.282 cpu : usr=95.70%, sys=2.43%, ctx=117, majf=0, minf=9 00:35:34.282 IO depths : 1=6.0%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.5%, 32=0.0%, >=64=0.0% 00:35:34.282 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:34.282 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:34.282 issued rwts: total=2960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:34.282 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:34.282 filename1: (groupid=0, jobs=1): err= 0: pid=2128036: Sat Jul 13 08:22:24 2024 00:35:34.282 read: IOPS=304, BW=1219KiB/s (1249kB/s)(12.1MiB/10129msec) 00:35:34.282 slat (usec): min=7, max=100, avg=39.23, stdev=18.88 00:35:34.282 clat (msec): min=12, max=368, avg=51.79, stdev=59.35 00:35:34.282 lat (msec): min=12, max=368, avg=51.82, stdev=59.35 00:35:34.282 clat percentiles (msec): 00:35:34.282 | 1.00th=[ 15], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:35:34.282 | 30.00th=[ 33], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:35:34.282 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 36], 95.00th=[ 236], 00:35:34.282 | 99.00th=[ 264], 99.50th=[ 275], 99.90th=[ 275], 99.95th=[ 368], 00:35:34.282 | 99.99th=[ 368] 00:35:34.282 bw ( KiB/s): min= 256, max= 2048, per=4.29%, avg=1228.80, stdev=815.16, samples=20 00:35:34.282 iops : min= 64, max= 512, avg=307.20, stdev=203.79, samples=20 00:35:34.282 lat (msec) : 20=1.04%, 50=89.64%, 250=6.28%, 500=3.04% 00:35:34.282 cpu : usr=96.52%, sys=2.15%, ctx=46, majf=0, minf=9 00:35:34.282 IO depths : 1=5.8%, 2=12.0%, 4=25.0%, 8=50.5%, 16=6.7%, 32=0.0%, >=64=0.0% 00:35:34.282 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:34.282 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:34.282 issued rwts: total=3088,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:34.282 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:34.282 filename1: (groupid=0, jobs=1): err= 0: pid=2128037: Sat Jul 13 08:22:24 2024 00:35:34.282 read: IOPS=305, BW=1223KiB/s (1252kB/s)(12.0MiB/10051msec) 00:35:34.282 slat (usec): min=8, max=173, avg=49.45, stdev=24.81 00:35:34.282 clat (msec): min=21, max=276, avg=51.91, stdev=59.16 00:35:34.282 lat (msec): min=21, max=276, avg=51.96, stdev=59.15 00:35:34.282 clat percentiles (msec): 00:35:34.282 | 1.00th=[ 26], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:35:34.282 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 34], 60.00th=[ 34], 00:35:34.282 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 36], 95.00th=[ 236], 00:35:34.282 | 99.00th=[ 264], 99.50th=[ 275], 99.90th=[ 275], 99.95th=[ 275], 00:35:34.282 | 99.99th=[ 275] 00:35:34.282 bw ( KiB/s): min= 256, max= 1923, per=4.26%, avg=1222.55, stdev=809.12, samples=20 00:35:34.282 iops : min= 64, max= 480, avg=305.60, stdev=202.25, samples=20 00:35:34.282 lat (msec) : 50=90.62%, 250=6.25%, 500=3.12% 00:35:34.282 cpu : usr=96.87%, sys=2.02%, ctx=105, majf=0, minf=9 00:35:34.282 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:34.282 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:34.282 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:34.282 issued rwts: total=3072,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:34.282 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:34.283 filename1: (groupid=0, jobs=1): err= 0: pid=2128038: Sat 
Jul 13 08:22:24 2024 00:35:34.283 read: IOPS=302, BW=1210KiB/s (1239kB/s)(11.9MiB/10103msec) 00:35:34.283 slat (usec): min=8, max=110, avg=58.83, stdev=28.54 00:35:34.283 clat (msec): min=30, max=334, avg=52.02, stdev=59.11 00:35:34.283 lat (msec): min=30, max=334, avg=52.08, stdev=59.10 00:35:34.283 clat percentiles (msec): 00:35:34.283 | 1.00th=[ 32], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:35:34.283 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 34], 60.00th=[ 34], 00:35:34.283 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 37], 95.00th=[ 236], 00:35:34.283 | 99.00th=[ 257], 99.50th=[ 264], 99.90th=[ 264], 99.95th=[ 334], 00:35:34.283 | 99.99th=[ 334] 00:35:34.283 bw ( KiB/s): min= 256, max= 1920, per=4.24%, avg=1216.00, stdev=803.55, samples=20 00:35:34.283 iops : min= 64, max= 480, avg=304.00, stdev=200.89, samples=20 00:35:34.283 lat (msec) : 50=90.58%, 250=6.48%, 500=2.95% 00:35:34.283 cpu : usr=98.27%, sys=1.31%, ctx=13, majf=0, minf=9 00:35:34.283 IO depths : 1=5.7%, 2=11.9%, 4=25.0%, 8=50.6%, 16=6.8%, 32=0.0%, >=64=0.0% 00:35:34.283 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:34.283 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:34.283 issued rwts: total=3056,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:34.283 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:34.283 filename1: (groupid=0, jobs=1): err= 0: pid=2128039: Sat Jul 13 08:22:24 2024 00:35:34.283 read: IOPS=302, BW=1211KiB/s (1240kB/s)(12.0MiB/10131msec) 00:35:34.283 slat (usec): min=5, max=116, avg=45.63, stdev=22.21 00:35:34.283 clat (msec): min=21, max=381, avg=52.14, stdev=60.79 00:35:34.283 lat (msec): min=21, max=381, avg=52.19, stdev=60.78 00:35:34.283 clat percentiles (msec): 00:35:34.283 | 1.00th=[ 23], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:35:34.283 | 30.00th=[ 33], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:35:34.283 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 36], 95.00th=[ 232], 00:35:34.283 | 99.00th=[ 279], 99.50th=[ 292], 99.90th=[ 372], 99.95th=[ 380], 00:35:34.283 | 99.99th=[ 380] 00:35:34.283 bw ( KiB/s): min= 256, max= 1923, per=4.26%, avg=1220.95, stdev=810.75, samples=20 00:35:34.283 iops : min= 64, max= 480, avg=305.20, stdev=202.65, samples=20 00:35:34.283 lat (msec) : 50=90.74%, 250=6.98%, 500=2.28% 00:35:34.283 cpu : usr=95.88%, sys=2.47%, ctx=49, majf=0, minf=9 00:35:34.283 IO depths : 1=5.6%, 2=11.4%, 4=23.6%, 8=52.5%, 16=6.9%, 32=0.0%, >=64=0.0% 00:35:34.283 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:34.283 complete : 0=0.0%, 4=93.7%, 8=0.5%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:34.283 issued rwts: total=3068,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:34.283 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:34.283 filename2: (groupid=0, jobs=1): err= 0: pid=2128040: Sat Jul 13 08:22:24 2024 00:35:34.283 read: IOPS=294, BW=1177KiB/s (1205kB/s)(11.6MiB/10113msec) 00:35:34.283 slat (usec): min=14, max=114, avg=71.76, stdev=15.43 00:35:34.283 clat (msec): min=21, max=440, avg=53.72, stdev=78.69 00:35:34.283 lat (msec): min=21, max=440, avg=53.79, stdev=78.68 00:35:34.283 clat percentiles (msec): 00:35:34.283 | 1.00th=[ 32], 5.00th=[ 32], 10.00th=[ 33], 20.00th=[ 33], 00:35:34.283 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 33], 60.00th=[ 34], 00:35:34.283 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 300], 00:35:34.283 | 99.00th=[ 397], 99.50th=[ 409], 99.90th=[ 435], 99.95th=[ 439], 00:35:34.283 | 99.99th=[ 439] 00:35:34.283 bw ( KiB/s): min= 128, 
max= 1920, per=4.13%, avg=1184.00, stdev=848.93, samples=20 00:35:34.283 iops : min= 32, max= 480, avg=296.00, stdev=212.23, samples=20 00:35:34.283 lat (msec) : 50=93.01%, 250=0.54%, 500=6.45% 00:35:34.283 cpu : usr=97.90%, sys=1.44%, ctx=29, majf=0, minf=9 00:35:34.283 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:35:34.283 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:34.283 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:34.283 issued rwts: total=2976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:34.283 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:34.283 filename2: (groupid=0, jobs=1): err= 0: pid=2128041: Sat Jul 13 08:22:24 2024 00:35:34.283 read: IOPS=328, BW=1312KiB/s (1344kB/s)(12.8MiB/10006msec) 00:35:34.283 slat (usec): min=7, max=116, avg=29.00, stdev=23.46 00:35:34.283 clat (msec): min=13, max=510, avg=48.63, stdev=72.49 00:35:34.283 lat (msec): min=13, max=510, avg=48.66, stdev=72.49 00:35:34.283 clat percentiles (msec): 00:35:34.283 | 1.00th=[ 21], 5.00th=[ 21], 10.00th=[ 21], 20.00th=[ 22], 00:35:34.283 | 30.00th=[ 27], 40.00th=[ 30], 50.00th=[ 33], 60.00th=[ 34], 00:35:34.283 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 42], 95.00th=[ 236], 00:35:34.283 | 99.00th=[ 397], 99.50th=[ 397], 99.90th=[ 506], 99.95th=[ 510], 00:35:34.283 | 99.99th=[ 510] 00:35:34.283 bw ( KiB/s): min= 128, max= 2480, per=4.77%, avg=1368.42, stdev=943.21, samples=19 00:35:34.283 iops : min= 32, max= 620, avg=342.11, stdev=235.80, samples=19 00:35:34.283 lat (msec) : 20=0.67%, 50=91.96%, 100=0.73%, 250=2.07%, 500=4.45% 00:35:34.283 lat (msec) : 750=0.12% 00:35:34.283 cpu : usr=98.48%, sys=1.11%, ctx=14, majf=0, minf=9 00:35:34.283 IO depths : 1=0.2%, 2=1.2%, 4=7.1%, 8=77.3%, 16=14.3%, 32=0.0%, >=64=0.0% 00:35:34.283 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:34.283 complete : 0=0.0%, 4=89.7%, 8=6.7%, 16=3.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:34.283 issued rwts: total=3282,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:34.283 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:34.283 filename2: (groupid=0, jobs=1): err= 0: pid=2128042: Sat Jul 13 08:22:24 2024 00:35:34.283 read: IOPS=292, BW=1172KiB/s (1200kB/s)(11.6MiB/10104msec) 00:35:34.283 slat (nsec): min=5428, max=88029, avg=31226.47, stdev=11489.26 00:35:34.283 clat (msec): min=31, max=474, avg=54.32, stdev=76.18 00:35:34.283 lat (msec): min=31, max=474, avg=54.35, stdev=76.18 00:35:34.283 clat percentiles (msec): 00:35:34.283 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:35:34.283 | 30.00th=[ 33], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:35:34.283 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 305], 00:35:34.283 | 99.00th=[ 393], 99.50th=[ 397], 99.90th=[ 468], 99.95th=[ 477], 00:35:34.283 | 99.99th=[ 477] 00:35:34.283 bw ( KiB/s): min= 128, max= 1920, per=4.11%, avg=1177.60, stdev=839.34, samples=20 00:35:34.283 iops : min= 32, max= 480, avg=294.40, stdev=209.84, samples=20 00:35:34.283 lat (msec) : 50=91.89%, 100=0.54%, 250=1.82%, 500=5.74% 00:35:34.283 cpu : usr=96.35%, sys=2.16%, ctx=154, majf=0, minf=9 00:35:34.283 IO depths : 1=6.0%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.5%, 32=0.0%, >=64=0.0% 00:35:34.283 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:34.283 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:34.283 issued rwts: total=2960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:34.283 latency : 
target=0, window=0, percentile=100.00%, depth=16 00:35:34.283 filename2: (groupid=0, jobs=1): err= 0: pid=2128043: Sat Jul 13 08:22:24 2024 00:35:34.283 read: IOPS=291, BW=1167KiB/s (1195kB/s)(11.5MiB/10095msec) 00:35:34.283 slat (nsec): min=4370, max=67660, avg=29947.58, stdev=10759.77 00:35:34.283 clat (msec): min=25, max=489, avg=54.59, stdev=78.57 00:35:34.283 lat (msec): min=25, max=489, avg=54.62, stdev=78.56 00:35:34.283 clat percentiles (msec): 00:35:34.283 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:35:34.283 | 30.00th=[ 33], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:35:34.283 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 300], 00:35:34.283 | 99.00th=[ 405], 99.50th=[ 422], 99.90th=[ 485], 99.95th=[ 489], 00:35:34.283 | 99.99th=[ 489] 00:35:34.283 bw ( KiB/s): min= 128, max= 1920, per=4.09%, avg=1171.00, stdev=847.07, samples=20 00:35:34.283 iops : min= 32, max= 480, avg=292.75, stdev=211.77, samples=20 00:35:34.283 lat (msec) : 50=92.39%, 100=0.54%, 250=0.75%, 500=6.32% 00:35:34.283 cpu : usr=93.50%, sys=3.49%, ctx=144, majf=0, minf=9 00:35:34.283 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:35:34.283 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:34.283 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:34.283 issued rwts: total=2944,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:34.283 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:34.283 filename2: (groupid=0, jobs=1): err= 0: pid=2128044: Sat Jul 13 08:22:24 2024 00:35:34.283 read: IOPS=307, BW=1229KiB/s (1258kB/s)(12.1MiB/10051msec) 00:35:34.283 slat (nsec): min=8280, max=72400, avg=30479.88, stdev=12602.57 00:35:34.283 clat (msec): min=11, max=276, avg=51.81, stdev=59.08 00:35:34.283 lat (msec): min=11, max=276, avg=51.85, stdev=59.07 00:35:34.283 clat percentiles (msec): 00:35:34.283 | 1.00th=[ 14], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:35:34.283 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:35:34.283 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 36], 95.00th=[ 236], 00:35:34.283 | 99.00th=[ 264], 99.50th=[ 275], 99.90th=[ 275], 99.95th=[ 279], 00:35:34.283 | 99.99th=[ 279] 00:35:34.283 bw ( KiB/s): min= 256, max= 2052, per=4.29%, avg=1229.00, stdev=815.49, samples=20 00:35:34.283 iops : min= 64, max= 513, avg=307.25, stdev=203.87, samples=20 00:35:34.283 lat (msec) : 20=1.04%, 50=89.64%, 250=6.22%, 500=3.11% 00:35:34.283 cpu : usr=98.22%, sys=1.38%, ctx=13, majf=0, minf=9 00:35:34.283 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:34.283 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:34.283 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:34.283 issued rwts: total=3088,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:34.283 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:34.283 filename2: (groupid=0, jobs=1): err= 0: pid=2128045: Sat Jul 13 08:22:24 2024 00:35:34.283 read: IOPS=304, BW=1218KiB/s (1247kB/s)(12.0MiB/10128msec) 00:35:34.283 slat (nsec): min=8287, max=80623, avg=31906.27, stdev=13436.53 00:35:34.283 clat (msec): min=11, max=376, avg=52.09, stdev=61.11 00:35:34.283 lat (msec): min=11, max=376, avg=52.12, stdev=61.11 00:35:34.283 clat percentiles (msec): 00:35:34.283 | 1.00th=[ 14], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:35:34.283 | 30.00th=[ 33], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:35:34.283 | 70.00th=[ 34], 80.00th=[ 34], 
90.00th=[ 36], 95.00th=[ 234], 00:35:34.283 | 99.00th=[ 275], 99.50th=[ 330], 99.90th=[ 376], 99.95th=[ 376], 00:35:34.283 | 99.99th=[ 376] 00:35:34.283 bw ( KiB/s): min= 192, max= 2052, per=4.28%, avg=1227.40, stdev=817.57, samples=20 00:35:34.283 iops : min= 48, max= 513, avg=306.85, stdev=204.39, samples=20 00:35:34.283 lat (msec) : 20=1.04%, 50=89.75%, 250=5.64%, 500=3.57% 00:35:34.283 cpu : usr=97.60%, sys=1.84%, ctx=65, majf=0, minf=9 00:35:34.283 IO depths : 1=5.7%, 2=11.7%, 4=24.2%, 8=51.6%, 16=6.8%, 32=0.0%, >=64=0.0% 00:35:34.283 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:34.283 complete : 0=0.0%, 4=93.9%, 8=0.3%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:34.283 issued rwts: total=3084,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:34.284 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:34.284 filename2: (groupid=0, jobs=1): err= 0: pid=2128046: Sat Jul 13 08:22:24 2024 00:35:34.284 read: IOPS=293, BW=1173KiB/s (1201kB/s)(11.6MiB/10097msec) 00:35:34.284 slat (usec): min=8, max=162, avg=46.87, stdev=30.40 00:35:34.284 clat (msec): min=19, max=441, avg=54.15, stdev=78.45 00:35:34.284 lat (msec): min=19, max=441, avg=54.20, stdev=78.44 00:35:34.284 clat percentiles (msec): 00:35:34.284 | 1.00th=[ 32], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:35:34.284 | 30.00th=[ 33], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:35:34.284 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 35], 95.00th=[ 300], 00:35:34.284 | 99.00th=[ 397], 99.50th=[ 409], 99.90th=[ 439], 99.95th=[ 443], 00:35:34.284 | 99.99th=[ 443] 00:35:34.284 bw ( KiB/s): min= 128, max= 1920, per=4.11%, avg=1177.60, stdev=843.33, samples=20 00:35:34.284 iops : min= 32, max= 480, avg=294.40, stdev=210.83, samples=20 00:35:34.284 lat (msec) : 20=0.07%, 50=92.36%, 100=0.54%, 250=0.54%, 500=6.49% 00:35:34.284 cpu : usr=98.16%, sys=1.42%, ctx=16, majf=0, minf=9 00:35:34.284 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:35:34.284 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:34.284 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:34.284 issued rwts: total=2960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:34.284 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:34.284 filename2: (groupid=0, jobs=1): err= 0: pid=2128047: Sat Jul 13 08:22:24 2024 00:35:34.284 read: IOPS=301, BW=1207KiB/s (1236kB/s)(11.9MiB/10124msec) 00:35:34.284 slat (usec): min=8, max=115, avg=63.72, stdev=23.89 00:35:34.284 clat (msec): min=21, max=402, avg=52.09, stdev=59.65 00:35:34.284 lat (msec): min=21, max=402, avg=52.15, stdev=59.63 00:35:34.284 clat percentiles (msec): 00:35:34.284 | 1.00th=[ 32], 5.00th=[ 32], 10.00th=[ 33], 20.00th=[ 33], 00:35:34.284 | 30.00th=[ 33], 40.00th=[ 33], 50.00th=[ 34], 60.00th=[ 34], 00:35:34.284 | 70.00th=[ 34], 80.00th=[ 34], 90.00th=[ 36], 95.00th=[ 236], 00:35:34.284 | 99.00th=[ 259], 99.50th=[ 279], 99.90th=[ 279], 99.95th=[ 401], 00:35:34.284 | 99.99th=[ 401] 00:35:34.284 bw ( KiB/s): min= 256, max= 1920, per=4.24%, avg=1216.15, stdev=803.66, samples=20 00:35:34.284 iops : min= 64, max= 480, avg=304.00, stdev=200.89, samples=20 00:35:34.284 lat (msec) : 50=90.05%, 100=0.52%, 250=6.41%, 500=3.01% 00:35:34.284 cpu : usr=98.29%, sys=1.24%, ctx=27, majf=0, minf=9 00:35:34.284 IO depths : 1=5.7%, 2=11.9%, 4=25.0%, 8=50.6%, 16=6.8%, 32=0.0%, >=64=0.0% 00:35:34.284 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:34.284 complete : 0=0.0%, 4=94.2%, 
8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:34.284 issued rwts: total=3056,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:34.284 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:34.284 00:35:34.284 Run status group 0 (all jobs): 00:35:34.284 READ: bw=28.0MiB/s (29.3MB/s), 1164KiB/s-1312KiB/s (1192kB/s-1344kB/s), io=283MiB (297MB), run=10006-10131msec 00:35:34.284 08:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:35:34.284 08:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:35:34.284 08:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:34.284 08:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:34.284 08:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:35:34.284 08:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:34.284 08:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.284 08:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:34.284 08:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.284 08:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:34.284 08:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.284 08:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:34.284 08:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.284 08:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:34.284 08:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:34.284 08:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:35:34.284 08:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:34.284 08:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.284 08:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:34.284 08:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.284 08:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:34.284 08:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.284 08:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:34.284 08:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.284 08:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:34.284 08:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:35:34.284 08:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:35:34.284 08:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:35:34.284 08:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.284 08:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:34.284 08:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.284 08:22:24 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:35:34.284 08:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.284 08:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:34.284 08:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.284 08:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:35:34.284 08:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:35:34.284 08:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:35:34.284 08:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:35:34.284 08:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:35:34.284 08:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:35:34.284 08:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:35:34.284 08:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:34.284 08:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:34.284 08:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:35:34.284 08:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:35:34.284 08:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:34.284 08:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.284 08:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:34.284 bdev_null0 00:35:34.284 08:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.284 08:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:34.284 08:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.284 08:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:34.284 08:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.284 08:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:34.284 08:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.284 08:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:34.284 08:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.284 08:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:34.284 08:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.284 08:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:34.284 [2024-07-13 08:22:24.946703] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:34.284 08:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.284 08:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:34.284 08:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # 
create_subsystem 1 00:35:34.284 08:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:35:34.284 08:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:35:34.284 08:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.284 08:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:34.284 bdev_null1 00:35:34.284 08:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.284 08:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:34.284 08:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.284 08:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:34.284 08:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.284 08:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:34.284 08:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.284 08:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:34.284 08:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.284 08:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:34.284 08:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:34.284 08:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:34.284 08:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:34.284 08:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:35:34.284 08:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:35:34.284 08:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:35:34.284 08:22:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:35:34.284 08:22:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:35:34.284 08:22:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:34.284 08:22:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:34.284 { 00:35:34.284 "params": { 00:35:34.284 "name": "Nvme$subsystem", 00:35:34.284 "trtype": "$TEST_TRANSPORT", 00:35:34.284 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:34.284 "adrfam": "ipv4", 00:35:34.284 "trsvcid": "$NVMF_PORT", 00:35:34.284 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:34.284 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:34.284 "hdgst": ${hdgst:-false}, 00:35:34.285 "ddgst": ${ddgst:-false} 00:35:34.285 }, 00:35:34.285 "method": "bdev_nvme_attach_controller" 00:35:34.285 } 00:35:34.285 EOF 00:35:34.285 )") 00:35:34.285 08:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:34.285 08:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev 
--spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:34.285 08:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:35:34.285 08:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:34.285 08:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:35:34.285 08:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:34.285 08:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:35:34.285 08:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:35:34.285 08:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:35:34.285 08:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:34.285 08:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:34.285 08:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:34.285 08:22:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:34.285 08:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:34.285 08:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:35:34.285 08:22:24 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:34.285 08:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:34.285 08:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:34.285 08:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:35:34.285 08:22:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:34.285 08:22:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:34.285 { 00:35:34.285 "params": { 00:35:34.285 "name": "Nvme$subsystem", 00:35:34.285 "trtype": "$TEST_TRANSPORT", 00:35:34.285 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:34.285 "adrfam": "ipv4", 00:35:34.285 "trsvcid": "$NVMF_PORT", 00:35:34.285 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:34.285 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:34.285 "hdgst": ${hdgst:-false}, 00:35:34.285 "ddgst": ${ddgst:-false} 00:35:34.285 }, 00:35:34.285 "method": "bdev_nvme_attach_controller" 00:35:34.285 } 00:35:34.285 EOF 00:35:34.285 )") 00:35:34.285 08:22:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:34.285 08:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:35:34.285 08:22:24 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:34.285 08:22:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
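For reference, gen_nvmf_target_json above emits one bdev_nvme_attach_controller stanza per subsystem, pointing at the cnode0/cnode1 listeners this test just created. A minimal standalone sketch of the same target setup, assuming rpc_cmd resolves to scripts/rpc.py against a running nvmf_tgt, would be:

    # null bdev with 16-byte metadata and DIF type 1, exactly as in the trace above
    scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
    # expose it over NVMe/TCP on 10.0.0.2:4420
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

The same three subsystem calls are repeated for cnode1/bdev_null1, which is why the resolved JSON printed below carries two "params" blocks.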
00:35:34.285 08:22:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:35:34.285 08:22:24 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:34.285 "params": { 00:35:34.285 "name": "Nvme0", 00:35:34.285 "trtype": "tcp", 00:35:34.285 "traddr": "10.0.0.2", 00:35:34.285 "adrfam": "ipv4", 00:35:34.285 "trsvcid": "4420", 00:35:34.285 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:34.285 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:34.285 "hdgst": false, 00:35:34.285 "ddgst": false 00:35:34.285 }, 00:35:34.285 "method": "bdev_nvme_attach_controller" 00:35:34.285 },{ 00:35:34.285 "params": { 00:35:34.285 "name": "Nvme1", 00:35:34.285 "trtype": "tcp", 00:35:34.285 "traddr": "10.0.0.2", 00:35:34.285 "adrfam": "ipv4", 00:35:34.285 "trsvcid": "4420", 00:35:34.285 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:34.285 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:34.285 "hdgst": false, 00:35:34.285 "ddgst": false 00:35:34.285 }, 00:35:34.285 "method": "bdev_nvme_attach_controller" 00:35:34.285 }' 00:35:34.285 08:22:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:34.285 08:22:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:34.285 08:22:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:34.285 08:22:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:34.285 08:22:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:35:34.285 08:22:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:34.285 08:22:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:34.285 08:22:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:34.285 08:22:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:34.285 08:22:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:34.285 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:35:34.285 ... 00:35:34.285 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:35:34.285 ... 
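The printf output above is the fully resolved JSON that fio receives on /dev/fd/62; /dev/fd/61 carries the generated job file, and both are bash process substitutions, which is why the "paths" look like file descriptors. A rough standalone equivalent, using the plugin path from the trace and ordinary files (bdev.json and jobfile.fio are hypothetical names) instead of descriptors, would be:

    # sketch: fio with the SPDK bdev ioengine preloaded, mirroring the command in the trace above
    LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
      /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf bdev.json jobfile.fio

As a sanity check on the rand_params group that finished above (Run status group 0): bandwidth divided by IOPS recovers the block size (e.g., avg=1216.00 KiB/s over avg=304.00 IOPS gives 4KiB reads), and issued totals match the reported io sizes (3056 reads x 4KiB = 11.9MiB).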
00:35:34.285 fio-3.35 00:35:34.285 Starting 4 threads 00:35:34.285 EAL: No free 2048 kB hugepages reported on node 1 00:35:39.596 00:35:39.596 filename0: (groupid=0, jobs=1): err= 0: pid=2129374: Sat Jul 13 08:22:30 2024 00:35:39.596 read: IOPS=1900, BW=14.8MiB/s (15.6MB/s)(74.3MiB/5001msec) 00:35:39.596 slat (nsec): min=3971, max=62243, avg=14861.06, stdev=7591.39 00:35:39.596 clat (usec): min=913, max=9551, avg=4158.09, stdev=567.85 00:35:39.596 lat (usec): min=933, max=9582, avg=4172.96, stdev=568.05 00:35:39.596 clat percentiles (usec): 00:35:39.596 | 1.00th=[ 2638], 5.00th=[ 3425], 10.00th=[ 3687], 20.00th=[ 3884], 00:35:39.596 | 30.00th=[ 3982], 40.00th=[ 4080], 50.00th=[ 4146], 60.00th=[ 4178], 00:35:39.596 | 70.00th=[ 4228], 80.00th=[ 4359], 90.00th=[ 4621], 95.00th=[ 5080], 00:35:39.596 | 99.00th=[ 6325], 99.50th=[ 6521], 99.90th=[ 7504], 99.95th=[ 9241], 00:35:39.596 | 99.99th=[ 9503] 00:35:39.596 bw ( KiB/s): min=14592, max=15472, per=25.08%, avg=15146.67, stdev=254.75, samples=9 00:35:39.596 iops : min= 1824, max= 1934, avg=1893.33, stdev=31.84, samples=9 00:35:39.596 lat (usec) : 1000=0.02% 00:35:39.596 lat (msec) : 2=0.21%, 4=31.80%, 10=67.96% 00:35:39.596 cpu : usr=94.70%, sys=4.58%, ctx=6, majf=0, minf=9 00:35:39.596 IO depths : 1=0.3%, 2=8.8%, 4=64.3%, 8=26.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:39.596 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:39.596 complete : 0=0.0%, 4=91.7%, 8=8.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:39.596 issued rwts: total=9505,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:39.596 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:39.596 filename0: (groupid=0, jobs=1): err= 0: pid=2129375: Sat Jul 13 08:22:30 2024 00:35:39.596 read: IOPS=1875, BW=14.6MiB/s (15.4MB/s)(73.3MiB/5001msec) 00:35:39.596 slat (nsec): min=4527, max=64411, avg=16855.23, stdev=8409.18 00:35:39.596 clat (usec): min=1354, max=7252, avg=4212.36, stdev=611.94 00:35:39.596 lat (usec): min=1379, max=7268, avg=4229.22, stdev=611.60 00:35:39.596 clat percentiles (usec): 00:35:39.596 | 1.00th=[ 2835], 5.00th=[ 3392], 10.00th=[ 3687], 20.00th=[ 3884], 00:35:39.596 | 30.00th=[ 4015], 40.00th=[ 4080], 50.00th=[ 4146], 60.00th=[ 4228], 00:35:39.596 | 70.00th=[ 4293], 80.00th=[ 4359], 90.00th=[ 4883], 95.00th=[ 5407], 00:35:39.596 | 99.00th=[ 6521], 99.50th=[ 6652], 99.90th=[ 7046], 99.95th=[ 7177], 00:35:39.596 | 99.99th=[ 7242] 00:35:39.596 bw ( KiB/s): min=14608, max=15568, per=24.89%, avg=15032.89, stdev=318.21, samples=9 00:35:39.596 iops : min= 1826, max= 1946, avg=1879.11, stdev=39.78, samples=9 00:35:39.596 lat (msec) : 2=0.15%, 4=29.03%, 10=70.83% 00:35:39.596 cpu : usr=88.98%, sys=7.28%, ctx=486, majf=0, minf=9 00:35:39.596 IO depths : 1=0.1%, 2=6.9%, 4=65.4%, 8=27.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:39.596 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:39.596 complete : 0=0.0%, 4=92.6%, 8=7.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:39.596 issued rwts: total=9378,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:39.596 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:39.596 filename1: (groupid=0, jobs=1): err= 0: pid=2129376: Sat Jul 13 08:22:30 2024 00:35:39.596 read: IOPS=1836, BW=14.3MiB/s (15.0MB/s)(71.8MiB/5002msec) 00:35:39.596 slat (nsec): min=4123, max=57942, avg=14687.91, stdev=7424.80 00:35:39.596 clat (usec): min=899, max=7647, avg=4308.02, stdev=655.28 00:35:39.596 lat (usec): min=914, max=7655, avg=4322.71, stdev=654.58 00:35:39.596 clat percentiles (usec): 00:35:39.596 | 
1.00th=[ 2966], 5.00th=[ 3621], 10.00th=[ 3785], 20.00th=[ 3949], 00:35:39.596 | 30.00th=[ 4080], 40.00th=[ 4113], 50.00th=[ 4178], 60.00th=[ 4228], 00:35:39.596 | 70.00th=[ 4293], 80.00th=[ 4490], 90.00th=[ 5145], 95.00th=[ 5866], 00:35:39.596 | 99.00th=[ 6587], 99.50th=[ 6718], 99.90th=[ 7177], 99.95th=[ 7373], 00:35:39.596 | 99.99th=[ 7635] 00:35:39.596 bw ( KiB/s): min=14208, max=15120, per=24.28%, avg=14663.11, stdev=244.90, samples=9 00:35:39.596 iops : min= 1776, max= 1890, avg=1832.89, stdev=30.61, samples=9 00:35:39.596 lat (usec) : 1000=0.01% 00:35:39.596 lat (msec) : 2=0.10%, 4=23.75%, 10=76.14% 00:35:39.596 cpu : usr=95.14%, sys=4.36%, ctx=9, majf=0, minf=9 00:35:39.596 IO depths : 1=0.1%, 2=7.0%, 4=65.4%, 8=27.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:39.596 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:39.596 complete : 0=0.0%, 4=92.5%, 8=7.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:39.596 issued rwts: total=9187,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:39.596 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:39.596 filename1: (groupid=0, jobs=1): err= 0: pid=2129377: Sat Jul 13 08:22:30 2024 00:35:39.596 read: IOPS=1938, BW=15.1MiB/s (15.9MB/s)(75.8MiB/5003msec) 00:35:39.596 slat (nsec): min=3857, max=65466, avg=15710.34, stdev=8011.20 00:35:39.596 clat (usec): min=777, max=7893, avg=4074.00, stdev=554.45 00:35:39.596 lat (usec): min=789, max=7920, avg=4089.71, stdev=554.93 00:35:39.596 clat percentiles (usec): 00:35:39.596 | 1.00th=[ 2606], 5.00th=[ 3130], 10.00th=[ 3490], 20.00th=[ 3818], 00:35:39.596 | 30.00th=[ 3916], 40.00th=[ 4015], 50.00th=[ 4113], 60.00th=[ 4178], 00:35:39.596 | 70.00th=[ 4228], 80.00th=[ 4293], 90.00th=[ 4490], 95.00th=[ 4948], 00:35:39.596 | 99.00th=[ 6063], 99.50th=[ 6390], 99.90th=[ 7701], 99.95th=[ 7898], 00:35:39.596 | 99.99th=[ 7898] 00:35:39.596 bw ( KiB/s): min=14621, max=16384, per=25.68%, avg=15508.50, stdev=452.64, samples=10 00:35:39.596 iops : min= 1827, max= 2048, avg=1938.50, stdev=56.72, samples=10 00:35:39.596 lat (usec) : 1000=0.05% 00:35:39.596 lat (msec) : 2=0.16%, 4=38.52%, 10=61.26% 00:35:39.596 cpu : usr=94.04%, sys=5.18%, ctx=12, majf=0, minf=9 00:35:39.596 IO depths : 1=0.2%, 2=8.9%, 4=64.3%, 8=26.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:39.596 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:39.596 complete : 0=0.0%, 4=91.5%, 8=8.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:39.596 issued rwts: total=9699,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:39.596 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:39.596 00:35:39.596 Run status group 0 (all jobs): 00:35:39.596 READ: bw=59.0MiB/s (61.8MB/s), 14.3MiB/s-15.1MiB/s (15.0MB/s-15.9MB/s), io=295MiB (309MB), run=5001-5003msec 00:35:39.596 08:22:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:35:39.596 08:22:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:35:39.596 08:22:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:39.596 08:22:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:39.596 08:22:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:35:39.596 08:22:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:39.596 08:22:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:39.596 08:22:31 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:35:39.596 08:22:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:39.596 08:22:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:39.596 08:22:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:39.596 08:22:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:39.596 08:22:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:39.597 08:22:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:39.597 08:22:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:39.597 08:22:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:35:39.597 08:22:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:39.597 08:22:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:39.597 08:22:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:39.597 08:22:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:39.597 08:22:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:39.597 08:22:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:39.597 08:22:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:39.597 08:22:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:39.597 00:35:39.597 real 0m24.276s 00:35:39.597 user 4m32.872s 00:35:39.597 sys 0m7.421s 00:35:39.597 08:22:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:39.597 08:22:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:39.597 ************************************ 00:35:39.597 END TEST fio_dif_rand_params 00:35:39.597 ************************************ 00:35:39.597 08:22:31 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:35:39.597 08:22:31 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:35:39.597 08:22:31 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:35:39.597 08:22:31 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:39.597 08:22:31 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:39.597 ************************************ 00:35:39.597 START TEST fio_dif_digest 00:35:39.597 ************************************ 00:35:39.597 08:22:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:35:39.597 08:22:31 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:35:39.597 08:22:31 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:35:39.597 08:22:31 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:35:39.597 08:22:31 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:35:39.597 08:22:31 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:35:39.597 08:22:31 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:35:39.597 08:22:31 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:35:39.597 08:22:31 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:35:39.597 08:22:31 nvmf_dif.fio_dif_digest -- target/dif.sh@128 
-- # hdgst=true 00:35:39.597 08:22:31 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:35:39.597 08:22:31 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:35:39.597 08:22:31 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:35:39.597 08:22:31 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:35:39.597 08:22:31 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:35:39.597 08:22:31 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:35:39.597 08:22:31 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:35:39.597 08:22:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:39.597 08:22:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:39.597 bdev_null0 00:35:39.597 08:22:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:39.597 08:22:31 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:39.597 08:22:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:39.597 08:22:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:39.597 08:22:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:39.597 08:22:31 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:39.597 08:22:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:39.597 08:22:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:39.597 08:22:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:39.597 08:22:31 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:39.597 08:22:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:39.597 08:22:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:39.597 [2024-07-13 08:22:31.285777] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:39.597 08:22:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:39.597 08:22:31 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:35:39.597 08:22:31 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:35:39.597 08:22:31 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:39.597 08:22:31 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:35:39.597 08:22:31 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:35:39.597 08:22:31 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:39.597 08:22:31 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:39.597 08:22:31 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:39.597 { 00:35:39.597 "params": { 00:35:39.597 "name": "Nvme$subsystem", 00:35:39.597 "trtype": "$TEST_TRANSPORT", 00:35:39.597 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:39.597 "adrfam": "ipv4", 00:35:39.597 "trsvcid": "$NVMF_PORT", 00:35:39.597 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:39.597 
"hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:39.597 "hdgst": ${hdgst:-false}, 00:35:39.597 "ddgst": ${ddgst:-false} 00:35:39.597 }, 00:35:39.597 "method": "bdev_nvme_attach_controller" 00:35:39.597 } 00:35:39.597 EOF 00:35:39.597 )") 00:35:39.597 08:22:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:39.597 08:22:31 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:35:39.597 08:22:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:35:39.597 08:22:31 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:35:39.597 08:22:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:39.597 08:22:31 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:35:39.597 08:22:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:35:39.597 08:22:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:39.597 08:22:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:35:39.597 08:22:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:35:39.597 08:22:31 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:35:39.597 08:22:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:39.597 08:22:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:39.597 08:22:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:35:39.597 08:22:31 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:35:39.597 08:22:31 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:35:39.597 08:22:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:39.597 08:22:31 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:35:39.597 08:22:31 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:35:39.597 08:22:31 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:39.597 "params": { 00:35:39.597 "name": "Nvme0", 00:35:39.597 "trtype": "tcp", 00:35:39.597 "traddr": "10.0.0.2", 00:35:39.597 "adrfam": "ipv4", 00:35:39.597 "trsvcid": "4420", 00:35:39.597 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:39.597 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:39.597 "hdgst": true, 00:35:39.597 "ddgst": true 00:35:39.597 }, 00:35:39.597 "method": "bdev_nvme_attach_controller" 00:35:39.597 }' 00:35:39.597 08:22:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:39.597 08:22:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:39.597 08:22:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:39.597 08:22:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:39.597 08:22:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:35:39.597 08:22:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:39.855 08:22:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:39.855 08:22:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:39.855 08:22:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:39.855 08:22:31 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:39.855 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:35:39.855 ... 
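The generated job file itself is not echoed in this log, but from the parameters set at target/dif.sh@127 (bs=128k, numjobs=3, iodepth=3, runtime=10) and the job description line above, a consistent reconstruction would look roughly like:

    [filename0]
    ioengine=spdk_bdev
    filename=Nvme0n1   ; assumed bdev name for controller "Nvme0"; gen_fio_conf derives the real value
    rw=randread
    bs=128k
    iodepth=3
    numjobs=3
    runtime=10
    time_based=1       ; assumed, consistent with the ~10s run times reported below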
00:35:39.855 fio-3.35 00:35:39.855 Starting 3 threads 00:35:39.855 EAL: No free 2048 kB hugepages reported on node 1 00:35:52.048 00:35:52.048 filename0: (groupid=0, jobs=1): err= 0: pid=2130175: Sat Jul 13 08:22:42 2024 00:35:52.048 read: IOPS=256, BW=32.1MiB/s (33.7MB/s)(323MiB/10046msec) 00:35:52.048 slat (nsec): min=4358, max=33763, avg=14660.86, stdev=2134.47 00:35:52.048 clat (usec): min=5670, max=56178, avg=11640.88, stdev=2024.90 00:35:52.048 lat (usec): min=5683, max=56202, avg=11655.54, stdev=2025.02 00:35:52.048 clat percentiles (usec): 00:35:52.048 | 1.00th=[ 8225], 5.00th=[ 8979], 10.00th=[ 9372], 20.00th=[ 9896], 00:35:52.048 | 30.00th=[10552], 40.00th=[11207], 50.00th=[11863], 60.00th=[12256], 00:35:52.048 | 70.00th=[12649], 80.00th=[13173], 90.00th=[13698], 95.00th=[14091], 00:35:52.048 | 99.00th=[15008], 99.50th=[15401], 99.90th=[19006], 99.95th=[50070], 00:35:52.048 | 99.99th=[56361] 00:35:52.048 bw ( KiB/s): min=29952, max=35072, per=42.44%, avg=33011.20, stdev=1432.53, samples=20 00:35:52.048 iops : min= 234, max= 274, avg=257.90, stdev=11.19, samples=20 00:35:52.048 lat (msec) : 10=21.66%, 20=78.26%, 50=0.04%, 100=0.04% 00:35:52.048 cpu : usr=92.17%, sys=7.32%, ctx=15, majf=0, minf=60 00:35:52.048 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:52.048 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:52.048 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:52.048 issued rwts: total=2581,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:52.048 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:52.048 filename0: (groupid=0, jobs=1): err= 0: pid=2130176: Sat Jul 13 08:22:42 2024 00:35:52.048 read: IOPS=123, BW=15.5MiB/s (16.2MB/s)(156MiB/10048msec) 00:35:52.048 slat (nsec): min=5687, max=44972, avg=19636.30, stdev=3335.78 00:35:52.048 clat (msec): min=9, max=101, avg=24.17, stdev=16.05 00:35:52.048 lat (msec): min=9, max=101, avg=24.19, stdev=16.05 00:35:52.048 clat percentiles (msec): 00:35:52.048 | 1.00th=[ 14], 5.00th=[ 16], 10.00th=[ 16], 20.00th=[ 17], 00:35:52.048 | 30.00th=[ 17], 40.00th=[ 18], 50.00th=[ 18], 60.00th=[ 18], 00:35:52.048 | 70.00th=[ 19], 80.00th=[ 21], 90.00th=[ 59], 95.00th=[ 60], 00:35:52.048 | 99.00th=[ 62], 99.50th=[ 99], 99.90th=[ 102], 99.95th=[ 103], 00:35:52.048 | 99.99th=[ 103] 00:35:52.048 bw ( KiB/s): min=10240, max=23040, per=20.42%, avg=15884.80, stdev=2920.59, samples=20 00:35:52.048 iops : min= 80, max= 180, avg=124.10, stdev=22.82, samples=20 00:35:52.048 lat (msec) : 10=0.08%, 20=80.14%, 50=3.86%, 100=15.59%, 250=0.32% 00:35:52.048 cpu : usr=92.15%, sys=7.05%, ctx=77, majf=0, minf=105 00:35:52.048 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:52.048 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:52.048 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:52.048 issued rwts: total=1244,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:52.048 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:52.048 filename0: (groupid=0, jobs=1): err= 0: pid=2130177: Sat Jul 13 08:22:42 2024 00:35:52.048 read: IOPS=227, BW=28.4MiB/s (29.8MB/s)(285MiB/10047msec) 00:35:52.048 slat (nsec): min=5103, max=35925, avg=14168.95, stdev=2045.50 00:35:52.048 clat (usec): min=8355, max=56962, avg=13177.11, stdev=3269.12 00:35:52.048 lat (usec): min=8369, max=56977, avg=13191.28, stdev=3269.18 00:35:52.048 clat percentiles (usec): 00:35:52.048 | 1.00th=[ 8979], 5.00th=[ 9634], 
10.00th=[ 9896], 20.00th=[10421], 00:35:52.048 | 30.00th=[11469], 40.00th=[12911], 50.00th=[13566], 60.00th=[14091], 00:35:52.048 | 70.00th=[14484], 80.00th=[14877], 90.00th=[15533], 95.00th=[16057], 00:35:52.048 | 99.00th=[17171], 99.50th=[17695], 99.90th=[56361], 99.95th=[56886], 00:35:52.048 | 99.99th=[56886] 00:35:52.048 bw ( KiB/s): min=24064, max=32512, per=37.49%, avg=29161.05, stdev=1981.00, samples=20 00:35:52.048 iops : min= 188, max= 254, avg=227.80, stdev=15.50, samples=20 00:35:52.048 lat (msec) : 10=11.49%, 20=88.03%, 50=0.18%, 100=0.31% 00:35:52.048 cpu : usr=91.85%, sys=7.52%, ctx=15, majf=0, minf=155 00:35:52.048 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:52.048 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:52.048 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:52.048 issued rwts: total=2281,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:52.048 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:52.048 00:35:52.048 Run status group 0 (all jobs): 00:35:52.048 READ: bw=76.0MiB/s (79.7MB/s), 15.5MiB/s-32.1MiB/s (16.2MB/s-33.7MB/s), io=763MiB (800MB), run=10046-10048msec 00:35:52.048 08:22:42 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:35:52.048 08:22:42 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:35:52.048 08:22:42 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:35:52.048 08:22:42 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:52.048 08:22:42 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:35:52.048 08:22:42 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:52.048 08:22:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:52.048 08:22:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:52.048 08:22:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:52.048 08:22:42 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:52.048 08:22:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:52.048 08:22:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:52.048 08:22:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:52.048 00:35:52.048 real 0m11.049s 00:35:52.048 user 0m28.636s 00:35:52.048 sys 0m2.488s 00:35:52.048 08:22:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:52.048 08:22:42 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:52.048 ************************************ 00:35:52.048 END TEST fio_dif_digest 00:35:52.048 ************************************ 00:35:52.048 08:22:42 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:35:52.048 08:22:42 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:35:52.048 08:22:42 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:35:52.048 08:22:42 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:35:52.048 08:22:42 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:35:52.048 08:22:42 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:35:52.048 08:22:42 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:35:52.048 08:22:42 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:35:52.048 08:22:42 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:35:52.048 
rmmod nvme_tcp 00:35:52.048 rmmod nvme_fabrics 00:35:52.048 rmmod nvme_keyring 00:35:52.048 08:22:42 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:35:52.048 08:22:42 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:35:52.048 08:22:42 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:35:52.048 08:22:42 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 2124135 ']' 00:35:52.048 08:22:42 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 2124135 00:35:52.048 08:22:42 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 2124135 ']' 00:35:52.048 08:22:42 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 2124135 00:35:52.048 08:22:42 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:35:52.048 08:22:42 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:52.048 08:22:42 nvmf_dif -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2124135 00:35:52.048 08:22:42 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:35:52.048 08:22:42 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:35:52.048 08:22:42 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2124135' 00:35:52.048 killing process with pid 2124135 00:35:52.048 08:22:42 nvmf_dif -- common/autotest_common.sh@967 -- # kill 2124135 00:35:52.048 08:22:42 nvmf_dif -- common/autotest_common.sh@972 -- # wait 2124135 00:35:52.048 08:22:42 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:35:52.048 08:22:42 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:52.048 Waiting for block devices as requested 00:35:52.049 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:35:52.307 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:52.307 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:52.307 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:52.565 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:52.565 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:52.565 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:52.565 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:52.824 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:52.824 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:52.824 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:52.824 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:53.083 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:53.083 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:53.083 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:53.340 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:53.340 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:53.340 08:22:45 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:35:53.340 08:22:45 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:35:53.340 08:22:45 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:53.340 08:22:45 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:35:53.340 08:22:45 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:53.340 08:22:45 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:53.340 08:22:45 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:55.868 08:22:47 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:35:55.868 00:35:55.868 real 1m6.538s 00:35:55.868 user 6m29.221s 00:35:55.868 sys 0m18.818s 00:35:55.868 08:22:47 nvmf_dif -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:55.868 
08:22:47 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:55.868 ************************************ 00:35:55.868 END TEST nvmf_dif 00:35:55.868 ************************************ 00:35:55.868 08:22:47 -- common/autotest_common.sh@1142 -- # return 0 00:35:55.868 08:22:47 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:35:55.868 08:22:47 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:35:55.868 08:22:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:55.868 08:22:47 -- common/autotest_common.sh@10 -- # set +x 00:35:55.868 ************************************ 00:35:55.868 START TEST nvmf_abort_qd_sizes 00:35:55.868 ************************************ 00:35:55.869 08:22:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:35:55.869 * Looking for test storage... 00:35:55.869 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:55.869 08:22:47 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:55.869 08:22:47 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:35:55.869 08:22:47 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:55.869 08:22:47 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:55.869 08:22:47 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:55.869 08:22:47 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:55.869 08:22:47 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:55.869 08:22:47 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:55.869 08:22:47 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:55.869 08:22:47 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:55.869 08:22:47 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:55.869 08:22:47 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:55.869 08:22:47 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:55.869 08:22:47 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:55.869 08:22:47 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:55.869 08:22:47 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:55.869 08:22:47 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:55.869 08:22:47 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:55.869 08:22:47 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:55.869 08:22:47 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:55.869 08:22:47 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:55.869 08:22:47 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:55.869 08:22:47 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:55.869 08:22:47 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:55.869 08:22:47 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:55.869 08:22:47 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:35:55.869 08:22:47 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:55.869 08:22:47 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:35:55.869 08:22:47 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:55.869 08:22:47 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:55.869 08:22:47 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:55.869 08:22:47 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:55.869 08:22:47 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:55.869 08:22:47 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:55.869 08:22:47 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:55.869 08:22:47 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:55.869 08:22:47 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:35:55.869 08:22:47 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:35:55.869 08:22:47 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:55.869 08:22:47 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:35:55.869 08:22:47 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:35:55.869 08:22:47 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:35:55.869 08:22:47 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:55.869 08:22:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:55.869 08:22:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:55.869 08:22:47 
nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:35:55.869 08:22:47 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:35:55.869 08:22:47 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:35:55.869 08:22:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:57.770 08:22:49 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:57.770 08:22:49 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:35:57.770 08:22:49 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:35:57.770 08:22:49 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:35:57.770 08:22:49 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:35:57.770 08:22:49 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:35:57.770 08:22:49 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:35:57.770 08:22:49 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:35:57.770 08:22:49 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:35:57.770 08:22:49 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:35:57.770 08:22:49 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:35:57.770 08:22:49 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:35:57.770 08:22:49 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:35:57.770 08:22:49 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:35:57.770 08:22:49 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:35:57.770 08:22:49 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:57.770 08:22:49 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:57.770 08:22:49 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:57.770 08:22:49 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:57.770 08:22:49 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:57.771 08:22:49 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:57.771 08:22:49 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:57.771 08:22:49 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:57.771 08:22:49 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:57.771 08:22:49 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:57.771 08:22:49 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:57.771 08:22:49 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:35:57.771 08:22:49 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:35:57.771 08:22:49 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:35:57.771 08:22:49 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:35:57.771 08:22:49 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:35:57.771 08:22:49 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:35:57.771 08:22:49 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:57.771 08:22:49 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # 
echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:35:57.771 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:35:57.771 08:22:49 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:57.771 08:22:49 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:57.771 08:22:49 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:57.771 08:22:49 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:57.771 08:22:49 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:57.771 08:22:49 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:57.771 08:22:49 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:35:57.771 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:35:57.771 08:22:49 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:57.771 08:22:49 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:57.771 08:22:49 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:57.771 08:22:49 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:57.771 08:22:49 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:57.771 08:22:49 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:35:57.771 08:22:49 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:35:57.771 08:22:49 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:35:57.771 08:22:49 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:57.771 08:22:49 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:57.771 08:22:49 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:57.771 08:22:49 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:57.771 08:22:49 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:57.771 08:22:49 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:57.771 08:22:49 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:57.771 08:22:49 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:35:57.771 Found net devices under 0000:0a:00.0: cvl_0_0 00:35:57.771 08:22:49 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:57.771 08:22:49 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:57.771 08:22:49 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:57.771 08:22:49 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:57.771 08:22:49 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:57.771 08:22:49 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:57.771 08:22:49 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:57.771 08:22:49 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:57.771 08:22:49 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:35:57.771 Found net devices under 0000:0a:00.1: cvl_0_1 00:35:57.771 08:22:49 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:57.771 08:22:49 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
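Both E810 ports are now known (cvl_0_0 and cvl_0_1, presumably cabled back to back on this phy rig), and the nvmf_tcp_init trace that follows wires them into a loop on a single host by hiding one port in a network namespace. Condensed from the traced commands, with the addresses used in this run:

    ip netns add cvl_0_0_ns_spdk                                        # target side lives here
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator (host side)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP traffic

The pair of pings that close the block are the sanity check: each side must reach the other before any nvmf target is started.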
00:35:57.771 08:22:49 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:35:57.771 08:22:49 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:35:57.771 08:22:49 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:35:57.771 08:22:49 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:35:57.771 08:22:49 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:57.771 08:22:49 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:57.771 08:22:49 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:57.771 08:22:49 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:35:57.771 08:22:49 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:57.771 08:22:49 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:57.771 08:22:49 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:35:57.771 08:22:49 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:57.771 08:22:49 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:57.771 08:22:49 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:35:57.771 08:22:49 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:35:57.771 08:22:49 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:35:57.771 08:22:49 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:57.771 08:22:49 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:57.771 08:22:49 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:57.771 08:22:49 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:35:57.771 08:22:49 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:57.771 08:22:49 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:57.771 08:22:49 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:57.771 08:22:49 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:35:57.771 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:57.771 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.245 ms 00:35:57.771 00:35:57.771 --- 10.0.0.2 ping statistics --- 00:35:57.771 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:57.771 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:35:57.771 08:22:49 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:57.771 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:57.771 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:35:57.771 00:35:57.771 --- 10.0.0.1 ping statistics --- 00:35:57.771 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:57.771 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:35:57.771 08:22:49 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:57.771 08:22:49 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:35:57.771 08:22:49 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:35:57.771 08:22:49 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:58.705 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:35:58.705 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:35:58.705 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:35:58.705 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:35:58.705 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:35:58.705 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:35:58.705 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:35:58.705 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:35:58.705 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:35:58.705 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:35:58.705 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:35:58.705 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:35:58.705 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:35:58.705 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:35:58.705 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:35:58.705 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:35:59.640 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:35:59.899 08:22:51 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:59.899 08:22:51 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:35:59.899 08:22:51 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:35:59.899 08:22:51 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:59.899 08:22:51 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:35:59.899 08:22:51 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:35:59.899 08:22:51 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:35:59.899 08:22:51 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:59.899 08:22:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:35:59.899 08:22:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:59.899 08:22:51 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=2134972 00:35:59.899 08:22:51 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:35:59.899 08:22:51 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 2134972 00:35:59.899 08:22:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 2134972 ']' 00:35:59.899 08:22:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:59.899 08:22:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:59.899 08:22:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:35:59.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:59.899 08:22:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:59.899 08:22:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:59.899 [2024-07-13 08:22:51.524027] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:35:59.899 [2024-07-13 08:22:51.524096] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:59.899 EAL: No free 2048 kB hugepages reported on node 1 00:35:59.899 [2024-07-13 08:22:51.587368] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:00.158 [2024-07-13 08:22:51.673619] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:00.158 [2024-07-13 08:22:51.673686] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:00.158 [2024-07-13 08:22:51.673703] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:00.158 [2024-07-13 08:22:51.673714] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:00.158 [2024-07-13 08:22:51.673737] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:00.158 [2024-07-13 08:22:51.673828] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:36:00.159 [2024-07-13 08:22:51.673892] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:36:00.159 [2024-07-13 08:22:51.673959] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:36:00.159 [2024-07-13 08:22:51.673962] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:00.159 08:22:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:00.159 08:22:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:36:00.159 08:22:51 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:36:00.159 08:22:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:36:00.159 08:22:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:00.159 08:22:51 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:00.159 08:22:51 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:36:00.159 08:22:51 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:36:00.159 08:22:51 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:36:00.159 08:22:51 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:36:00.159 08:22:51 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:36:00.159 08:22:51 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:88:00.0 ]] 00:36:00.159 08:22:51 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:36:00.159 08:22:51 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:36:00.159 08:22:51 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:88:00.0 ]] 00:36:00.159 08:22:51 
nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:36:00.159 08:22:51 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:36:00.159 08:22:51 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:36:00.159 08:22:51 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:36:00.159 08:22:51 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:88:00.0 00:36:00.159 08:22:51 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:36:00.159 08:22:51 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:88:00.0 00:36:00.159 08:22:51 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:36:00.159 08:22:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:36:00.159 08:22:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:00.159 08:22:51 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:00.159 ************************************ 00:36:00.159 START TEST spdk_target_abort 00:36:00.159 ************************************ 00:36:00.159 08:22:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:36:00.159 08:22:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:36:00.159 08:22:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target 00:36:00.159 08:22:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:00.159 08:22:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:03.442 spdk_targetn1 00:36:03.442 08:22:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:03.442 08:22:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:03.442 08:22:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:03.442 08:22:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:03.442 [2024-07-13 08:22:54.676995] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:03.442 08:22:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:03.442 08:22:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:36:03.442 08:22:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:03.442 08:22:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:03.442 08:22:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:03.442 08:22:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:36:03.442 08:22:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:03.442 08:22:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:03.442 08:22:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:03.442 08:22:54 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:36:03.442 08:22:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:03.442 08:22:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:03.442 [2024-07-13 08:22:54.709225] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:03.442 08:22:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:03.442 08:22:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:36:03.442 08:22:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:36:03.442 08:22:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:36:03.442 08:22:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:36:03.442 08:22:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:36:03.442 08:22:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:36:03.442 08:22:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:36:03.442 08:22:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:36:03.442 08:22:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:36:03.442 08:22:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:03.442 08:22:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:36:03.442 08:22:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:03.442 08:22:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:36:03.442 08:22:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:03.442 08:22:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:36:03.442 08:22:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:03.442 08:22:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:03.442 08:22:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:03.442 08:22:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:03.442 08:22:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:03.442 08:22:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:03.442 EAL: No free 2048 kB hugepages 
reported on node 1 00:36:06.752 Initializing NVMe Controllers 00:36:06.752 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:06.752 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:06.752 Initialization complete. Launching workers. 00:36:06.752 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 11378, failed: 0 00:36:06.752 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1318, failed to submit 10060 00:36:06.752 success 790, unsuccess 528, failed 0 00:36:06.752 08:22:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:06.752 08:22:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:06.752 EAL: No free 2048 kB hugepages reported on node 1 00:36:10.032 Initializing NVMe Controllers 00:36:10.032 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:10.032 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:10.032 Initialization complete. Launching workers. 00:36:10.032 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8662, failed: 0 00:36:10.032 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1204, failed to submit 7458 00:36:10.032 success 361, unsuccess 843, failed 0 00:36:10.032 08:23:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:10.032 08:23:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:10.032 EAL: No free 2048 kB hugepages reported on node 1 00:36:12.574 Initializing NVMe Controllers 00:36:12.574 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:12.574 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:12.574 Initialization complete. Launching workers. 
00:36:12.574 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31272, failed: 0 00:36:12.574 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2776, failed to submit 28496 00:36:12.574 success 532, unsuccess 2244, failed 0 00:36:12.574 08:23:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:36:12.574 08:23:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:12.574 08:23:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:12.574 08:23:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:12.574 08:23:04 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:36:12.574 08:23:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:12.574 08:23:04 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:13.944 08:23:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:13.944 08:23:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 2134972 00:36:13.944 08:23:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 2134972 ']' 00:36:13.944 08:23:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 2134972 00:36:13.944 08:23:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:36:13.944 08:23:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:13.944 08:23:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2134972 00:36:14.203 08:23:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:36:14.203 08:23:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:36:14.203 08:23:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2134972' 00:36:14.203 killing process with pid 2134972 00:36:14.203 08:23:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 2134972 00:36:14.203 08:23:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 2134972 00:36:14.203 00:36:14.203 real 0m14.098s 00:36:14.203 user 0m53.394s 00:36:14.203 sys 0m2.541s 00:36:14.203 08:23:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:14.203 08:23:05 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:14.203 ************************************ 00:36:14.203 END TEST spdk_target_abort 00:36:14.203 ************************************ 00:36:14.462 08:23:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:36:14.462 08:23:05 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:36:14.462 08:23:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:36:14.462 08:23:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:14.462 08:23:05 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:14.462 
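The kernel_target_abort test that starts below runs the same abort workload against the in-kernel nvmet target instead of SPDK's. The trace ahead shows only bare mkdir/echo/ln steps, so as a sketch: the values echoed map onto the standard nvmet configfs attributes roughly as follows (the attribute paths are an assumption inferred from the echoed values; the helper does not print them):

    modprobe nvmet
    sub=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    port=/sys/kernel/config/nvmet/ports/1
    mkdir "$sub" "$sub/namespaces/1" "$port"
    echo 1            > "$sub/attr_allow_any_host"
    echo /dev/nvme0n1 > "$sub/namespaces/1/device_path"   # back the namespace with the local disk
    echo 1            > "$sub/namespaces/1/enable"
    echo 10.0.0.1     > "$port/addr_traddr"
    echo tcp          > "$port/addr_trtype"
    echo 4420         > "$port/addr_trsvcid"
    echo ipv4         > "$port/addr_adrfam"
    ln -s "$sub" "$port/subsystems/"                      # expose the subsystem on the port

After this, the nvme discover output further down should list two records: the discovery subsystem itself and nqn.2016-06.io.spdk:testnqn.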
************************************ 00:36:14.462 START TEST kernel_target_abort 00:36:14.462 ************************************ 00:36:14.462 08:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:36:14.462 08:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:36:14.462 08:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:36:14.462 08:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:14.462 08:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:14.462 08:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:14.462 08:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:14.462 08:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:14.462 08:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:14.462 08:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:14.462 08:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:14.462 08:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:14.462 08:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:36:14.462 08:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:36:14.462 08:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:36:14.462 08:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:14.462 08:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:14.462 08:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:36:14.462 08:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:36:14.462 08:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:36:14.462 08:23:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:36:14.462 08:23:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:36:14.462 08:23:06 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:15.397 Waiting for block devices as requested 00:36:15.397 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:36:15.655 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:15.655 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:15.655 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:15.655 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:15.913 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:15.913 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:15.913 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:15.913 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:16.173 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:16.173 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:16.173 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:16.173 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:16.431 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:16.431 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:16.431 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:16.431 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:16.705 08:23:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:36:16.705 08:23:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:36:16.705 08:23:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:36:16.705 08:23:08 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:36:16.705 08:23:08 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:36:16.705 08:23:08 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:36:16.705 08:23:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:36:16.705 08:23:08 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:36:16.705 08:23:08 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:36:16.705 No valid GPT data, bailing 00:36:16.705 08:23:08 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:36:16.705 08:23:08 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:36:16.705 08:23:08 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:36:16.705 08:23:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:36:16.705 08:23:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:36:16.705 08:23:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:16.705 08:23:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:16.705 08:23:08 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:36:16.705 08:23:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:36:16.705 08:23:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:36:16.705 08:23:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:36:16.705 08:23:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:36:16.705 08:23:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:36:16.705 08:23:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:36:16.705 08:23:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:36:16.705 08:23:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:36:16.705 08:23:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:36:16.705 08:23:08 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:36:16.963 00:36:16.963 Discovery Log Number of Records 2, Generation counter 2 00:36:16.963 =====Discovery Log Entry 0====== 00:36:16.963 trtype: tcp 00:36:16.963 adrfam: ipv4 00:36:16.963 subtype: current discovery subsystem 00:36:16.963 treq: not specified, sq flow control disable supported 00:36:16.963 portid: 1 00:36:16.963 trsvcid: 4420 00:36:16.963 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:36:16.963 traddr: 10.0.0.1 00:36:16.963 eflags: none 00:36:16.963 sectype: none 00:36:16.963 =====Discovery Log Entry 1====== 00:36:16.963 trtype: tcp 00:36:16.963 adrfam: ipv4 00:36:16.963 subtype: nvme subsystem 00:36:16.963 treq: not specified, sq flow control disable supported 00:36:16.963 portid: 1 00:36:16.963 trsvcid: 4420 00:36:16.963 subnqn: nqn.2016-06.io.spdk:testnqn 00:36:16.963 traddr: 10.0.0.1 00:36:16.963 eflags: none 00:36:16.963 sectype: none 00:36:16.963 08:23:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:36:16.963 08:23:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:36:16.963 08:23:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:36:16.963 08:23:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:36:16.963 08:23:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:36:16.963 08:23:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:36:16.963 08:23:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:36:16.963 08:23:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:36:16.963 08:23:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:36:16.963 08:23:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:16.963 08:23:08 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:36:16.963 08:23:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:16.963 08:23:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:36:16.963 08:23:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:16.963 08:23:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:36:16.963 08:23:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:16.963 08:23:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:36:16.963 08:23:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:16.963 08:23:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:16.963 08:23:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:16.963 08:23:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:16.963 EAL: No free 2048 kB hugepages reported on node 1 00:36:20.254 Initializing NVMe Controllers 00:36:20.254 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:20.254 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:20.254 Initialization complete. Launching workers. 00:36:20.254 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 32487, failed: 0 00:36:20.254 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 32487, failed to submit 0 00:36:20.254 success 0, unsuccess 32487, failed 0 00:36:20.254 08:23:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:20.254 08:23:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:20.254 EAL: No free 2048 kB hugepages reported on node 1 00:36:23.532 Initializing NVMe Controllers 00:36:23.532 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:23.532 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:23.532 Initialization complete. Launching workers. 
00:36:23.532 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 65831, failed: 0 00:36:23.532 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 16610, failed to submit 49221 00:36:23.532 success 0, unsuccess 16610, failed 0 00:36:23.532 08:23:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:23.532 08:23:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:23.532 EAL: No free 2048 kB hugepages reported on node 1 00:36:26.813 Initializing NVMe Controllers 00:36:26.813 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:26.813 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:26.813 Initialization complete. Launching workers. 00:36:26.813 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 62815, failed: 0 00:36:26.813 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 15698, failed to submit 47117 00:36:26.813 success 0, unsuccess 15698, failed 0 00:36:26.813 08:23:17 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:36:26.813 08:23:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:36:26.813 08:23:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:36:26.813 08:23:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:26.813 08:23:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:26.813 08:23:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:36:26.813 08:23:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:26.813 08:23:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:36:26.813 08:23:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:36:26.813 08:23:17 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:27.379 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:36:27.380 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:36:27.380 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:36:27.380 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:36:27.380 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:36:27.380 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:36:27.380 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:36:27.380 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:36:27.380 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:36:27.380 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:36:27.380 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:36:27.380 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:36:27.380 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:36:27.380 0000:80:04.2 (8086 0e22): ioatdma -> 
vfio-pci 00:36:27.636 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:36:27.636 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:36:28.570 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:36:28.570 00:36:28.570 real 0m14.166s 00:36:28.570 user 0m5.242s 00:36:28.570 sys 0m3.261s 00:36:28.570 08:23:20 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:28.570 08:23:20 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:28.570 ************************************ 00:36:28.570 END TEST kernel_target_abort 00:36:28.570 ************************************ 00:36:28.570 08:23:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:36:28.570 08:23:20 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:36:28.570 08:23:20 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:36:28.570 08:23:20 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:36:28.570 08:23:20 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:36:28.570 08:23:20 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:36:28.570 08:23:20 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:36:28.570 08:23:20 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:36:28.570 08:23:20 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:36:28.570 rmmod nvme_tcp 00:36:28.570 rmmod nvme_fabrics 00:36:28.570 rmmod nvme_keyring 00:36:28.570 08:23:20 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:36:28.570 08:23:20 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:36:28.570 08:23:20 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:36:28.570 08:23:20 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 2134972 ']' 00:36:28.570 08:23:20 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 2134972 00:36:28.570 08:23:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 2134972 ']' 00:36:28.570 08:23:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 2134972 00:36:28.570 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (2134972) - No such process 00:36:28.570 08:23:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 2134972 is not found' 00:36:28.570 Process with pid 2134972 is not found 00:36:28.570 08:23:20 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:36:28.570 08:23:20 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:29.946 Waiting for block devices as requested 00:36:29.946 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:36:29.946 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:29.946 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:29.946 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:30.204 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:30.204 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:30.204 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:30.204 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:30.463 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:30.463 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:30.463 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:30.463 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:30.721 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:30.721 0000:80:04.3 (8086 0e23): vfio-pci -> 
ioatdma 00:36:30.721 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:30.721 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:30.980 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:30.980 08:23:22 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:36:30.980 08:23:22 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:36:30.980 08:23:22 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:36:30.980 08:23:22 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:36:30.980 08:23:22 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:30.980 08:23:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:30.980 08:23:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:33.509 08:23:24 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:36:33.509 00:36:33.509 real 0m37.545s 00:36:33.509 user 1m0.673s 00:36:33.509 sys 0m9.125s 00:36:33.509 08:23:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:33.509 08:23:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:33.509 ************************************ 00:36:33.509 END TEST nvmf_abort_qd_sizes 00:36:33.509 ************************************ 00:36:33.509 08:23:24 -- common/autotest_common.sh@1142 -- # return 0 00:36:33.509 08:23:24 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:36:33.509 08:23:24 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:36:33.509 08:23:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:33.509 08:23:24 -- common/autotest_common.sh@10 -- # set +x 00:36:33.509 ************************************ 00:36:33.509 START TEST keyring_file 00:36:33.509 ************************************ 00:36:33.509 08:23:24 keyring_file -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:36:33.509 * Looking for test storage... 
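The clean_kernel_target run logged above (nvmf/common.sh@684-695) tears the kernel nvmet target down over configfs before setup.sh rebinds the devices. A minimal sketch of that sequence; the redirection target of the bare 'echo 0' is an assumption, since the xtrace only shows the command word:

    nqn=nqn.2016-06.io.spdk:testnqn
    echo 0 > /sys/kernel/config/nvmet/subsystems/$nqn/namespaces/1/enable   # assumed target of the 'echo 0' above
    rm -f /sys/kernel/config/nvmet/ports/1/subsystems/$nqn                  # unlink the port from the subsystem first
    rmdir /sys/kernel/config/nvmet/subsystems/$nqn/namespaces/1             # then remove the namespace,
    rmdir /sys/kernel/config/nvmet/ports/1                                  # the port,
    rmdir /sys/kernel/config/nvmet/subsystems/$nqn                          # and the subsystem itself
    modprobe -r nvmet_tcp nvmet                                             # finally unload the kernel target modules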
00:36:33.509 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:36:33.509 08:23:24 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:36:33.509 08:23:24 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:33.509 08:23:24 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:36:33.509 08:23:24 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:33.509 08:23:24 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:33.509 08:23:24 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:33.509 08:23:24 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:33.509 08:23:24 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:33.509 08:23:24 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:33.509 08:23:24 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:33.509 08:23:24 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:33.509 08:23:24 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:33.509 08:23:24 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:33.509 08:23:24 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:36:33.509 08:23:24 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:36:33.509 08:23:24 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:33.509 08:23:24 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:33.509 08:23:24 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:33.509 08:23:24 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:33.509 08:23:24 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:33.509 08:23:24 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:33.509 08:23:24 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:33.509 08:23:24 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:33.509 08:23:24 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:33.509 08:23:24 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:33.509 08:23:24 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:33.509 08:23:24 keyring_file -- paths/export.sh@5 -- # export PATH 00:36:33.510 08:23:24 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:33.510 08:23:24 keyring_file -- nvmf/common.sh@47 -- # : 0 00:36:33.510 08:23:24 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:33.510 08:23:24 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:33.510 08:23:24 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:33.510 08:23:24 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:33.510 08:23:24 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:33.510 08:23:24 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:33.510 08:23:24 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:33.510 08:23:24 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:33.510 08:23:24 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:36:33.510 08:23:24 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:36:33.510 08:23:24 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:36:33.510 08:23:24 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:36:33.510 08:23:24 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:36:33.510 08:23:24 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:36:33.510 08:23:24 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:36:33.510 08:23:24 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:36:33.510 08:23:24 keyring_file -- keyring/common.sh@17 -- # name=key0 00:36:33.510 08:23:24 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:36:33.510 08:23:24 keyring_file -- keyring/common.sh@17 -- # digest=0 00:36:33.510 08:23:24 keyring_file -- keyring/common.sh@18 -- # mktemp 00:36:33.510 08:23:24 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.IZrydcwpgi 00:36:33.510 08:23:24 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:36:33.510 08:23:24 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:36:33.510 08:23:24 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:36:33.510 08:23:24 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:36:33.510 08:23:24 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:36:33.510 08:23:24 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:36:33.510 08:23:24 keyring_file -- nvmf/common.sh@705 -- # python - 00:36:33.510 08:23:24 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.IZrydcwpgi 00:36:33.510 08:23:24 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.IZrydcwpgi 00:36:33.510 08:23:24 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.IZrydcwpgi 00:36:33.510 08:23:24 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:36:33.510 08:23:24 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:36:33.510 08:23:24 keyring_file -- keyring/common.sh@17 -- # name=key1 00:36:33.510 08:23:24 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:36:33.510 08:23:24 keyring_file -- keyring/common.sh@17 -- # digest=0 00:36:33.510 08:23:24 keyring_file -- keyring/common.sh@18 -- # mktemp 00:36:33.510 08:23:24 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.msT3ctUPsM 00:36:33.510 08:23:24 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:36:33.510 08:23:24 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:36:33.510 08:23:24 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:36:33.510 08:23:24 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:36:33.510 08:23:24 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:36:33.510 08:23:24 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:36:33.510 08:23:24 keyring_file -- nvmf/common.sh@705 -- # python - 00:36:33.510 08:23:24 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.msT3ctUPsM 00:36:33.510 08:23:24 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.msT3ctUPsM 00:36:33.510 08:23:24 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.msT3ctUPsM 00:36:33.510 08:23:24 keyring_file -- keyring/file.sh@30 -- # tgtpid=2140720 00:36:33.510 08:23:24 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:36:33.510 08:23:24 keyring_file -- keyring/file.sh@32 -- # waitforlisten 2140720 00:36:33.510 08:23:24 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 2140720 ']' 00:36:33.510 08:23:24 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:33.510 08:23:24 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:33.510 08:23:24 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:33.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:33.510 08:23:24 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:33.510 08:23:24 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:33.510 [2024-07-13 08:23:24.870517] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
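prep_key, exercised twice above for key0 and key1, reduces to three steps: mktemp a path, write the key in the NVMe TLS PSK interchange format (the inline 'python -' call), and chmod it 0600. A sketch of the encoding step, assuming format_key appends a little-endian CRC-32 to the raw key bytes and base64-encodes the result, with '00' marking digest 0 (no hash):

    key=00112233445566778899aabbccddeeff
    path=$(mktemp)    # e.g. /tmp/tmp.IZrydcwpgi above
    python3 -c 'import base64,sys,zlib; k=bytes.fromhex(sys.argv[1]); print("NVMeTLSkey-1:00:%s:" % base64.b64encode(k + zlib.crc32(k).to_bytes(4, "little")).decode())' "$key" > "$path"
    chmod 0600 "$path"    # bperf rejects looser modes, see the 0660 failure further down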
00:36:33.510 [2024-07-13 08:23:24.870606] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2140720 ] 00:36:33.510 EAL: No free 2048 kB hugepages reported on node 1 00:36:33.510 [2024-07-13 08:23:24.929340] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:33.510 [2024-07-13 08:23:25.014607] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:33.768 08:23:25 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:33.768 08:23:25 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:36:33.768 08:23:25 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:36:33.768 08:23:25 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:33.768 08:23:25 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:33.768 [2024-07-13 08:23:25.268215] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:33.768 null0 00:36:33.768 [2024-07-13 08:23:25.300257] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:36:33.768 [2024-07-13 08:23:25.300746] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:36:33.768 [2024-07-13 08:23:25.308295] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:36:33.768 08:23:25 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:33.768 08:23:25 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:36:33.768 08:23:25 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:36:33.768 08:23:25 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:36:33.769 08:23:25 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:36:33.769 08:23:25 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:33.769 08:23:25 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:36:33.769 08:23:25 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:33.769 08:23:25 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:36:33.769 08:23:25 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:33.769 08:23:25 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:33.769 [2024-07-13 08:23:25.320290] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:36:33.769 request: 00:36:33.769 { 00:36:33.769 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:36:33.769 "secure_channel": false, 00:36:33.769 "listen_address": { 00:36:33.769 "trtype": "tcp", 00:36:33.769 "traddr": "127.0.0.1", 00:36:33.769 "trsvcid": "4420" 00:36:33.769 }, 00:36:33.769 "method": "nvmf_subsystem_add_listener", 00:36:33.769 "req_id": 1 00:36:33.769 } 00:36:33.769 Got JSON-RPC error response 00:36:33.769 response: 00:36:33.769 { 00:36:33.769 "code": -32602, 00:36:33.769 "message": "Invalid parameters" 00:36:33.769 } 00:36:33.769 08:23:25 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:36:33.769 08:23:25 keyring_file -- common/autotest_common.sh@651 -- # es=1 
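The NOT wrapper around the duplicate-listener call above inverts the usual check: es=1 from the failed rpc_cmd is what lets the test proceed. Stripped of the valid_exec_arg plumbing, the assertion amounts to something like this (rpc.py path shortened; a sketch, not the helper's actual body):

    if rpc.py nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 \
            nqn.2016-06.io.spdk:cnode0; then
        echo 'duplicate listener was accepted unexpectedly' >&2
        exit 1    # NOT passes only when the wrapped command fails
    fi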
00:36:33.769 08:23:25 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:36:33.769 08:23:25 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:36:33.769 08:23:25 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:36:33.769 08:23:25 keyring_file -- keyring/file.sh@46 -- # bperfpid=2140730 00:36:33.769 08:23:25 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:36:33.769 08:23:25 keyring_file -- keyring/file.sh@48 -- # waitforlisten 2140730 /var/tmp/bperf.sock 00:36:33.769 08:23:25 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 2140730 ']' 00:36:33.769 08:23:25 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:33.769 08:23:25 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:33.769 08:23:25 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:33.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:33.769 08:23:25 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:33.769 08:23:25 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:33.769 [2024-07-13 08:23:25.368048] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 00:36:33.769 [2024-07-13 08:23:25.368127] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2140730 ] 00:36:33.769 EAL: No free 2048 kB hugepages reported on node 1 00:36:33.769 [2024-07-13 08:23:25.428508] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:34.027 [2024-07-13 08:23:25.520751] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:36:34.027 08:23:25 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:34.027 08:23:25 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:36:34.027 08:23:25 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.IZrydcwpgi 00:36:34.027 08:23:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.IZrydcwpgi 00:36:34.285 08:23:25 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.msT3ctUPsM 00:36:34.285 08:23:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.msT3ctUPsM 00:36:34.543 08:23:26 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:36:34.543 08:23:26 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:36:34.543 08:23:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:34.543 08:23:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:34.543 08:23:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:34.800 08:23:26 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.IZrydcwpgi == \/\t\m\p\/\t\m\p\.\I\Z\r\y\d\c\w\p\g\i ]] 00:36:34.800 08:23:26 keyring_file -- 
keyring/file.sh@52 -- # get_key key1 00:36:34.800 08:23:26 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:36:34.800 08:23:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:34.800 08:23:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:34.800 08:23:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:35.058 08:23:26 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.msT3ctUPsM == \/\t\m\p\/\t\m\p\.\m\s\T\3\c\t\U\P\s\M ]] 00:36:35.058 08:23:26 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:36:35.058 08:23:26 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:35.058 08:23:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:35.058 08:23:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:35.058 08:23:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:35.058 08:23:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:35.316 08:23:26 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:36:35.316 08:23:26 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:36:35.316 08:23:26 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:35.316 08:23:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:35.316 08:23:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:35.316 08:23:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:35.316 08:23:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:35.574 08:23:27 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:36:35.574 08:23:27 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:35.574 08:23:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:35.832 [2024-07-13 08:23:27.372452] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:36:35.832 nvme0n1 00:36:35.832 08:23:27 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:36:35.832 08:23:27 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:35.832 08:23:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:35.832 08:23:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:35.832 08:23:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:35.832 08:23:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:36.093 08:23:27 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:36:36.093 08:23:27 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:36:36.093 08:23:27 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:36.093 08:23:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:36.093 08:23:27 keyring_file -- 
keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:36.093 08:23:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:36.093 08:23:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:36.367 08:23:27 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:36:36.367 08:23:27 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:36.367 Running I/O for 1 seconds... 00:36:37.740 00:36:37.740 Latency(us) 00:36:37.740 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:37.740 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:36:37.740 nvme0n1 : 1.03 4572.24 17.86 0.00 0.00 27656.21 4029.25 31068.92 00:36:37.740 =================================================================================================================== 00:36:37.740 Total : 4572.24 17.86 0.00 0.00 27656.21 4029.25 31068.92 00:36:37.740 0 00:36:37.740 08:23:29 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:36:37.740 08:23:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:36:37.740 08:23:29 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:36:37.740 08:23:29 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:37.740 08:23:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:37.740 08:23:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:37.740 08:23:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:37.740 08:23:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:37.997 08:23:29 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:36:37.997 08:23:29 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:36:37.997 08:23:29 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:37.997 08:23:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:37.997 08:23:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:37.997 08:23:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:37.997 08:23:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:38.254 08:23:29 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:36:38.254 08:23:29 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:38.254 08:23:29 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:36:38.254 08:23:29 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:38.254 08:23:29 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:36:38.254 08:23:29 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:38.254 08:23:29 keyring_file -- 
common/autotest_common.sh@640 -- # type -t bperf_cmd 00:36:38.254 08:23:29 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:38.254 08:23:29 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:38.254 08:23:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:38.511 [2024-07-13 08:23:30.099441] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:36:38.511 [2024-07-13 08:23:30.099628] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19bd710 (107): Transport endpoint is not connected 00:36:38.511 [2024-07-13 08:23:30.100618] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19bd710 (9): Bad file descriptor 00:36:38.511 [2024-07-13 08:23:30.101616] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:36:38.511 [2024-07-13 08:23:30.101638] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:36:38.511 [2024-07-13 08:23:30.101654] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:36:38.511 request: 00:36:38.511 { 00:36:38.511 "name": "nvme0", 00:36:38.511 "trtype": "tcp", 00:36:38.511 "traddr": "127.0.0.1", 00:36:38.511 "adrfam": "ipv4", 00:36:38.511 "trsvcid": "4420", 00:36:38.511 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:38.511 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:38.511 "prchk_reftag": false, 00:36:38.511 "prchk_guard": false, 00:36:38.511 "hdgst": false, 00:36:38.511 "ddgst": false, 00:36:38.511 "psk": "key1", 00:36:38.511 "method": "bdev_nvme_attach_controller", 00:36:38.511 "req_id": 1 00:36:38.511 } 00:36:38.511 Got JSON-RPC error response 00:36:38.511 response: 00:36:38.511 { 00:36:38.511 "code": -5, 00:36:38.511 "message": "Input/output error" 00:36:38.511 } 00:36:38.511 08:23:30 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:36:38.511 08:23:30 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:36:38.511 08:23:30 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:36:38.511 08:23:30 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:36:38.511 08:23:30 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:36:38.511 08:23:30 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:38.511 08:23:30 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:38.511 08:23:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:38.511 08:23:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:38.511 08:23:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:38.768 08:23:30 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:36:38.768 08:23:30 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:36:38.768 08:23:30 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:38.768 08:23:30 
keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:38.768 08:23:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:38.768 08:23:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:38.768 08:23:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:39.025 08:23:30 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:36:39.025 08:23:30 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:36:39.025 08:23:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:39.282 08:23:30 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:36:39.282 08:23:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:36:39.540 08:23:31 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:36:39.540 08:23:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:39.540 08:23:31 keyring_file -- keyring/file.sh@77 -- # jq length 00:36:39.798 08:23:31 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:36:39.798 08:23:31 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.IZrydcwpgi 00:36:39.798 08:23:31 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.IZrydcwpgi 00:36:39.798 08:23:31 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:36:39.798 08:23:31 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.IZrydcwpgi 00:36:39.798 08:23:31 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:36:39.798 08:23:31 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:39.798 08:23:31 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:36:39.798 08:23:31 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:39.798 08:23:31 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.IZrydcwpgi 00:36:39.798 08:23:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.IZrydcwpgi 00:36:40.056 [2024-07-13 08:23:31.610457] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.IZrydcwpgi': 0100660 00:36:40.056 [2024-07-13 08:23:31.610494] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:36:40.056 request: 00:36:40.056 { 00:36:40.056 "name": "key0", 00:36:40.056 "path": "/tmp/tmp.IZrydcwpgi", 00:36:40.056 "method": "keyring_file_add_key", 00:36:40.056 "req_id": 1 00:36:40.056 } 00:36:40.056 Got JSON-RPC error response 00:36:40.056 response: 00:36:40.056 { 00:36:40.056 "code": -1, 00:36:40.056 "message": "Operation not permitted" 00:36:40.056 } 00:36:40.056 08:23:31 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:36:40.056 08:23:31 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:36:40.056 08:23:31 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:36:40.056 08:23:31 keyring_file -- 
common/autotest_common.sh@675 -- # (( !es == 0 )) 00:36:40.056 08:23:31 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.IZrydcwpgi 00:36:40.056 08:23:31 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.IZrydcwpgi 00:36:40.056 08:23:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.IZrydcwpgi 00:36:40.314 08:23:31 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.IZrydcwpgi 00:36:40.314 08:23:31 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:36:40.314 08:23:31 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:40.314 08:23:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:40.314 08:23:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:40.314 08:23:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:40.314 08:23:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:40.572 08:23:32 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:36:40.572 08:23:32 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:40.572 08:23:32 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:36:40.572 08:23:32 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:40.572 08:23:32 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:36:40.572 08:23:32 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:40.572 08:23:32 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:36:40.572 08:23:32 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:40.572 08:23:32 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:40.572 08:23:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:40.829 [2024-07-13 08:23:32.356529] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.IZrydcwpgi': No such file or directory 00:36:40.829 [2024-07-13 08:23:32.356566] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:36:40.829 [2024-07-13 08:23:32.356598] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:36:40.829 [2024-07-13 08:23:32.356612] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:36:40.829 [2024-07-13 08:23:32.356626] bdev_nvme.c:6268:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:36:40.829 request: 00:36:40.829 { 00:36:40.829 "name": "nvme0", 00:36:40.829 "trtype": "tcp", 00:36:40.829 "traddr": "127.0.0.1", 00:36:40.829 "adrfam": "ipv4", 00:36:40.829 
"trsvcid": "4420", 00:36:40.829 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:40.829 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:40.829 "prchk_reftag": false, 00:36:40.829 "prchk_guard": false, 00:36:40.829 "hdgst": false, 00:36:40.829 "ddgst": false, 00:36:40.829 "psk": "key0", 00:36:40.829 "method": "bdev_nvme_attach_controller", 00:36:40.829 "req_id": 1 00:36:40.829 } 00:36:40.829 Got JSON-RPC error response 00:36:40.829 response: 00:36:40.829 { 00:36:40.829 "code": -19, 00:36:40.829 "message": "No such device" 00:36:40.829 } 00:36:40.829 08:23:32 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:36:40.829 08:23:32 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:36:40.829 08:23:32 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:36:40.829 08:23:32 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:36:40.829 08:23:32 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:36:40.829 08:23:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:41.086 08:23:32 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:36:41.086 08:23:32 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:36:41.086 08:23:32 keyring_file -- keyring/common.sh@17 -- # name=key0 00:36:41.086 08:23:32 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:36:41.086 08:23:32 keyring_file -- keyring/common.sh@17 -- # digest=0 00:36:41.086 08:23:32 keyring_file -- keyring/common.sh@18 -- # mktemp 00:36:41.086 08:23:32 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.4IN1GoQSjl 00:36:41.086 08:23:32 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:36:41.086 08:23:32 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:36:41.086 08:23:32 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:36:41.086 08:23:32 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:36:41.086 08:23:32 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:36:41.086 08:23:32 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:36:41.086 08:23:32 keyring_file -- nvmf/common.sh@705 -- # python - 00:36:41.086 08:23:32 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.4IN1GoQSjl 00:36:41.086 08:23:32 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.4IN1GoQSjl 00:36:41.086 08:23:32 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.4IN1GoQSjl 00:36:41.086 08:23:32 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.4IN1GoQSjl 00:36:41.086 08:23:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.4IN1GoQSjl 00:36:41.343 08:23:32 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:41.343 08:23:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:41.599 nvme0n1 00:36:41.599 
08:23:33 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:36:41.599 08:23:33 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:41.599 08:23:33 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:41.599 08:23:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:41.599 08:23:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:41.599 08:23:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:41.857 08:23:33 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:36:41.857 08:23:33 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:36:41.857 08:23:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:42.115 08:23:33 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:36:42.115 08:23:33 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:36:42.115 08:23:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:42.115 08:23:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:42.115 08:23:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:42.373 08:23:34 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:36:42.373 08:23:34 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:36:42.373 08:23:34 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:42.373 08:23:34 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:42.373 08:23:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:42.373 08:23:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:42.373 08:23:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:42.631 08:23:34 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:36:42.631 08:23:34 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:36:42.631 08:23:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:36:42.889 08:23:34 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:36:42.889 08:23:34 keyring_file -- keyring/file.sh@104 -- # jq length 00:36:42.889 08:23:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:43.146 08:23:34 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:36:43.146 08:23:34 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.4IN1GoQSjl 00:36:43.146 08:23:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.4IN1GoQSjl 00:36:43.404 08:23:35 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.msT3ctUPsM 00:36:43.404 08:23:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.msT3ctUPsM 00:36:43.662 08:23:35 
keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:43.662 08:23:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:43.920 nvme0n1 00:36:43.920 08:23:35 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:36:43.920 08:23:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:36:44.179 08:23:35 keyring_file -- keyring/file.sh@112 -- # config='{ 00:36:44.179 "subsystems": [ 00:36:44.179 { 00:36:44.179 "subsystem": "keyring", 00:36:44.179 "config": [ 00:36:44.179 { 00:36:44.179 "method": "keyring_file_add_key", 00:36:44.179 "params": { 00:36:44.179 "name": "key0", 00:36:44.179 "path": "/tmp/tmp.4IN1GoQSjl" 00:36:44.179 } 00:36:44.179 }, 00:36:44.179 { 00:36:44.179 "method": "keyring_file_add_key", 00:36:44.179 "params": { 00:36:44.179 "name": "key1", 00:36:44.179 "path": "/tmp/tmp.msT3ctUPsM" 00:36:44.179 } 00:36:44.179 } 00:36:44.179 ] 00:36:44.179 }, 00:36:44.179 { 00:36:44.179 "subsystem": "iobuf", 00:36:44.179 "config": [ 00:36:44.179 { 00:36:44.179 "method": "iobuf_set_options", 00:36:44.179 "params": { 00:36:44.179 "small_pool_count": 8192, 00:36:44.179 "large_pool_count": 1024, 00:36:44.179 "small_bufsize": 8192, 00:36:44.179 "large_bufsize": 135168 00:36:44.179 } 00:36:44.179 } 00:36:44.179 ] 00:36:44.179 }, 00:36:44.179 { 00:36:44.179 "subsystem": "sock", 00:36:44.179 "config": [ 00:36:44.179 { 00:36:44.179 "method": "sock_set_default_impl", 00:36:44.179 "params": { 00:36:44.179 "impl_name": "posix" 00:36:44.179 } 00:36:44.179 }, 00:36:44.179 { 00:36:44.179 "method": "sock_impl_set_options", 00:36:44.179 "params": { 00:36:44.179 "impl_name": "ssl", 00:36:44.179 "recv_buf_size": 4096, 00:36:44.179 "send_buf_size": 4096, 00:36:44.179 "enable_recv_pipe": true, 00:36:44.179 "enable_quickack": false, 00:36:44.179 "enable_placement_id": 0, 00:36:44.179 "enable_zerocopy_send_server": true, 00:36:44.179 "enable_zerocopy_send_client": false, 00:36:44.179 "zerocopy_threshold": 0, 00:36:44.179 "tls_version": 0, 00:36:44.179 "enable_ktls": false 00:36:44.179 } 00:36:44.179 }, 00:36:44.179 { 00:36:44.179 "method": "sock_impl_set_options", 00:36:44.179 "params": { 00:36:44.179 "impl_name": "posix", 00:36:44.179 "recv_buf_size": 2097152, 00:36:44.179 "send_buf_size": 2097152, 00:36:44.179 "enable_recv_pipe": true, 00:36:44.179 "enable_quickack": false, 00:36:44.179 "enable_placement_id": 0, 00:36:44.179 "enable_zerocopy_send_server": true, 00:36:44.179 "enable_zerocopy_send_client": false, 00:36:44.179 "zerocopy_threshold": 0, 00:36:44.179 "tls_version": 0, 00:36:44.179 "enable_ktls": false 00:36:44.179 } 00:36:44.179 } 00:36:44.179 ] 00:36:44.179 }, 00:36:44.179 { 00:36:44.179 "subsystem": "vmd", 00:36:44.179 "config": [] 00:36:44.179 }, 00:36:44.179 { 00:36:44.179 "subsystem": "accel", 00:36:44.179 "config": [ 00:36:44.179 { 00:36:44.179 "method": "accel_set_options", 00:36:44.179 "params": { 00:36:44.179 "small_cache_size": 128, 00:36:44.179 "large_cache_size": 16, 00:36:44.179 "task_count": 2048, 00:36:44.179 "sequence_count": 2048, 00:36:44.179 "buf_count": 2048 00:36:44.179 } 00:36:44.179 } 00:36:44.179 ] 00:36:44.179 
}, 00:36:44.179 { 00:36:44.179 "subsystem": "bdev", 00:36:44.179 "config": [ 00:36:44.179 { 00:36:44.179 "method": "bdev_set_options", 00:36:44.179 "params": { 00:36:44.179 "bdev_io_pool_size": 65535, 00:36:44.179 "bdev_io_cache_size": 256, 00:36:44.179 "bdev_auto_examine": true, 00:36:44.179 "iobuf_small_cache_size": 128, 00:36:44.179 "iobuf_large_cache_size": 16 00:36:44.179 } 00:36:44.179 }, 00:36:44.179 { 00:36:44.179 "method": "bdev_raid_set_options", 00:36:44.179 "params": { 00:36:44.179 "process_window_size_kb": 1024 00:36:44.179 } 00:36:44.179 }, 00:36:44.179 { 00:36:44.179 "method": "bdev_iscsi_set_options", 00:36:44.179 "params": { 00:36:44.179 "timeout_sec": 30 00:36:44.179 } 00:36:44.179 }, 00:36:44.179 { 00:36:44.179 "method": "bdev_nvme_set_options", 00:36:44.179 "params": { 00:36:44.179 "action_on_timeout": "none", 00:36:44.179 "timeout_us": 0, 00:36:44.179 "timeout_admin_us": 0, 00:36:44.179 "keep_alive_timeout_ms": 10000, 00:36:44.179 "arbitration_burst": 0, 00:36:44.179 "low_priority_weight": 0, 00:36:44.179 "medium_priority_weight": 0, 00:36:44.179 "high_priority_weight": 0, 00:36:44.179 "nvme_adminq_poll_period_us": 10000, 00:36:44.179 "nvme_ioq_poll_period_us": 0, 00:36:44.179 "io_queue_requests": 512, 00:36:44.179 "delay_cmd_submit": true, 00:36:44.179 "transport_retry_count": 4, 00:36:44.179 "bdev_retry_count": 3, 00:36:44.179 "transport_ack_timeout": 0, 00:36:44.179 "ctrlr_loss_timeout_sec": 0, 00:36:44.179 "reconnect_delay_sec": 0, 00:36:44.179 "fast_io_fail_timeout_sec": 0, 00:36:44.179 "disable_auto_failback": false, 00:36:44.179 "generate_uuids": false, 00:36:44.179 "transport_tos": 0, 00:36:44.179 "nvme_error_stat": false, 00:36:44.179 "rdma_srq_size": 0, 00:36:44.179 "io_path_stat": false, 00:36:44.179 "allow_accel_sequence": false, 00:36:44.179 "rdma_max_cq_size": 0, 00:36:44.179 "rdma_cm_event_timeout_ms": 0, 00:36:44.179 "dhchap_digests": [ 00:36:44.179 "sha256", 00:36:44.179 "sha384", 00:36:44.179 "sha512" 00:36:44.179 ], 00:36:44.179 "dhchap_dhgroups": [ 00:36:44.179 "null", 00:36:44.179 "ffdhe2048", 00:36:44.179 "ffdhe3072", 00:36:44.179 "ffdhe4096", 00:36:44.179 "ffdhe6144", 00:36:44.179 "ffdhe8192" 00:36:44.179 ] 00:36:44.179 } 00:36:44.180 }, 00:36:44.180 { 00:36:44.180 "method": "bdev_nvme_attach_controller", 00:36:44.180 "params": { 00:36:44.180 "name": "nvme0", 00:36:44.180 "trtype": "TCP", 00:36:44.180 "adrfam": "IPv4", 00:36:44.180 "traddr": "127.0.0.1", 00:36:44.180 "trsvcid": "4420", 00:36:44.180 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:44.180 "prchk_reftag": false, 00:36:44.180 "prchk_guard": false, 00:36:44.180 "ctrlr_loss_timeout_sec": 0, 00:36:44.180 "reconnect_delay_sec": 0, 00:36:44.180 "fast_io_fail_timeout_sec": 0, 00:36:44.180 "psk": "key0", 00:36:44.180 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:44.180 "hdgst": false, 00:36:44.180 "ddgst": false 00:36:44.180 } 00:36:44.180 }, 00:36:44.180 { 00:36:44.180 "method": "bdev_nvme_set_hotplug", 00:36:44.180 "params": { 00:36:44.180 "period_us": 100000, 00:36:44.180 "enable": false 00:36:44.180 } 00:36:44.180 }, 00:36:44.180 { 00:36:44.180 "method": "bdev_wait_for_examine" 00:36:44.180 } 00:36:44.180 ] 00:36:44.180 }, 00:36:44.180 { 00:36:44.180 "subsystem": "nbd", 00:36:44.180 "config": [] 00:36:44.180 } 00:36:44.180 ] 00:36:44.180 }' 00:36:44.180 08:23:35 keyring_file -- keyring/file.sh@114 -- # killprocess 2140730 00:36:44.180 08:23:35 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 2140730 ']' 00:36:44.180 08:23:35 keyring_file -- common/autotest_common.sh@952 -- # kill 
-0 2140730 00:36:44.180 08:23:35 keyring_file -- common/autotest_common.sh@953 -- # uname 00:36:44.180 08:23:35 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:44.180 08:23:35 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2140730 00:36:44.180 08:23:35 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:36:44.180 08:23:35 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:36:44.180 08:23:35 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2140730' 00:36:44.180 killing process with pid 2140730 00:36:44.180 08:23:35 keyring_file -- common/autotest_common.sh@967 -- # kill 2140730 00:36:44.180 Received shutdown signal, test time was about 1.000000 seconds 00:36:44.180 00:36:44.180 Latency(us) 00:36:44.180 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:44.180 =================================================================================================================== 00:36:44.180 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:44.180 08:23:35 keyring_file -- common/autotest_common.sh@972 -- # wait 2140730 00:36:44.439 08:23:36 keyring_file -- keyring/file.sh@117 -- # bperfpid=2142199 00:36:44.439 08:23:36 keyring_file -- keyring/file.sh@119 -- # waitforlisten 2142199 /var/tmp/bperf.sock 00:36:44.439 08:23:36 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 2142199 ']' 00:36:44.439 08:23:36 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:44.439 08:23:36 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:36:44.439 08:23:36 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:44.439 08:23:36 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:44.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
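The bdevperf relaunch that follows replays the JSON captured by save_config: keyring/file.sh echoes the config into a process substitution, which the shell exposes as the /dev/fd/63 seen in the -c argument. Roughly (paths shortened):

    config=$(rpc.py -s /var/tmp/bperf.sock save_config)    # captured from the first bperf before killing it
    ./build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
        -r /var/tmp/bperf.sock -z -c <(echo "$config")     # <(...) is what shows up as /dev/fd/63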
00:36:44.439 08:23:36 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:44.439 08:23:36 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:36:44.439 "subsystems": [ 00:36:44.439 { 00:36:44.439 "subsystem": "keyring", 00:36:44.439 "config": [ 00:36:44.439 { 00:36:44.439 "method": "keyring_file_add_key", 00:36:44.439 "params": { 00:36:44.439 "name": "key0", 00:36:44.439 "path": "/tmp/tmp.4IN1GoQSjl" 00:36:44.439 } 00:36:44.439 }, 00:36:44.439 { 00:36:44.439 "method": "keyring_file_add_key", 00:36:44.439 "params": { 00:36:44.439 "name": "key1", 00:36:44.439 "path": "/tmp/tmp.msT3ctUPsM" 00:36:44.439 } 00:36:44.439 } 00:36:44.439 ] 00:36:44.439 }, 00:36:44.439 { 00:36:44.439 "subsystem": "iobuf", 00:36:44.439 "config": [ 00:36:44.439 { 00:36:44.439 "method": "iobuf_set_options", 00:36:44.439 "params": { 00:36:44.439 "small_pool_count": 8192, 00:36:44.439 "large_pool_count": 1024, 00:36:44.439 "small_bufsize": 8192, 00:36:44.439 "large_bufsize": 135168 00:36:44.439 } 00:36:44.439 } 00:36:44.439 ] 00:36:44.439 }, 00:36:44.439 { 00:36:44.439 "subsystem": "sock", 00:36:44.439 "config": [ 00:36:44.439 { 00:36:44.439 "method": "sock_set_default_impl", 00:36:44.439 "params": { 00:36:44.439 "impl_name": "posix" 00:36:44.439 } 00:36:44.439 }, 00:36:44.439 { 00:36:44.439 "method": "sock_impl_set_options", 00:36:44.439 "params": { 00:36:44.439 "impl_name": "ssl", 00:36:44.439 "recv_buf_size": 4096, 00:36:44.439 "send_buf_size": 4096, 00:36:44.439 "enable_recv_pipe": true, 00:36:44.439 "enable_quickack": false, 00:36:44.439 "enable_placement_id": 0, 00:36:44.439 "enable_zerocopy_send_server": true, 00:36:44.439 "enable_zerocopy_send_client": false, 00:36:44.439 "zerocopy_threshold": 0, 00:36:44.439 "tls_version": 0, 00:36:44.439 "enable_ktls": false 00:36:44.439 } 00:36:44.439 }, 00:36:44.439 { 00:36:44.439 "method": "sock_impl_set_options", 00:36:44.439 "params": { 00:36:44.439 "impl_name": "posix", 00:36:44.439 "recv_buf_size": 2097152, 00:36:44.439 "send_buf_size": 2097152, 00:36:44.439 "enable_recv_pipe": true, 00:36:44.439 "enable_quickack": false, 00:36:44.439 "enable_placement_id": 0, 00:36:44.439 "enable_zerocopy_send_server": true, 00:36:44.439 "enable_zerocopy_send_client": false, 00:36:44.439 "zerocopy_threshold": 0, 00:36:44.439 "tls_version": 0, 00:36:44.439 "enable_ktls": false 00:36:44.439 } 00:36:44.439 } 00:36:44.439 ] 00:36:44.439 }, 00:36:44.439 { 00:36:44.439 "subsystem": "vmd", 00:36:44.439 "config": [] 00:36:44.439 }, 00:36:44.439 { 00:36:44.439 "subsystem": "accel", 00:36:44.439 "config": [ 00:36:44.439 { 00:36:44.439 "method": "accel_set_options", 00:36:44.439 "params": { 00:36:44.439 "small_cache_size": 128, 00:36:44.439 "large_cache_size": 16, 00:36:44.439 "task_count": 2048, 00:36:44.439 "sequence_count": 2048, 00:36:44.439 "buf_count": 2048 00:36:44.439 } 00:36:44.439 } 00:36:44.439 ] 00:36:44.439 }, 00:36:44.439 { 00:36:44.439 "subsystem": "bdev", 00:36:44.439 "config": [ 00:36:44.439 { 00:36:44.439 "method": "bdev_set_options", 00:36:44.439 "params": { 00:36:44.439 "bdev_io_pool_size": 65535, 00:36:44.439 "bdev_io_cache_size": 256, 00:36:44.439 "bdev_auto_examine": true, 00:36:44.439 "iobuf_small_cache_size": 128, 00:36:44.439 "iobuf_large_cache_size": 16 00:36:44.439 } 00:36:44.439 }, 00:36:44.439 { 00:36:44.439 "method": "bdev_raid_set_options", 00:36:44.439 "params": { 00:36:44.439 "process_window_size_kb": 1024 00:36:44.439 } 00:36:44.439 }, 00:36:44.439 { 00:36:44.439 "method": "bdev_iscsi_set_options", 00:36:44.439 "params": { 00:36:44.439 
"timeout_sec": 30 00:36:44.439 } 00:36:44.439 }, 00:36:44.439 { 00:36:44.439 "method": "bdev_nvme_set_options", 00:36:44.439 "params": { 00:36:44.439 "action_on_timeout": "none", 00:36:44.439 "timeout_us": 0, 00:36:44.439 "timeout_admin_us": 0, 00:36:44.439 "keep_alive_timeout_ms": 10000, 00:36:44.439 "arbitration_burst": 0, 00:36:44.439 "low_priority_weight": 0, 00:36:44.439 "medium_priority_weight": 0, 00:36:44.439 "high_priority_weight": 0, 00:36:44.439 "nvme_adminq_poll_period_us": 10000, 00:36:44.439 "nvme_ioq_poll_period_us": 0, 00:36:44.439 "io_queue_requests": 512, 00:36:44.439 "delay_cmd_submit": true, 00:36:44.439 "transport_retry_count": 4, 00:36:44.439 "bdev_retry_count": 3, 00:36:44.439 "transport_ack_timeout": 0, 00:36:44.439 "ctrlr_loss_timeout_sec": 0, 00:36:44.439 "reconnect_delay_sec": 0, 00:36:44.439 "fast_io_fail_timeout_sec": 0, 00:36:44.439 "disable_auto_failback": false, 00:36:44.439 "generate_uuids": false, 00:36:44.439 "transport_tos": 0, 00:36:44.439 "nvme_error_stat": false, 00:36:44.439 "rdma_srq_size": 0, 00:36:44.439 "io_path_stat": false, 00:36:44.439 "allow_accel_sequence": false, 00:36:44.439 08:23:36 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:44.439 "rdma_max_cq_size": 0, 00:36:44.439 "rdma_cm_event_timeout_ms": 0, 00:36:44.439 "dhchap_digests": [ 00:36:44.439 "sha256", 00:36:44.439 "sha384", 00:36:44.439 "sha512" 00:36:44.439 ], 00:36:44.439 "dhchap_dhgroups": [ 00:36:44.439 "null", 00:36:44.439 "ffdhe2048", 00:36:44.439 "ffdhe3072", 00:36:44.439 "ffdhe4096", 00:36:44.439 "ffdhe6144", 00:36:44.439 "ffdhe8192" 00:36:44.439 ] 00:36:44.439 } 00:36:44.439 }, 00:36:44.439 { 00:36:44.439 "method": "bdev_nvme_attach_controller", 00:36:44.439 "params": { 00:36:44.439 "name": "nvme0", 00:36:44.439 "trtype": "TCP", 00:36:44.439 "adrfam": "IPv4", 00:36:44.439 "traddr": "127.0.0.1", 00:36:44.439 "trsvcid": "4420", 00:36:44.439 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:44.439 "prchk_reftag": false, 00:36:44.439 "prchk_guard": false, 00:36:44.439 "ctrlr_loss_timeout_sec": 0, 00:36:44.439 "reconnect_delay_sec": 0, 00:36:44.439 "fast_io_fail_timeout_sec": 0, 00:36:44.439 "psk": "key0", 00:36:44.439 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:44.439 "hdgst": false, 00:36:44.439 "ddgst": false 00:36:44.439 } 00:36:44.439 }, 00:36:44.439 { 00:36:44.439 "method": "bdev_nvme_set_hotplug", 00:36:44.439 "params": { 00:36:44.439 "period_us": 100000, 00:36:44.439 "enable": false 00:36:44.439 } 00:36:44.439 }, 00:36:44.439 { 00:36:44.439 "method": "bdev_wait_for_examine" 00:36:44.439 } 00:36:44.439 ] 00:36:44.439 }, 00:36:44.439 { 00:36:44.439 "subsystem": "nbd", 00:36:44.439 "config": [] 00:36:44.439 } 00:36:44.439 ] 00:36:44.439 }' 00:36:44.439 [2024-07-13 08:23:36.140668] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
00:36:44.439 [2024-07-13 08:23:36.140749] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2142199 ] 00:36:44.439 EAL: No free 2048 kB hugepages reported on node 1 00:36:44.698 [2024-07-13 08:23:36.201132] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:44.698 [2024-07-13 08:23:36.291633] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:36:44.956 [2024-07-13 08:23:36.482702] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:36:45.522 08:23:37 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:45.522 08:23:37 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:36:45.522 08:23:37 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:36:45.522 08:23:37 keyring_file -- keyring/file.sh@120 -- # jq length 00:36:45.522 08:23:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:45.779 08:23:37 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:36:45.780 08:23:37 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:36:45.780 08:23:37 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:45.780 08:23:37 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:45.780 08:23:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:45.780 08:23:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:45.780 08:23:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:46.037 08:23:37 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:36:46.037 08:23:37 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:36:46.037 08:23:37 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:46.038 08:23:37 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:46.038 08:23:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:46.038 08:23:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:46.038 08:23:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:46.295 08:23:37 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:36:46.295 08:23:37 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:36:46.295 08:23:37 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:36:46.295 08:23:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:36:46.553 08:23:38 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:36:46.553 08:23:38 keyring_file -- keyring/file.sh@1 -- # cleanup 00:36:46.553 08:23:38 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.4IN1GoQSjl /tmp/tmp.msT3ctUPsM 00:36:46.553 08:23:38 keyring_file -- keyring/file.sh@20 -- # killprocess 2142199 00:36:46.553 08:23:38 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 2142199 ']' 00:36:46.553 08:23:38 keyring_file -- common/autotest_common.sh@952 -- # kill -0 2142199 00:36:46.553 08:23:38 keyring_file -- 
common/autotest_common.sh@953 -- # uname 00:36:46.553 08:23:38 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:46.553 08:23:38 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2142199 00:36:46.554 08:23:38 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:36:46.554 08:23:38 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:36:46.554 08:23:38 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2142199' 00:36:46.554 killing process with pid 2142199 00:36:46.554 08:23:38 keyring_file -- common/autotest_common.sh@967 -- # kill 2142199 00:36:46.554 Received shutdown signal, test time was about 1.000000 seconds 00:36:46.554 00:36:46.554 Latency(us) 00:36:46.554 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:46.554 =================================================================================================================== 00:36:46.554 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:36:46.554 08:23:38 keyring_file -- common/autotest_common.sh@972 -- # wait 2142199 00:36:46.811 08:23:38 keyring_file -- keyring/file.sh@21 -- # killprocess 2140720 00:36:46.811 08:23:38 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 2140720 ']' 00:36:46.811 08:23:38 keyring_file -- common/autotest_common.sh@952 -- # kill -0 2140720 00:36:46.811 08:23:38 keyring_file -- common/autotest_common.sh@953 -- # uname 00:36:46.811 08:23:38 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:46.811 08:23:38 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2140720 00:36:46.811 08:23:38 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:36:46.811 08:23:38 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:36:46.811 08:23:38 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2140720' 00:36:46.812 killing process with pid 2140720 00:36:46.812 08:23:38 keyring_file -- common/autotest_common.sh@967 -- # kill 2140720 00:36:46.812 [2024-07-13 08:23:38.381349] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:36:46.812 08:23:38 keyring_file -- common/autotest_common.sh@972 -- # wait 2140720 00:36:47.070 00:36:47.070 real 0m14.119s 00:36:47.070 user 0m35.068s 00:36:47.070 sys 0m3.218s 00:36:47.070 08:23:38 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:47.070 08:23:38 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:47.070 ************************************ 00:36:47.070 END TEST keyring_file 00:36:47.070 ************************************ 00:36:47.329 08:23:38 -- common/autotest_common.sh@1142 -- # return 0 00:36:47.329 08:23:38 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:36:47.329 08:23:38 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:36:47.329 08:23:38 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:36:47.329 08:23:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:47.329 08:23:38 -- common/autotest_common.sh@10 -- # set +x 00:36:47.329 ************************************ 00:36:47.329 START TEST keyring_linux 00:36:47.329 ************************************ 00:36:47.329 08:23:38 keyring_linux -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:36:47.329 * Looking for test storage... 00:36:47.329 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:36:47.329 08:23:38 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:36:47.329 08:23:38 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:47.329 08:23:38 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:36:47.329 08:23:38 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:47.329 08:23:38 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:47.329 08:23:38 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:47.329 08:23:38 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:47.329 08:23:38 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:47.329 08:23:38 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:47.329 08:23:38 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:47.329 08:23:38 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:47.329 08:23:38 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:47.329 08:23:38 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:47.329 08:23:38 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:36:47.329 08:23:38 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:36:47.329 08:23:38 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:47.329 08:23:38 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:47.329 08:23:38 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:47.329 08:23:38 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:47.329 08:23:38 keyring_linux -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:47.329 08:23:38 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:47.329 08:23:38 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:47.329 08:23:38 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:47.329 08:23:38 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:47.329 08:23:38 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:47.329 08:23:38 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:47.329 08:23:38 keyring_linux -- paths/export.sh@5 -- # export PATH 00:36:47.329 08:23:38 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:47.329 08:23:38 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:36:47.329 08:23:38 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:47.329 08:23:38 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:47.329 08:23:38 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:47.329 08:23:38 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:47.329 08:23:38 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:47.329 08:23:38 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:47.329 08:23:38 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:47.329 08:23:38 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:47.329 08:23:38 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:36:47.329 08:23:38 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:36:47.329 08:23:38 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:36:47.329 08:23:38 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:36:47.329 08:23:38 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:36:47.329 08:23:38 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:36:47.329 08:23:38 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:36:47.329 08:23:38 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:36:47.329 08:23:38 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:36:47.329 08:23:38 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:36:47.329 08:23:38 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:36:47.329 08:23:38 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:36:47.329 08:23:38 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:36:47.329 08:23:38 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:36:47.329 08:23:38 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:36:47.329 08:23:38 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:36:47.329 08:23:38 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:36:47.329 08:23:38 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:36:47.329 08:23:38 keyring_linux -- nvmf/common.sh@705 -- # python - 00:36:47.329 08:23:38 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:36:47.329 08:23:38 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:36:47.329 /tmp/:spdk-test:key0 00:36:47.329 08:23:38 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:36:47.329 08:23:38 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:36:47.329 08:23:38 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:36:47.329 08:23:38 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:36:47.329 08:23:38 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:36:47.329 08:23:38 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:36:47.329 08:23:38 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:36:47.329 08:23:38 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:36:47.329 08:23:38 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:36:47.329 08:23:38 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:36:47.329 08:23:38 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:36:47.329 08:23:38 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:36:47.329 08:23:38 keyring_linux -- nvmf/common.sh@705 -- # python - 00:36:47.329 08:23:39 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:36:47.329 08:23:39 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:36:47.329 /tmp/:spdk-test:key1 00:36:47.329 08:23:39 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=2142562 00:36:47.329 08:23:39 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:36:47.329 08:23:39 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 2142562 00:36:47.329 08:23:39 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 2142562 ']' 00:36:47.329 08:23:39 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:47.329 08:23:39 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:47.329 08:23:39 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:47.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:47.329 08:23:39 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:47.329 08:23:39 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:47.588 [2024-07-13 08:23:39.063254] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
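Both /tmp/:spdk-test:key0 and /tmp/:spdk-test:key1 are generated above by prep_key, whose format_interchange_psk step wraps the raw hex string in the NVMe TLS PSK interchange format: the NVMeTLSkey-1 prefix, a two-digit hash identifier (00 here, meaning the configured PSK is used without a hash transform), and base64 of the key bytes with their little-endian CRC-32 appended. A sketch of what the inline python at nvmf/common.sh@705 computes, reconstructed from the values visible in the log rather than copied from the script:

python3 - <<'EOF'
import base64, zlib
key = b"00112233445566778899aabbccddeeff"              # key0 from linux.sh@13
crc = zlib.crc32(key).to_bytes(4, byteorder="little")  # appended CRC-32 integrity tag
print("NVMeTLSkey-1:00:%s:" % base64.b64encode(key + crc).decode())
# prints NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:
EOF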
00:36:47.588 [2024-07-13 08:23:39.063345] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2142562 ] 00:36:47.588 EAL: No free 2048 kB hugepages reported on node 1 00:36:47.588 [2024-07-13 08:23:39.130404] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:47.588 [2024-07-13 08:23:39.220188] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:47.847 08:23:39 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:47.847 08:23:39 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:36:47.847 08:23:39 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:36:47.847 08:23:39 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:47.847 08:23:39 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:47.847 [2024-07-13 08:23:39.485926] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:47.847 null0 00:36:47.847 [2024-07-13 08:23:39.517973] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:36:47.847 [2024-07-13 08:23:39.518473] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:36:47.847 08:23:39 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:47.847 08:23:39 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:36:47.847 478485249 00:36:47.847 08:23:39 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:36:47.847 206035062 00:36:47.847 08:23:39 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=2142691 00:36:47.847 08:23:39 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 2142691 /var/tmp/bperf.sock 00:36:47.847 08:23:39 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:36:47.847 08:23:39 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 2142691 ']' 00:36:47.847 08:23:39 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:47.847 08:23:39 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:47.847 08:23:39 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:47.847 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:47.847 08:23:39 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:47.847 08:23:39 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:48.105 [2024-07-13 08:23:39.585331] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 22.11.4 initialization... 
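Unlike the keyring_file flow, these keys live in the kernel session keyring rather than in files: linux.sh@66-67 above seed them with keyctl, and the serial numbers echoed back (478485249 and 206035062) are exactly what the later check_keys assertions match against. The same seeding, condensed (serial numbers are kernel-assigned and will differ between runs):

keyctl add user :spdk-test:key0 \
    "NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:" @s
# prints the new key's serial, e.g. 478485249 in this run
keyctl search @s user :spdk-test:key0   # resolves the name back to that serial
keyctl print 478485249                  # dumps the payload for the [[ ... == ... ]] check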
00:36:48.105 [2024-07-13 08:23:39.585404] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2142691 ] 00:36:48.105 EAL: No free 2048 kB hugepages reported on node 1 00:36:48.105 [2024-07-13 08:23:39.642397] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:48.105 [2024-07-13 08:23:39.727206] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:36:48.105 08:23:39 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:48.105 08:23:39 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:36:48.105 08:23:39 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:36:48.105 08:23:39 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:36:48.362 08:23:40 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:36:48.362 08:23:40 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:36:48.927 08:23:40 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:36:48.927 08:23:40 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:36:48.927 [2024-07-13 08:23:40.617621] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:36:49.185 nvme0n1 00:36:49.185 08:23:40 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:36:49.185 08:23:40 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:36:49.185 08:23:40 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:36:49.185 08:23:40 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:36:49.185 08:23:40 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:36:49.185 08:23:40 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:49.442 08:23:40 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:36:49.442 08:23:40 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:36:49.442 08:23:40 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:36:49.442 08:23:40 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:36:49.442 08:23:40 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:49.442 08:23:40 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:49.442 08:23:40 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:36:49.700 08:23:41 keyring_linux -- keyring/linux.sh@25 -- # sn=478485249 00:36:49.700 08:23:41 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:36:49.700 08:23:41 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user 
:spdk-test:key0 00:36:49.700 08:23:41 keyring_linux -- keyring/linux.sh@26 -- # [[ 478485249 == \4\7\8\4\8\5\2\4\9 ]] 00:36:49.700 08:23:41 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 478485249 00:36:49.700 08:23:41 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:36:49.700 08:23:41 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:49.700 Running I/O for 1 seconds... 00:36:50.632 00:36:50.632 Latency(us) 00:36:50.632 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:50.632 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:36:50.632 nvme0n1 : 1.02 5061.02 19.77 0.00 0.00 25084.37 13010.11 42719.76 00:36:50.632 =================================================================================================================== 00:36:50.632 Total : 5061.02 19.77 0.00 0.00 25084.37 13010.11 42719.76 00:36:50.632 0 00:36:50.632 08:23:42 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:36:50.632 08:23:42 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:36:50.889 08:23:42 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:36:50.889 08:23:42 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:36:50.890 08:23:42 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:36:50.890 08:23:42 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:36:50.890 08:23:42 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:50.890 08:23:42 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:36:51.147 08:23:42 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:36:51.147 08:23:42 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:36:51.147 08:23:42 keyring_linux -- keyring/linux.sh@23 -- # return 00:36:51.147 08:23:42 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:51.147 08:23:42 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:36:51.147 08:23:42 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:51.147 08:23:42 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:36:51.147 08:23:42 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:51.147 08:23:42 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:36:51.147 08:23:42 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:51.147 08:23:42 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:51.147 08:23:42 keyring_linux -- 
keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:51.404 [2024-07-13 08:23:43.100409] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:36:51.404 [2024-07-13 08:23:43.101283] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x257b680 (107): Transport endpoint is not connected 00:36:51.404 [2024-07-13 08:23:43.102273] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x257b680 (9): Bad file descriptor 00:36:51.405 [2024-07-13 08:23:43.103271] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:36:51.405 [2024-07-13 08:23:43.103295] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:36:51.405 [2024-07-13 08:23:43.103311] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:36:51.405 request: 00:36:51.405 { 00:36:51.405 "name": "nvme0", 00:36:51.405 "trtype": "tcp", 00:36:51.405 "traddr": "127.0.0.1", 00:36:51.405 "adrfam": "ipv4", 00:36:51.405 "trsvcid": "4420", 00:36:51.405 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:51.405 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:51.405 "prchk_reftag": false, 00:36:51.405 "prchk_guard": false, 00:36:51.405 "hdgst": false, 00:36:51.405 "ddgst": false, 00:36:51.405 "psk": ":spdk-test:key1", 00:36:51.405 "method": "bdev_nvme_attach_controller", 00:36:51.405 "req_id": 1 00:36:51.405 } 00:36:51.405 Got JSON-RPC error response 00:36:51.405 response: 00:36:51.405 { 00:36:51.405 "code": -5, 00:36:51.405 "message": "Input/output error" 00:36:51.405 } 00:36:51.405 08:23:43 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:36:51.405 08:23:43 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:36:51.405 08:23:43 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:36:51.405 08:23:43 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:36:51.405 08:23:43 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:36:51.405 08:23:43 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:36:51.405 08:23:43 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:36:51.405 08:23:43 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:36:51.405 08:23:43 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:36:51.405 08:23:43 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:36:51.405 08:23:43 keyring_linux -- keyring/linux.sh@33 -- # sn=478485249 00:36:51.405 08:23:43 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 478485249 00:36:51.405 1 links removed 00:36:51.405 08:23:43 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:36:51.405 08:23:43 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:36:51.405 08:23:43 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:36:51.405 08:23:43 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:36:51.405 08:23:43 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:36:51.405 08:23:43 keyring_linux -- keyring/linux.sh@33 -- # sn=206035062 00:36:51.405 
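The attach with --psk :spdk-test:key1 just above is a deliberate negative test: the connection is torn down during setup (Transport endpoint is not connected, then Bad file descriptor) and the RPC returns code -5, Input/output error, after which the NOT wrapper inverts the exit status so that this failure is the passing outcome and cleanup unlinks both keys by serial. The pattern, reduced to a sketch with the rpc.py arguments as they appear in the log:

if /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
       bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
       -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1; then
    echo "attach with key1 unexpectedly succeeded" >&2
    exit 1   # NOT expects failure, so reaching here would fail the test
fi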
08:23:43 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 206035062 00:36:51.405 1 links removed 00:36:51.663 08:23:43 keyring_linux -- keyring/linux.sh@41 -- # killprocess 2142691 00:36:51.663 08:23:43 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 2142691 ']' 00:36:51.663 08:23:43 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 2142691 00:36:51.663 08:23:43 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:36:51.663 08:23:43 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:51.663 08:23:43 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2142691 00:36:51.663 08:23:43 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:36:51.663 08:23:43 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:36:51.663 08:23:43 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2142691' 00:36:51.663 killing process with pid 2142691 00:36:51.663 08:23:43 keyring_linux -- common/autotest_common.sh@967 -- # kill 2142691 00:36:51.663 Received shutdown signal, test time was about 1.000000 seconds 00:36:51.663 00:36:51.663 Latency(us) 00:36:51.663 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:51.663 =================================================================================================================== 00:36:51.664 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:51.664 08:23:43 keyring_linux -- common/autotest_common.sh@972 -- # wait 2142691 00:36:51.664 08:23:43 keyring_linux -- keyring/linux.sh@42 -- # killprocess 2142562 00:36:51.664 08:23:43 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 2142562 ']' 00:36:51.664 08:23:43 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 2142562 00:36:51.664 08:23:43 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:36:51.664 08:23:43 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:51.664 08:23:43 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 2142562 00:36:51.922 08:23:43 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:36:51.922 08:23:43 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:36:51.922 08:23:43 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 2142562' 00:36:51.922 killing process with pid 2142562 00:36:51.922 08:23:43 keyring_linux -- common/autotest_common.sh@967 -- # kill 2142562 00:36:51.922 08:23:43 keyring_linux -- common/autotest_common.sh@972 -- # wait 2142562 00:36:52.178 00:36:52.178 real 0m4.963s 00:36:52.178 user 0m9.303s 00:36:52.178 sys 0m1.534s 00:36:52.178 08:23:43 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:52.178 08:23:43 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:52.178 ************************************ 00:36:52.178 END TEST keyring_linux 00:36:52.178 ************************************ 00:36:52.178 08:23:43 -- common/autotest_common.sh@1142 -- # return 0 00:36:52.178 08:23:43 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:36:52.178 08:23:43 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:36:52.178 08:23:43 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:36:52.178 08:23:43 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:36:52.178 08:23:43 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:36:52.178 08:23:43 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:36:52.178 08:23:43 -- 
spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:36:52.178 08:23:43 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:36:52.178 08:23:43 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:36:52.178 08:23:43 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:36:52.178 08:23:43 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:36:52.178 08:23:43 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:36:52.178 08:23:43 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:36:52.178 08:23:43 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:36:52.178 08:23:43 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:36:52.178 08:23:43 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:36:52.178 08:23:43 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:36:52.178 08:23:43 -- common/autotest_common.sh@722 -- # xtrace_disable 00:36:52.178 08:23:43 -- common/autotest_common.sh@10 -- # set +x 00:36:52.178 08:23:43 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:36:52.178 08:23:43 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:36:52.178 08:23:43 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:36:52.178 08:23:43 -- common/autotest_common.sh@10 -- # set +x 00:36:54.076 INFO: APP EXITING 00:36:54.076 INFO: killing all VMs 00:36:54.076 INFO: killing vhost app 00:36:54.076 INFO: EXIT DONE 00:36:55.011 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:36:55.011 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:36:55.011 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:36:55.011 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:36:55.011 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:36:55.011 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:36:55.011 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:36:55.011 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:36:55.269 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:36:55.269 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:36:55.269 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:36:55.269 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:36:55.269 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:36:55.269 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:36:55.269 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:36:55.269 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:36:55.269 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:36:56.646 Cleaning 00:36:56.646 Removing: /var/run/dpdk/spdk0/config 00:36:56.646 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:36:56.646 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:36:56.646 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:36:56.646 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:36:56.646 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:36:56.646 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:36:56.646 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:36:56.646 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:36:56.646 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:36:56.646 Removing: /var/run/dpdk/spdk0/hugepage_info 00:36:56.646 Removing: /var/run/dpdk/spdk1/config 00:36:56.646 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:36:56.646 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:36:56.646 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:36:56.646 Removing: 
/var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:36:56.646 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:36:56.646 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:36:56.646 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:36:56.646 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:36:56.646 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:36:56.646 Removing: /var/run/dpdk/spdk1/hugepage_info 00:36:56.646 Removing: /var/run/dpdk/spdk1/mp_socket 00:36:56.646 Removing: /var/run/dpdk/spdk2/config 00:36:56.646 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:36:56.646 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:36:56.646 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:36:56.646 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:36:56.646 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:36:56.646 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:36:56.646 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:36:56.646 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:36:56.646 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:36:56.646 Removing: /var/run/dpdk/spdk2/hugepage_info 00:36:56.646 Removing: /var/run/dpdk/spdk3/config 00:36:56.646 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:36:56.646 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:36:56.646 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:36:56.646 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:36:56.646 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:36:56.646 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:36:56.646 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:36:56.646 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:36:56.646 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:36:56.646 Removing: /var/run/dpdk/spdk3/hugepage_info 00:36:56.646 Removing: /var/run/dpdk/spdk4/config 00:36:56.646 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:36:56.646 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:36:56.646 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:36:56.646 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:36:56.646 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:36:56.646 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:36:56.646 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:36:56.646 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:36:56.646 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:36:56.646 Removing: /var/run/dpdk/spdk4/hugepage_info 00:36:56.646 Removing: /dev/shm/bdev_svc_trace.1 00:36:56.646 Removing: /dev/shm/nvmf_trace.0 00:36:56.646 Removing: /dev/shm/spdk_tgt_trace.pid1823209 00:36:56.646 Removing: /var/run/dpdk/spdk0 00:36:56.646 Removing: /var/run/dpdk/spdk1 00:36:56.646 Removing: /var/run/dpdk/spdk2 00:36:56.646 Removing: /var/run/dpdk/spdk3 00:36:56.646 Removing: /var/run/dpdk/spdk4 00:36:56.646 Removing: /var/run/dpdk/spdk_pid1821660 00:36:56.646 Removing: /var/run/dpdk/spdk_pid1822395 00:36:56.646 Removing: /var/run/dpdk/spdk_pid1823209 00:36:56.646 Removing: /var/run/dpdk/spdk_pid1823645 00:36:56.646 Removing: /var/run/dpdk/spdk_pid1824331 00:36:56.646 Removing: /var/run/dpdk/spdk_pid1824471 00:36:56.646 Removing: /var/run/dpdk/spdk_pid1825190 00:36:56.646 Removing: /var/run/dpdk/spdk_pid1825205 00:36:56.646 Removing: /var/run/dpdk/spdk_pid1825443 00:36:56.646 Removing: /var/run/dpdk/spdk_pid1826634 00:36:56.646 Removing: 
/var/run/dpdk/spdk_pid1827543 00:36:56.646 Removing: /var/run/dpdk/spdk_pid1827819 00:36:56.646 Removing: /var/run/dpdk/spdk_pid1828043 00:36:56.646 Removing: /var/run/dpdk/spdk_pid1828245 00:36:56.646 Removing: /var/run/dpdk/spdk_pid1828433 00:36:56.646 Removing: /var/run/dpdk/spdk_pid1828590 00:36:56.646 Removing: /var/run/dpdk/spdk_pid1828748 00:36:56.646 Removing: /var/run/dpdk/spdk_pid1828928 00:36:56.646 Removing: /var/run/dpdk/spdk_pid1829242 00:36:56.646 Removing: /var/run/dpdk/spdk_pid1831588 00:36:56.646 Removing: /var/run/dpdk/spdk_pid1831754 00:36:56.646 Removing: /var/run/dpdk/spdk_pid1831916 00:36:56.646 Removing: /var/run/dpdk/spdk_pid1831992 00:36:56.646 Removing: /var/run/dpdk/spdk_pid1832346 00:36:56.646 Removing: /var/run/dpdk/spdk_pid1832360 00:36:56.646 Removing: /var/run/dpdk/spdk_pid1832782 00:36:56.646 Removing: /var/run/dpdk/spdk_pid1832794 00:36:56.646 Removing: /var/run/dpdk/spdk_pid1832993 00:36:56.646 Removing: /var/run/dpdk/spdk_pid1833092 00:36:56.646 Removing: /var/run/dpdk/spdk_pid1833254 00:36:56.646 Removing: /var/run/dpdk/spdk_pid1833263 00:36:56.646 Removing: /var/run/dpdk/spdk_pid1833752 00:36:56.646 Removing: /var/run/dpdk/spdk_pid1833907 00:36:56.646 Removing: /var/run/dpdk/spdk_pid1834098 00:36:56.646 Removing: /var/run/dpdk/spdk_pid1834266 00:36:56.646 Removing: /var/run/dpdk/spdk_pid1834295 00:36:56.646 Removing: /var/run/dpdk/spdk_pid1834480 00:36:56.646 Removing: /var/run/dpdk/spdk_pid1834637 00:36:56.646 Removing: /var/run/dpdk/spdk_pid1834790 00:36:56.646 Removing: /var/run/dpdk/spdk_pid1835076 00:36:56.646 Removing: /var/run/dpdk/spdk_pid1835239 00:36:56.646 Removing: /var/run/dpdk/spdk_pid1835396 00:36:56.646 Removing: /var/run/dpdk/spdk_pid1835582 00:36:56.646 Removing: /var/run/dpdk/spdk_pid1835825 00:36:56.646 Removing: /var/run/dpdk/spdk_pid1835984 00:36:56.646 Removing: /var/run/dpdk/spdk_pid1836135 00:36:56.646 Removing: /var/run/dpdk/spdk_pid1836413 00:36:56.646 Removing: /var/run/dpdk/spdk_pid1836573 00:36:56.646 Removing: /var/run/dpdk/spdk_pid1836732 00:36:56.646 Removing: /var/run/dpdk/spdk_pid1836883 00:36:56.646 Removing: /var/run/dpdk/spdk_pid1837164 00:36:56.646 Removing: /var/run/dpdk/spdk_pid1837318 00:36:56.646 Removing: /var/run/dpdk/spdk_pid1837471 00:36:56.646 Removing: /var/run/dpdk/spdk_pid1837724 00:36:56.646 Removing: /var/run/dpdk/spdk_pid1837912 00:36:56.646 Removing: /var/run/dpdk/spdk_pid1838070 00:36:56.646 Removing: /var/run/dpdk/spdk_pid1838224 00:36:56.646 Removing: /var/run/dpdk/spdk_pid1838416 00:36:56.646 Removing: /var/run/dpdk/spdk_pid1838622 00:36:56.646 Removing: /var/run/dpdk/spdk_pid1840773 00:36:56.646 Removing: /var/run/dpdk/spdk_pid1893946 00:36:56.646 Removing: /var/run/dpdk/spdk_pid1896439 00:36:56.646 Removing: /var/run/dpdk/spdk_pid1903874 00:36:56.646 Removing: /var/run/dpdk/spdk_pid1907117 00:36:56.646 Removing: /var/run/dpdk/spdk_pid1909506 00:36:56.646 Removing: /var/run/dpdk/spdk_pid1909911 00:36:56.646 Removing: /var/run/dpdk/spdk_pid1913874 00:36:56.646 Removing: /var/run/dpdk/spdk_pid1917585 00:36:56.646 Removing: /var/run/dpdk/spdk_pid1917669 00:36:56.905 Removing: /var/run/dpdk/spdk_pid1918242 00:36:56.905 Removing: /var/run/dpdk/spdk_pid1918898 00:36:56.905 Removing: /var/run/dpdk/spdk_pid1919455 00:36:56.905 Removing: /var/run/dpdk/spdk_pid1919965 00:36:56.905 Removing: /var/run/dpdk/spdk_pid1919969 00:36:56.905 Removing: /var/run/dpdk/spdk_pid1920226 00:36:56.905 Removing: /var/run/dpdk/spdk_pid1920237 00:36:56.905 Removing: /var/run/dpdk/spdk_pid1920364 00:36:56.905 Removing: 
/var/run/dpdk/spdk_pid1920902 00:36:56.905 Removing: /var/run/dpdk/spdk_pid1921553 00:36:56.905 Removing: /var/run/dpdk/spdk_pid1922208 00:36:56.905 Removing: /var/run/dpdk/spdk_pid1922615 00:36:56.905 Removing: /var/run/dpdk/spdk_pid1922618 00:36:56.905 Removing: /var/run/dpdk/spdk_pid1922763 00:36:56.905 Removing: /var/run/dpdk/spdk_pid1923639 00:36:56.905 Removing: /var/run/dpdk/spdk_pid1924454 00:36:56.905 Removing: /var/run/dpdk/spdk_pid1929871 00:36:56.905 Removing: /var/run/dpdk/spdk_pid1930093 00:36:56.905 Removing: /var/run/dpdk/spdk_pid1933212 00:36:56.905 Removing: /var/run/dpdk/spdk_pid1936910 00:36:56.905 Removing: /var/run/dpdk/spdk_pid1938983 00:36:56.905 Removing: /var/run/dpdk/spdk_pid1945356 00:36:56.905 Removing: /var/run/dpdk/spdk_pid1950426 00:36:56.905 Removing: /var/run/dpdk/spdk_pid1951731 00:36:56.905 Removing: /var/run/dpdk/spdk_pid1952396 00:36:56.905 Removing: /var/run/dpdk/spdk_pid1962587 00:36:56.905 Removing: /var/run/dpdk/spdk_pid1964680 00:36:56.905 Removing: /var/run/dpdk/spdk_pid1989837 00:36:56.905 Removing: /var/run/dpdk/spdk_pid1992848 00:36:56.905 Removing: /var/run/dpdk/spdk_pid1994417 00:36:56.905 Removing: /var/run/dpdk/spdk_pid1995727 00:36:56.905 Removing: /var/run/dpdk/spdk_pid1995862 00:36:56.905 Removing: /var/run/dpdk/spdk_pid1996001 00:36:56.905 Removing: /var/run/dpdk/spdk_pid1996017 00:36:56.905 Removing: /var/run/dpdk/spdk_pid1996451 00:36:56.905 Removing: /var/run/dpdk/spdk_pid1997762 00:36:56.905 Removing: /var/run/dpdk/spdk_pid1998369 00:36:56.905 Removing: /var/run/dpdk/spdk_pid1998789 00:36:56.905 Removing: /var/run/dpdk/spdk_pid2000403 00:36:56.905 Removing: /var/run/dpdk/spdk_pid2000713 00:36:56.905 Removing: /var/run/dpdk/spdk_pid2001268 00:36:56.905 Removing: /var/run/dpdk/spdk_pid2003652 00:36:56.905 Removing: /var/run/dpdk/spdk_pid2006908 00:36:56.905 Removing: /var/run/dpdk/spdk_pid2010437 00:36:56.905 Removing: /var/run/dpdk/spdk_pid2033953 00:36:56.905 Removing: /var/run/dpdk/spdk_pid2036616 00:36:56.905 Removing: /var/run/dpdk/spdk_pid2040481 00:36:56.905 Removing: /var/run/dpdk/spdk_pid2041425 00:36:56.905 Removing: /var/run/dpdk/spdk_pid2042506 00:36:56.905 Removing: /var/run/dpdk/spdk_pid2045040 00:36:56.905 Removing: /var/run/dpdk/spdk_pid2047289 00:36:56.905 Removing: /var/run/dpdk/spdk_pid2051478 00:36:56.905 Removing: /var/run/dpdk/spdk_pid2051488 00:36:56.905 Removing: /var/run/dpdk/spdk_pid2054599 00:36:56.905 Removing: /var/run/dpdk/spdk_pid2054997 00:36:56.905 Removing: /var/run/dpdk/spdk_pid2055264 00:36:56.905 Removing: /var/run/dpdk/spdk_pid2055528 00:36:56.905 Removing: /var/run/dpdk/spdk_pid2055541 00:36:56.905 Removing: /var/run/dpdk/spdk_pid2056608 00:36:56.905 Removing: /var/run/dpdk/spdk_pid2057792 00:36:56.905 Removing: /var/run/dpdk/spdk_pid2058970 00:36:56.905 Removing: /var/run/dpdk/spdk_pid2060159 00:36:56.905 Removing: /var/run/dpdk/spdk_pid2061443 00:36:56.905 Removing: /var/run/dpdk/spdk_pid2062637 00:36:56.905 Removing: /var/run/dpdk/spdk_pid2066325 00:36:56.905 Removing: /var/run/dpdk/spdk_pid2066767 00:36:56.905 Removing: /var/run/dpdk/spdk_pid2068046 00:36:56.905 Removing: /var/run/dpdk/spdk_pid2068783 00:36:56.905 Removing: /var/run/dpdk/spdk_pid2072371 00:36:56.905 Removing: /var/run/dpdk/spdk_pid2074346 00:36:56.905 Removing: /var/run/dpdk/spdk_pid2077748 00:36:56.905 Removing: /var/run/dpdk/spdk_pid2081011 00:36:56.905 Removing: /var/run/dpdk/spdk_pid2087799 00:36:56.905 Removing: /var/run/dpdk/spdk_pid2092235 00:36:56.905 Removing: /var/run/dpdk/spdk_pid2092238 00:36:56.905 Removing: 
/var/run/dpdk/spdk_pid2104429 00:36:56.905 Removing: /var/run/dpdk/spdk_pid2104833 00:36:56.905 Removing: /var/run/dpdk/spdk_pid2105240 00:36:56.905 Removing: /var/run/dpdk/spdk_pid2105769 00:36:56.905 Removing: /var/run/dpdk/spdk_pid2106342 00:36:56.905 Removing: /var/run/dpdk/spdk_pid2106746 00:36:56.905 Removing: /var/run/dpdk/spdk_pid2107157 00:36:56.905 Removing: /var/run/dpdk/spdk_pid2107566 00:36:56.905 Removing: /var/run/dpdk/spdk_pid2110065 00:36:56.905 Removing: /var/run/dpdk/spdk_pid2110321 00:36:56.905 Removing: /var/run/dpdk/spdk_pid2113985 00:36:56.905 Removing: /var/run/dpdk/spdk_pid2114160 00:36:56.905 Removing: /var/run/dpdk/spdk_pid2115876 00:36:56.905 Removing: /var/run/dpdk/spdk_pid2121406 00:36:56.905 Removing: /var/run/dpdk/spdk_pid2121418 00:36:56.905 Removing: /var/run/dpdk/spdk_pid2124214 00:36:56.905 Removing: /var/run/dpdk/spdk_pid2125643 00:36:56.905 Removing: /var/run/dpdk/spdk_pid2127097 00:36:56.905 Removing: /var/run/dpdk/spdk_pid2127843 00:36:56.905 Removing: /var/run/dpdk/spdk_pid2129240 00:36:56.905 Removing: /var/run/dpdk/spdk_pid2130031 00:36:56.905 Removing: /var/run/dpdk/spdk_pid2135299 00:36:56.905 Removing: /var/run/dpdk/spdk_pid2135665 00:36:56.905 Removing: /var/run/dpdk/spdk_pid2136048 00:36:56.905 Removing: /var/run/dpdk/spdk_pid2137602 00:36:56.905 Removing: /var/run/dpdk/spdk_pid2137889 00:36:57.164 Removing: /var/run/dpdk/spdk_pid2138283 00:36:57.164 Removing: /var/run/dpdk/spdk_pid2140720 00:36:57.164 Removing: /var/run/dpdk/spdk_pid2140730 00:36:57.164 Removing: /var/run/dpdk/spdk_pid2142199 00:36:57.164 Removing: /var/run/dpdk/spdk_pid2142562 00:36:57.164 Removing: /var/run/dpdk/spdk_pid2142691 00:36:57.164 Clean 00:36:57.164 08:23:48 -- common/autotest_common.sh@1451 -- # return 0 00:36:57.164 08:23:48 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:36:57.164 08:23:48 -- common/autotest_common.sh@728 -- # xtrace_disable 00:36:57.164 08:23:48 -- common/autotest_common.sh@10 -- # set +x 00:36:57.164 08:23:48 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:36:57.164 08:23:48 -- common/autotest_common.sh@728 -- # xtrace_disable 00:36:57.164 08:23:48 -- common/autotest_common.sh@10 -- # set +x 00:36:57.164 08:23:48 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:36:57.164 08:23:48 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:36:57.164 08:23:48 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:36:57.164 08:23:48 -- spdk/autotest.sh@391 -- # hash lcov 00:36:57.164 08:23:48 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:36:57.164 08:23:48 -- spdk/autotest.sh@393 -- # hostname 00:36:57.164 08:23:48 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-11 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:36:57.423 geninfo: WARNING: invalid characters removed from testname! 
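From here the run switches to coverage post-processing: autotest.sh@393 captures a per-host tracefile (the geninfo warning above comes from that capture), and the steps that follow merge it with the baseline and repeatedly filter it. Condensed to its essentials, with the long --rc flag set collapsed into a variable for readability (the log shows every invocation in full, including further filters for vmd, spdk_lspci, and spdk_top):

LCOV_OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external -q"
OUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
lcov $LCOV_OPTS -a $OUT/cov_base.info -a $OUT/cov_test.info -o $OUT/cov_total.info  # merge baseline + test
lcov $LCOV_OPTS -r $OUT/cov_total.info '*/dpdk/*' -o $OUT/cov_total.info            # drop vendored DPDK
lcov $LCOV_OPTS -r $OUT/cov_total.info '/usr/*'   -o $OUT/cov_total.info            # drop system headers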
00:37:29.532 08:24:16 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:29.532 08:24:20 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:32.057 08:24:23 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:35.332 08:24:26 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:37.857 08:24:29 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:41.141 08:24:32 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:43.667 08:24:35 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:37:43.667 08:24:35 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:43.667 08:24:35 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:37:43.667 08:24:35 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:43.667 08:24:35 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:43.667 08:24:35 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:43.667 08:24:35 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:43.667 08:24:35 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:43.667 08:24:35 -- paths/export.sh@5 -- $ export PATH 00:37:43.667 08:24:35 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:43.667 08:24:35 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:37:43.667 08:24:35 -- common/autobuild_common.sh@444 -- $ date +%s 00:37:43.667 08:24:35 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1720851875.XXXXXX 00:37:43.667 08:24:35 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1720851875.FgKFA2 00:37:43.667 08:24:35 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:37:43.667 08:24:35 -- common/autobuild_common.sh@450 -- $ '[' -n v22.11.4 ']' 00:37:43.667 08:24:35 -- common/autobuild_common.sh@451 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:37:43.667 08:24:35 -- common/autobuild_common.sh@451 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:37:43.667 08:24:35 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:37:43.667 08:24:35 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:37:43.667 08:24:35 -- common/autobuild_common.sh@460 -- $ get_config_params 00:37:43.667 08:24:35 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:37:43.667 08:24:35 -- common/autotest_common.sh@10 -- $ set +x 00:37:43.667 08:24:35 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:37:43.667 08:24:35 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:37:43.667 08:24:35 -- pm/common@17 -- $ local monitor 00:37:43.667 08:24:35 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:37:43.667 08:24:35 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:37:43.667 08:24:35 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:37:43.667 
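start_monitor_resources launches one collector per tracked resource, each keyed to a date +%s epoch so the resulting power/ logs can be correlated afterwards; the Redirecting lines below show where each one lands. The naming scheme, in short (the epoch is taken once here for brevity, whereas the log stamps each collector separately; all four calls resolved to 1720851875 in this run):

ts=$(date +%s)
out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
for mon in collect-cpu-load collect-vmstat collect-cpu-temp; do
    ./scripts/perf/pm/$mon -d "$out/power" -l -p "monitor.autopackage.sh.$ts" &
done
sudo -E ./scripts/perf/pm/collect-bmc-pm -d "$out/power" -l -p "monitor.autopackage.sh.$ts" &  # BMC access needs root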
08:24:35 -- pm/common@21 -- $ date +%s 00:37:43.667 08:24:35 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:37:43.667 08:24:35 -- pm/common@21 -- $ date +%s 00:37:43.667 08:24:35 -- pm/common@25 -- $ sleep 1 00:37:43.667 08:24:35 -- pm/common@21 -- $ date +%s 00:37:43.667 08:24:35 -- pm/common@21 -- $ date +%s 00:37:43.667 08:24:35 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720851875 00:37:43.667 08:24:35 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720851875 00:37:43.667 08:24:35 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720851875 00:37:43.667 08:24:35 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720851875 00:37:43.667 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720851875_collect-vmstat.pm.log 00:37:43.667 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720851875_collect-cpu-load.pm.log 00:37:43.668 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720851875_collect-cpu-temp.pm.log 00:37:43.668 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720851875_collect-bmc-pm.bmc.pm.log 00:37:44.604 08:24:36 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:37:44.604 08:24:36 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j48 00:37:44.604 08:24:36 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:37:44.604 08:24:36 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:37:44.604 08:24:36 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:37:44.604 08:24:36 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:37:44.604 08:24:36 -- spdk/autopackage.sh@19 -- $ timing_finish 00:37:44.604 08:24:36 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:37:44.604 08:24:36 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:37:44.604 08:24:36 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:37:44.863 08:24:36 -- spdk/autopackage.sh@20 -- $ exit 0 00:37:44.863 08:24:36 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:37:44.863 08:24:36 -- pm/common@29 -- $ signal_monitor_resources TERM 00:37:44.863 08:24:36 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:37:44.863 08:24:36 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:37:44.863 08:24:36 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:37:44.863 08:24:36 -- pm/common@44 -- $ pid=2154559 00:37:44.863 08:24:36 -- pm/common@50 -- $ kill -TERM 2154559 00:37:44.863 08:24:36 -- pm/common@42 -- $ for monitor in 
"${MONITOR_RESOURCES[@]}" 00:37:44.863 08:24:36 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:37:44.863 08:24:36 -- pm/common@44 -- $ pid=2154561 00:37:44.863 08:24:36 -- pm/common@50 -- $ kill -TERM 2154561 00:37:44.863 08:24:36 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:37:44.863 08:24:36 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:37:44.863 08:24:36 -- pm/common@44 -- $ pid=2154563 00:37:44.863 08:24:36 -- pm/common@50 -- $ kill -TERM 2154563 00:37:44.863 08:24:36 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:37:44.863 08:24:36 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:37:44.863 08:24:36 -- pm/common@44 -- $ pid=2154595 00:37:44.863 08:24:36 -- pm/common@50 -- $ sudo -E kill -TERM 2154595 00:37:44.863 + [[ -n 1717680 ]] 00:37:44.863 + sudo kill 1717680 00:37:44.876 [Pipeline] } 00:37:44.895 [Pipeline] // stage 00:37:44.900 [Pipeline] } 00:37:44.917 [Pipeline] // timeout 00:37:44.922 [Pipeline] } 00:37:44.940 [Pipeline] // catchError 00:37:44.945 [Pipeline] } 00:37:44.962 [Pipeline] // wrap 00:37:44.968 [Pipeline] } 00:37:44.983 [Pipeline] // catchError 00:37:44.992 [Pipeline] stage 00:37:44.994 [Pipeline] { (Epilogue) 00:37:45.008 [Pipeline] catchError 00:37:45.010 [Pipeline] { 00:37:45.023 [Pipeline] echo 00:37:45.025 Cleanup processes 00:37:45.031 [Pipeline] sh 00:37:45.321 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:37:45.321 2154694 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:37:45.321 2154824 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:37:45.335 [Pipeline] sh 00:37:45.619 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:37:45.619 ++ grep -v 'sudo pgrep' 00:37:45.619 ++ awk '{print $1}' 00:37:45.619 + sudo kill -9 2154694 00:37:45.631 [Pipeline] sh 00:37:45.915 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:37:55.908 [Pipeline] sh 00:37:56.193 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:37:56.193 Artifacts sizes are good 00:37:56.208 [Pipeline] archiveArtifacts 00:37:56.214 Archiving artifacts 00:37:56.458 [Pipeline] sh 00:37:56.739 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:37:56.754 [Pipeline] cleanWs 00:37:56.763 [WS-CLEANUP] Deleting project workspace... 00:37:56.763 [WS-CLEANUP] Deferred wipeout is used... 00:37:56.770 [WS-CLEANUP] done 00:37:56.772 [Pipeline] } 00:37:56.791 [Pipeline] // catchError 00:37:56.803 [Pipeline] sh 00:37:57.085 + logger -p user.info -t JENKINS-CI 00:37:57.094 [Pipeline] } 00:37:57.108 [Pipeline] // stage 00:37:57.113 [Pipeline] } 00:37:57.129 [Pipeline] // node 00:37:57.135 [Pipeline] End of Pipeline 00:37:57.170 Finished: SUCCESS